Sep 29 09:37:42 crc systemd[1]: Starting Kubernetes Kubelet...
Sep 29 09:37:43 crc restorecon[4659]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c97,c980
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 
09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 29 09:37:43 crc 
restorecon[4659]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 
09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 29 09:37:43 crc restorecon[4659]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 
09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc 
restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 29 09:37:43 crc restorecon[4659]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 29 09:37:43 crc restorecon[4659]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Sep 29 09:37:44 crc kubenswrapper[4991]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 29 09:37:44 crc kubenswrapper[4991]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Sep 29 09:37:44 crc kubenswrapper[4991]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 29 09:37:44 crc kubenswrapper[4991]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
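Up to the "Relabeled /var/usrlocal/bin/kubenswrapper" record above, the journal is dominated by restorecon reporting, for thousands of paths under /var/lib/kubelet, that it left admin-customized SELinux contexts in place rather than resetting them (container_file_t with per-pod MCS category pairs such as s0:c7,c13). That flood is easier to act on as a summary than read record by record. What follows is a minimal, illustrative sketch, not part of the log: it assumes the journal has been saved to a plain-text file (the kubelet.log filename is a placeholder, nothing the log itself provides) and tallies the "not reset as customized by admin" records per pod UID and per target context.

    #!/usr/bin/env python3
    """Summarize restorecon 'not reset as customized by admin' records.

    Sketch only: kubelet.log is a placeholder for a saved journal dump.
    """
    import re
    import sys
    from collections import Counter

    # Matches records like:
    #   ... restorecon[4659]: /var/lib/kubelet/pods/<uid>/... not reset as
    #   customized by admin to system_u:object_r:container_file_t:s0:c7,c13
    # \s+ between words tolerates the hard line-wrapping in the dump.
    RECORD = re.compile(
        r"restorecon\[\d+\]:\s+/var/lib/kubelet/pods/([0-9a-f-]+)\S*\s+"
        r"not\s+reset\s+as\s+customized\s+by\s+admin\s+to\s+(\S+)"
    )

    def main(path: str) -> None:
        per_pod = Counter()
        per_context = Counter()
        with open(path, encoding="utf-8") as fh:
            # Undo hard wrapping so records split across lines rejoin;
            # best-effort, records wrapped mid-token may still be missed.
            text = fh.read().replace("\n", " ")
        for pod_uid, context in RECORD.findall(text):
            per_pod[pod_uid] += 1
            per_context[context] += 1
        print("records per pod UID:")
        for uid, n in per_pod.most_common():
            print(f"  {uid}: {n}")
        print("records per target SELinux context:")
        for ctx, n in per_context.most_common():
            print(f"  {ctx}: {n}")

    if __name__ == "__main__":
        main(sys.argv[1] if len(sys.argv) > 1 else "kubelet.log")

A per-pod count makes it obvious that a handful of catalog pods (for example 5225d0e4-402f-4861-b410-819f433b1803 and 1d611f23-29be-4491-8495-bee1670e935f above) account for most of the volume. The kubelet's own deprecation warnings, which begin just above, continue below.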
Sep 29 09:37:44 crc kubenswrapper[4991]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 29 09:37:44 crc kubenswrapper[4991]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.656609 4991 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.664926 4991 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.664999 4991 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665008 4991 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665017 4991 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665025 4991 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665034 4991 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665043 4991 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665055 4991 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665065 4991 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665073 4991 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665080 4991 feature_gate.go:330] unrecognized feature gate: PinnedImages
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665087 4991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665094 4991 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665100 4991 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665106 4991 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665113 4991 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665119 4991 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665125 4991 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665131 4991 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665137 4991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665143 4991 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665149 4991 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665156 4991 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665162 4991 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665180 4991 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665189 4991 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665198 4991 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665205 4991 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665212 4991 feature_gate.go:330] unrecognized feature gate: OVNObservability
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665218 4991 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665225 4991 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665231 4991 feature_gate.go:330] unrecognized feature gate: SignatureStores
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665237 4991 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665244 4991 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665272 4991 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665283 4991 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665291 4991 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665298 4991 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665305 4991 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665311 4991 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665318 4991 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665325 4991 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665333 4991 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665340 4991 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665347 4991 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665355 4991 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665363 4991 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665370 4991 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665377 4991 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665383 4991 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665389 4991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665396 4991 feature_gate.go:330] unrecognized feature
gate: IngressControllerDynamicConfigurationManager Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665402 4991 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665409 4991 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665416 4991 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665422 4991 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665429 4991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665435 4991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665443 4991 feature_gate.go:330] unrecognized feature gate: Example Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665450 4991 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665457 4991 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665464 4991 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665470 4991 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665477 4991 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665484 4991 feature_gate.go:330] unrecognized feature gate: NewOLM Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665492 4991 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665500 4991 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665507 4991 feature_gate.go:330] unrecognized feature gate: PlatformOperators Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665516 4991 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
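[Editor's note] Every feature_gate.go:330 line above is the kubelet rejecting a cluster-level OpenShift gate name it does not know; each unknown gate gets one warning, and the whole set is replayed every time the gate map is re-parsed, which is why near-identical blocks recur below. A throwaway triage sketch that deduplicates the gate names from journal text on stdin:

```python
import re
import sys
from collections import Counter

# Matches the warning emitted throughout this log, e.g.
#   feature_gate.go:330] unrecognized feature gate: OnClusterBuild
PATTERN = re.compile(r"unrecognized feature gate: (\S+)")

def main() -> None:
    counts = Counter(PATTERN.findall(sys.stdin.read()))
    for gate, n in counts.most_common():
        print(f"{n:3d}  {gate}")

if __name__ == "__main__":
    main()
```

Typical use, assuming the unit is kubelet.service: `journalctl -u kubelet | python3 gate_warnings.py`.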
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665525 4991 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.665532 4991 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666705 4991 flags.go:64] FLAG: --address="0.0.0.0" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666733 4991 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666751 4991 flags.go:64] FLAG: --anonymous-auth="true" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666762 4991 flags.go:64] FLAG: --application-metrics-count-limit="100" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666776 4991 flags.go:64] FLAG: --authentication-token-webhook="false" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666784 4991 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666794 4991 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666804 4991 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666813 4991 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666822 4991 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666830 4991 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666839 4991 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666847 4991 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666855 4991 flags.go:64] FLAG: --cgroup-root="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666863 4991 flags.go:64] FLAG: --cgroups-per-qos="true" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666872 4991 flags.go:64] FLAG: --client-ca-file="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666879 4991 flags.go:64] FLAG: --cloud-config="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666886 4991 flags.go:64] FLAG: --cloud-provider="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666894 4991 flags.go:64] FLAG: --cluster-dns="[]" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666903 4991 flags.go:64] FLAG: --cluster-domain="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666911 4991 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666919 4991 flags.go:64] FLAG: --config-dir="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666926 4991 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666935 4991 flags.go:64] FLAG: --container-log-max-files="5" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666946 4991 flags.go:64] FLAG: --container-log-max-size="10Mi" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666980 4991 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.666989 4991 flags.go:64] FLAG: 
--containerd="/run/containerd/containerd.sock" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667002 4991 flags.go:64] FLAG: --containerd-namespace="k8s.io" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667010 4991 flags.go:64] FLAG: --contention-profiling="false" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667018 4991 flags.go:64] FLAG: --cpu-cfs-quota="true" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667027 4991 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667035 4991 flags.go:64] FLAG: --cpu-manager-policy="none" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667043 4991 flags.go:64] FLAG: --cpu-manager-policy-options="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667053 4991 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667061 4991 flags.go:64] FLAG: --enable-controller-attach-detach="true" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667069 4991 flags.go:64] FLAG: --enable-debugging-handlers="true" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667076 4991 flags.go:64] FLAG: --enable-load-reader="false" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667086 4991 flags.go:64] FLAG: --enable-server="true" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667094 4991 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667105 4991 flags.go:64] FLAG: --event-burst="100" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667115 4991 flags.go:64] FLAG: --event-qps="50" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667123 4991 flags.go:64] FLAG: --event-storage-age-limit="default=0" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667130 4991 flags.go:64] FLAG: --event-storage-event-limit="default=0" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667138 4991 flags.go:64] FLAG: --eviction-hard="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667148 4991 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667156 4991 flags.go:64] FLAG: --eviction-minimum-reclaim="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667164 4991 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667172 4991 flags.go:64] FLAG: --eviction-soft="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667181 4991 flags.go:64] FLAG: --eviction-soft-grace-period="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667189 4991 flags.go:64] FLAG: --exit-on-lock-contention="false" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667198 4991 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667207 4991 flags.go:64] FLAG: --experimental-mounter-path="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667216 4991 flags.go:64] FLAG: --fail-cgroupv1="false" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667224 4991 flags.go:64] FLAG: --fail-swap-on="true" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667233 4991 flags.go:64] FLAG: --feature-gates="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667243 4991 flags.go:64] FLAG: --file-check-frequency="20s" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667252 4991 flags.go:64] FLAG: 
--global-housekeeping-interval="1m0s" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667261 4991 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667269 4991 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667282 4991 flags.go:64] FLAG: --healthz-port="10248" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667290 4991 flags.go:64] FLAG: --help="false" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667299 4991 flags.go:64] FLAG: --hostname-override="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667308 4991 flags.go:64] FLAG: --housekeeping-interval="10s" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667317 4991 flags.go:64] FLAG: --http-check-frequency="20s" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667326 4991 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667334 4991 flags.go:64] FLAG: --image-credential-provider-config="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667341 4991 flags.go:64] FLAG: --image-gc-high-threshold="85" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667349 4991 flags.go:64] FLAG: --image-gc-low-threshold="80" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667357 4991 flags.go:64] FLAG: --image-service-endpoint="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667365 4991 flags.go:64] FLAG: --kernel-memcg-notification="false" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667372 4991 flags.go:64] FLAG: --kube-api-burst="100" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667381 4991 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667390 4991 flags.go:64] FLAG: --kube-api-qps="50" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667401 4991 flags.go:64] FLAG: --kube-reserved="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667411 4991 flags.go:64] FLAG: --kube-reserved-cgroup="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667420 4991 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667429 4991 flags.go:64] FLAG: --kubelet-cgroups="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667437 4991 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667445 4991 flags.go:64] FLAG: --lock-file="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667453 4991 flags.go:64] FLAG: --log-cadvisor-usage="false" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667462 4991 flags.go:64] FLAG: --log-flush-frequency="5s" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667470 4991 flags.go:64] FLAG: --log-json-info-buffer-size="0" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667483 4991 flags.go:64] FLAG: --log-json-split-stream="false" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667491 4991 flags.go:64] FLAG: --log-text-info-buffer-size="0" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667498 4991 flags.go:64] FLAG: --log-text-split-stream="false" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667506 4991 flags.go:64] FLAG: --logging-format="text" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667514 4991 flags.go:64] FLAG: 
--machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667524 4991 flags.go:64] FLAG: --make-iptables-util-chains="true" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667532 4991 flags.go:64] FLAG: --manifest-url="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667540 4991 flags.go:64] FLAG: --manifest-url-header="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667551 4991 flags.go:64] FLAG: --max-housekeeping-interval="15s" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667559 4991 flags.go:64] FLAG: --max-open-files="1000000" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667570 4991 flags.go:64] FLAG: --max-pods="110" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667578 4991 flags.go:64] FLAG: --maximum-dead-containers="-1" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667586 4991 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667595 4991 flags.go:64] FLAG: --memory-manager-policy="None" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667603 4991 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667611 4991 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667619 4991 flags.go:64] FLAG: --node-ip="192.168.126.11" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667627 4991 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667648 4991 flags.go:64] FLAG: --node-status-max-images="50" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667656 4991 flags.go:64] FLAG: --node-status-update-frequency="10s" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667665 4991 flags.go:64] FLAG: --oom-score-adj="-999" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667673 4991 flags.go:64] FLAG: --pod-cidr="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667681 4991 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667697 4991 flags.go:64] FLAG: --pod-manifest-path="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667705 4991 flags.go:64] FLAG: --pod-max-pids="-1" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667714 4991 flags.go:64] FLAG: --pods-per-core="0" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667722 4991 flags.go:64] FLAG: --port="10250" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667732 4991 flags.go:64] FLAG: --protect-kernel-defaults="false" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667740 4991 flags.go:64] FLAG: --provider-id="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667748 4991 flags.go:64] FLAG: --qos-reserved="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667755 4991 flags.go:64] FLAG: --read-only-port="10255" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667764 4991 flags.go:64] FLAG: --register-node="true" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667771 4991 flags.go:64] FLAG: --register-schedulable="true" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667779 4991 flags.go:64] FLAG: 
--register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667793 4991 flags.go:64] FLAG: --registry-burst="10" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667800 4991 flags.go:64] FLAG: --registry-qps="5" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667809 4991 flags.go:64] FLAG: --reserved-cpus="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667816 4991 flags.go:64] FLAG: --reserved-memory="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667826 4991 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667834 4991 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667842 4991 flags.go:64] FLAG: --rotate-certificates="false" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667849 4991 flags.go:64] FLAG: --rotate-server-certificates="false" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667857 4991 flags.go:64] FLAG: --runonce="false" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667865 4991 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667873 4991 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667881 4991 flags.go:64] FLAG: --seccomp-default="false" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667890 4991 flags.go:64] FLAG: --serialize-image-pulls="true" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667897 4991 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667906 4991 flags.go:64] FLAG: --storage-driver-db="cadvisor" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667914 4991 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667922 4991 flags.go:64] FLAG: --storage-driver-password="root" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667930 4991 flags.go:64] FLAG: --storage-driver-secure="false" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667939 4991 flags.go:64] FLAG: --storage-driver-table="stats" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667970 4991 flags.go:64] FLAG: --storage-driver-user="root" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667979 4991 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667987 4991 flags.go:64] FLAG: --sync-frequency="1m0s" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.667995 4991 flags.go:64] FLAG: --system-cgroups="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.668003 4991 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.668017 4991 flags.go:64] FLAG: --system-reserved-cgroup="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.668026 4991 flags.go:64] FLAG: --tls-cert-file="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.668037 4991 flags.go:64] FLAG: --tls-cipher-suites="[]" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.668046 4991 flags.go:64] FLAG: --tls-min-version="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.668054 4991 flags.go:64] FLAG: --tls-private-key-file="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.668063 4991 flags.go:64] FLAG: 
--topology-manager-policy="none" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.668071 4991 flags.go:64] FLAG: --topology-manager-policy-options="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.668079 4991 flags.go:64] FLAG: --topology-manager-scope="container" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.668086 4991 flags.go:64] FLAG: --v="2" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.668097 4991 flags.go:64] FLAG: --version="false" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.668107 4991 flags.go:64] FLAG: --vmodule="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.668117 4991 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.668127 4991 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668305 4991 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668318 4991 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668325 4991 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668332 4991 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668339 4991 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668346 4991 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668354 4991 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668362 4991 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668370 4991 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668378 4991 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668384 4991 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668390 4991 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668399 4991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668406 4991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668412 4991 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668418 4991 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668424 4991 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668431 4991 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668437 4991 feature_gate.go:330] unrecognized feature gate: GatewayAPI Sep 29 09:37:44 crc kubenswrapper[4991]: 
W0929 09:37:44.668447 4991 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668456 4991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668470 4991 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668479 4991 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668486 4991 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668493 4991 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668500 4991 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668506 4991 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668516 4991 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668558 4991 feature_gate.go:330] unrecognized feature gate: OVNObservability Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668566 4991 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668574 4991 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668580 4991 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668588 4991 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
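[Editor's note] The long flags.go:64] FLAG: run above is printed because this kubelet runs at --v=2; it echoes every flag with its effective value, quoted. That makes the flag set easy to machine-diff across restarts. A sketch, again reading journal text from stdin:

```python
import re
import sys

# Matches lines like: flags.go:64] FLAG: --node-ip="192.168.126.11"
# Values are kept as the raw quoted strings, exactly as logged.
FLAG_RE = re.compile(r'FLAG: (--[\w-]+)="(.*?)"')

def parse_flags(text: str) -> dict[str, str]:
    return dict(FLAG_RE.findall(text))

if __name__ == "__main__":
    flags = parse_flags(sys.stdin.read())
    for name in sorted(flags):
        print(f"{name} = {flags[name]!r}")
```

Saving one dump per restart and diffing the sorted output is usually enough to spot a drifted flag.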
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668596 4991 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668604 4991 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668610 4991 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668617 4991 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668624 4991 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668632 4991 feature_gate.go:330] unrecognized feature gate: Example Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668639 4991 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668646 4991 feature_gate.go:330] unrecognized feature gate: PlatformOperators Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668653 4991 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668659 4991 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.668666 4991 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669538 4991 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669558 4991 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
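[Editor's note] Three distinct message shapes are interleaved in these blocks: feature_gate.go:330 (gate unknown to this kubelet, ignored), feature_gate.go:351 (a deprecated gate such as KMSv1 being set explicitly), and feature_gate.go:353 (a GA gate such as CloudDualStackNodeIPs or DisableKubeletCloudCredentialProviders being set explicitly; both warn that the gate will be removed in a future release). A small classifier over the same stdin stream, keyed on those source line numbers:

```python
import re
import sys
from collections import Counter

# The three warning shapes seen in these blocks:
#   feature_gate.go:330] unrecognized feature gate: <Name>
#   feature_gate.go:351] Setting deprecated feature gate <Name>=true. ...
#   feature_gate.go:353] Setting GA feature gate <Name>=true. ...
KIND = {"330": "unrecognized", "351": "deprecated-set", "353": "ga-set"}
WARN_RE = re.compile(r"feature_gate\.go:(330|351|353)\]")

def main() -> None:
    counts = Counter(KIND[m.group(1)] for m in WARN_RE.finditer(sys.stdin.read()))
    for kind, n in counts.most_common():
        print(f"{n:4d}  {kind}")

if __name__ == "__main__":
    main()
```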
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669568 4991 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669575 4991 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669583 4991 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669591 4991 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669600 4991 feature_gate.go:330] unrecognized feature gate: NewOLM Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669607 4991 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669614 4991 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669626 4991 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669633 4991 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669640 4991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669647 4991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669655 4991 feature_gate.go:330] unrecognized feature gate: InsightsConfig Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669662 4991 feature_gate.go:330] unrecognized feature gate: PinnedImages Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669669 4991 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669676 4991 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669683 4991 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669690 4991 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669697 4991 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669707 4991 feature_gate.go:330] unrecognized feature gate: SignatureStores Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669715 4991 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669722 4991 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669729 4991 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669736 4991 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669743 4991 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.669750 4991 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.669772 4991 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true 
DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.684454 4991 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.684559 4991 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684704 4991 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684731 4991 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684741 4991 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684751 4991 feature_gate.go:330] unrecognized feature gate: GatewayAPI Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684760 4991 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684769 4991 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684776 4991 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684785 4991 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684793 4991 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684802 4991 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684810 4991 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684818 4991 feature_gate.go:330] unrecognized feature gate: InsightsConfig Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684826 4991 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684834 4991 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684842 4991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684853 4991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684862 4991 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684870 4991 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684881 4991 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684892 4991 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684903 4991 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Sep 
29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684913 4991 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684923 4991 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684933 4991 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684943 4991 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684984 4991 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.684995 4991 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685005 4991 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685015 4991 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685025 4991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685036 4991 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685044 4991 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685051 4991 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685060 4991 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685071 4991 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685079 4991 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685087 4991 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685094 4991 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685102 4991 feature_gate.go:330] unrecognized feature gate: PinnedImages Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685110 4991 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685118 4991 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685126 4991 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685133 4991 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685141 4991 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685149 4991 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685157 4991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685164 4991 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Sep 29 09:37:44 
crc kubenswrapper[4991]: W0929 09:37:44.685172 4991 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685180 4991 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685188 4991 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685195 4991 feature_gate.go:330] unrecognized feature gate: PlatformOperators Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685205 4991 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685213 4991 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685220 4991 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685231 4991 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685278 4991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685288 4991 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685297 4991 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685306 4991 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685316 4991 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685329 4991 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685339 4991 feature_gate.go:330] unrecognized feature gate: SignatureStores Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685348 4991 feature_gate.go:330] unrecognized feature gate: OVNObservability Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685357 4991 feature_gate.go:330] unrecognized feature gate: Example Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685365 4991 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685375 4991 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685385 4991 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685394 4991 feature_gate.go:330] unrecognized feature gate: NewOLM Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685405 4991 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685415 4991 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685427 4991 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.685441 4991 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685666 4991 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685682 4991 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685690 4991 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685699 4991 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685707 4991 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685715 4991 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685722 4991 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685730 4991 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685739 4991 feature_gate.go:330] unrecognized feature gate: InsightsConfig Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685748 4991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685756 4991 feature_gate.go:330] unrecognized feature gate: PinnedImages Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685764 4991 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685772 4991 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685779 4991 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685787 4991 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685796 4991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685807 4991 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
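[Editor's note] Once the warnings are done, feature_gate.go:386 prints the gate set actually in effect as a Go map literal, `feature gates: {map[Name:bool ...]}`; the same line appears above and again below, once per parse. The literal is easy to lift back into a structure. A sketch, abridging the map from this log:

```python
import re

# Abridged from the feature_gate.go:386 line in this log.
LINE = ("feature gates: {map[CloudDualStackNodeIPs:true "
        "DisableKubeletCloudCredentialProviders:true KMSv1:true "
        "NodeSwap:false ValidatingAdmissionPolicy:true]}")

def parse_gates(line: str) -> dict[str, bool]:
    # Pull the body of the Go map literal, then split Name:bool pairs.
    m = re.search(r"map\[(.*?)\]", line)
    assert m, "no Go map literal found"
    return {name: value == "true"
            for name, value in (pair.split(":") for pair in m.group(1).split())}

if __name__ == "__main__":
    for gate, enabled in sorted(parse_gates(LINE).items()):
        print(f"{gate}: {'on' if enabled else 'off'}")
```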
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685818 4991 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685826 4991 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685834 4991 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685843 4991 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685851 4991 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685858 4991 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685866 4991 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685874 4991 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685886 4991 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685895 4991 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685905 4991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685914 4991 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685923 4991 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685933 4991 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685941 4991 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685974 4991 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.685983 4991 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686006 4991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686015 4991 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686023 4991 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686031 4991 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686039 4991 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686046 4991 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686054 4991 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686062 4991 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686070 4991 feature_gate.go:330] unrecognized feature gate: OVNObservability
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686078 4991 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686088 4991 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686098 4991 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686106 4991 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686115 4991 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686123 4991 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686130 4991 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686138 4991 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686146 4991 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686156 4991 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686165 4991 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686173 4991 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686181 4991 feature_gate.go:330] unrecognized feature gate: SignatureStores
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686189 4991 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686196 4991 feature_gate.go:330] unrecognized feature gate: NewOLM
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686204 4991 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686214 4991 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686221 4991 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686229 4991 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686237 4991 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686247 4991 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686256 4991 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686264 4991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686272 4991 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686281 4991 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686289 4991 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686297 4991 feature_gate.go:330] unrecognized feature gate: Example
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.686317 4991 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.686331 4991 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.686666 4991 server.go:940] "Client rotation is on, will bootstrap in background"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.693494 4991 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.693646 4991 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
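The feature_gate.go:386 line above is the effective gate map after all overrides, printed in Go map syntax; the feature_gate.go:330 warnings before it appear to be cluster-level OpenShift gates that this kubelet binary does not itself define, so it logs and ignores them. A minimal Go sketch for pulling that summary line apart (parseFeatureGates is a made-up helper, not kubelet code):

// parse_gates.go - extract the effective feature-gate map from a kubelet
// log line of the form: feature gates: {map[Name:true Other:false ...]}
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func parseFeatureGates(line string) map[string]bool {
	gates := map[string]bool{}
	// Grab the body between "map[" and the matching "]".
	re := regexp.MustCompile(`map\[([^\]]*)\]`)
	m := re.FindStringSubmatch(line)
	if m == nil {
		return gates
	}
	// Entries are space-separated "Name:bool" pairs.
	for _, kv := range strings.Fields(m[1]) {
		if name, val, ok := strings.Cut(kv, ":"); ok {
			gates[name] = val == "true"
		}
	}
	return gates
}

func main() {
	line := `feature gates: {map[CloudDualStackNodeIPs:true ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}`
	for name, enabled := range parseFeatureGates(line) {
		fmt.Printf("%-28s %v\n", name, enabled)
	}
}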
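The certificate lines that follow ("Certificate expiration is 2026-02-24 05:52:08 ... rotation deadline is 2025-11-16 11:23:52", then "Waiting 1153h46m...") come from client-go's certificate manager, which schedules rotation at a jittered point roughly 70-90% of the way through the certificate's validity. A rough Go sketch of that computation; the notBefore value below is an assumption, since the log only shows the expiration:

// rotation_deadline.go - mirrors (but is not) client-go's deadline pick:
// notBefore + (0.7 .. 0.9) * (notAfter - notBefore).
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// 70% of the lifetime plus up to 20% more of it, chosen at random.
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:52:08Z")
	notBefore := notAfter.Add(-365 * 24 * time.Hour) // assumed issue time
	d := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", d)
	fmt.Println("wait:", time.Until(d).Round(time.Second))
}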
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.696377 4991 server.go:997] "Starting client certificate rotation"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.696415 4991 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.696621 4991 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-16 11:23:52.464382177 +0000 UTC
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.696812 4991 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1153h46m7.767575859s for next certificate rotation
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.724906 4991 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.728006 4991 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.749523 4991 log.go:25] "Validated CRI v1 runtime API"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.791379 4991 log.go:25] "Validated CRI v1 image API"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.794529 4991 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.802895 4991 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-09-29-09-33-37-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.803002 4991 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.833342 4991 manager.go:217] Machine: {Timestamp:2025-09-29 09:37:44.83034984 +0000 UTC m=+0.686277948 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:dd0721c8-e1db-4918-837b-38acccbdf1e0 BootID:d34e016c-ced3-4c1a-ac72-75b59b35ea37 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42
Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:2f:a9:49 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:2f:a9:49 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:78:1b:5a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:1c:a7:1e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1d:90:38 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:5a:2a:6a Speed:-1 Mtu:1496} {Name:eth10 MacAddress:52:f7:db:d4:64:f5 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:de:f8:78:14:1e:a6 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: 
DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.833764 4991 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.834001 4991 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.835612 4991 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.835916 4991 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.836126 4991 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.837070 4991 topology_manager.go:138] "Creating topology manager with none policy"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.837102 4991 container_manager_linux.go:303] "Creating device plugin manager"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.837860 4991 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.837902 4991 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
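The nodeConfig dump above carries the node's hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A toy Go sketch of how such a threshold is checked: absolute quantities compare directly, percentages resolve against capacity first. This is an illustration, not the kubelet's eviction manager:

// eviction_check.go - evaluate a hard eviction threshold of the shape
// shown in the NodeConfig above.
package main

import "fmt"

type Threshold struct {
	Signal     string
	QuantityB  int64   // absolute limit in bytes, e.g. 100Mi; 0 if unset
	Percentage float64 // fraction of capacity, e.g. 0.1; 0 if unset
}

// breached reports whether the observed available amount is below the limit.
func (t Threshold) breached(availableB, capacityB int64) bool {
	limit := t.QuantityB
	if t.Percentage > 0 {
		limit = int64(t.Percentage * float64(capacityB))
	}
	return availableB < limit
}

func main() {
	memHard := Threshold{Signal: "memory.available", QuantityB: 100 << 20} // 100Mi
	nodefs := Threshold{Signal: "nodefs.available", Percentage: 0.10}
	// 33654128640 and 85292941312 are MemoryCapacity and /var capacity from
	// the log; the "available" figures are made up for the example.
	fmt.Println(memHard.Signal, "breached:", memHard.breached(80<<20, 33654128640))
	fmt.Println(nodefs.Signal, "breached:", nodefs.breached(10<<30, 85292941312))
}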
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.838198 4991 state_mem.go:36] "Initialized new in-memory state store"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.838352 4991 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.843629 4991 kubelet.go:418] "Attempting to sync node with API server"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.843798 4991 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.843853 4991 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.843875 4991 kubelet.go:324] "Adding apiserver pod source"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.843895 4991 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.848633 4991 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.850397 4991 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.854028 4991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.184:6443: connect: connection refused
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.854028 4991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.184:6443: connect: connection refused
Sep 29 09:37:44 crc kubenswrapper[4991]: E0929 09:37:44.854180 4991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.184:6443: connect: connection refused" logger="UnhandledError"
Sep 29 09:37:44 crc kubenswrapper[4991]: E0929 09:37:44.854187 4991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.184:6443: connect: connection refused" logger="UnhandledError"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.854366 4991 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.856097 4991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.856142 4991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.856159 4991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.856172 4991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
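The reflector.go warnings above are expected this early in boot: the kubelet tries to list Services and Nodes from https://api-int.crc.testing:6443 before the kube-apiserver static pod is up, gets connection refused, and client-go simply retries with backoff until the endpoint answers. A stdlib-only Go sketch of that probe-and-retry pattern (waitForAPIServer is a hypothetical helper, not client-go's implementation):

// wait_for_apiserver.go - probe an endpoint with capped exponential
// backoff until it accepts TCP connections or a deadline passes.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForAPIServer(addr string, maxWait time.Duration) error {
	backoff := 200 * time.Millisecond // start small, double on each failure
	deadline := time.Now().Add(maxWait)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("apiserver %s not reachable: %w", addr, err)
		}
		fmt.Printf("dial %s: %v (retrying in %s)\n", addr, err, backoff)
		time.Sleep(backoff)
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
}

func main() {
	// api-int.crc.testing:6443 is the endpoint from the log.
	if err := waitForAPIServer("api-int.crc.testing:6443", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}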
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.856194 4991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.856208 4991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.856221 4991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.856243 4991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.856259 4991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.856273 4991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.856293 4991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.856307 4991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.857243 4991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.857927 4991 server.go:1280] "Started kubelet"
Sep 29 09:37:44 crc systemd[1]: Started Kubernetes Kubelet.
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.860924 4991 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.863033 4991 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.866083 4991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.184:6443: connect: connection refused
Sep 29 09:37:44 crc kubenswrapper[4991]: E0929 09:37:44.866194 4991 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.184:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1869b74b9d8251fa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-09-29 09:37:44.85788313 +0000 UTC m=+0.713811188,LastTimestamp:2025-09-29 09:37:44.85788313 +0000 UTC m=+0.713811188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.867218 4991 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.868373 4991 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.868456 4991 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
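The ratelimit.go:55 line above configures the podresources endpoint with qps=100 and burstTokens=10, i.e. a token bucket that refills at 100 tokens per second and holds at most 10. A sketch of the same idea with golang.org/x/time/rate; the kubelet's own limiter wrapper differs in detail, only the parameters are taken from the log:

// podresources_ratelimit.go - token bucket matching qps=100 burstTokens=10.
// Requires: go get golang.org/x/time/rate
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// 100 tokens/second refill, bucket depth 10.
	limiter := rate.NewLimiter(rate.Limit(100), 10)

	granted := 0
	for i := 0; i < 1000; i++ {
		if limiter.Allow() { // non-blocking: true only if a token is available
			granted++
		}
	}
	// A tight loop gets roughly the burst size, then must wait for refill.
	fmt.Printf("granted %d of 1000 immediate requests\n", granted)
}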
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.868509 4991 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 21:06:10.277530178 +0000 UTC
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.868558 4991 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1187h28m25.408978464s for next certificate rotation
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.868716 4991 volume_manager.go:287] "The desired_state_of_world populator starts"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.868775 4991 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.868929 4991 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.869149 4991 server.go:460] "Adding debug handlers to kubelet server"
Sep 29 09:37:44 crc kubenswrapper[4991]: E0929 09:37:44.869285 4991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.869485 4991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.184:6443: connect: connection refused
Sep 29 09:37:44 crc kubenswrapper[4991]: E0929 09:37:44.869547 4991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.184:6443: connect: connection refused" interval="200ms"
Sep 29 09:37:44 crc kubenswrapper[4991]: E0929 09:37:44.869556 4991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.184:6443: connect: connection refused" logger="UnhandledError"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.874063 4991 factory.go:55] Registering systemd factory
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.874096 4991 factory.go:221] Registration of the systemd container factory successfully
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.874473 4991 factory.go:153] Registering CRI-O factory
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.874496 4991 factory.go:221] Registration of the crio container factory successfully
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.874596 4991 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.874631 4991 factory.go:103] Registering Raw factory
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.874653 4991 manager.go:1196] Started watching for new ooms in manager
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.877562 4991 manager.go:319] Starting recovery of all containers
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.878232 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879129 4991 reconstruct.go:130] "Volume is
marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879162 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879184 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879206 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879225 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879247 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879269 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879296 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879319 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879342 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879363 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879385 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879413 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879433 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879452 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879509 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879528 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879549 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879567 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879590 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879608 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879625 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879643 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879674 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879693 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879714 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879733 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879750 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879767 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879785 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879805 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879825 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879866 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879887 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879903 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879920 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879938 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879979 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.879996 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880013 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880033 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880053 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880072 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880090 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880108 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880126 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880145 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880165 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880184 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880206 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880223 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880248 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880269 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880291 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880311 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880333 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880353 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880372 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880389 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880418 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880435 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880453 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880470 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880489 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880505 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880522 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880537 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880554 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880571 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880587 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880603 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880618 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880633 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880649 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880665 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880683 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880698 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880713 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880731 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880750 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880769 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880787 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880806 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880825 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880843 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880861 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880878 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880897 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880912 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880932 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880975 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.880994 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881012 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881033 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881054 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881073 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881092 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881109 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881126 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881144 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881187 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881205 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881221 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881247 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881269 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881293 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881313 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881335 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881355 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881376 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881396 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881418 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881437 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881458 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881475 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881495 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881514 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881533 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881551 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881569 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881587 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881604 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881622 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881639 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881655 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881674 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881691 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881712 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881731 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881750 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881767 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881785 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881806 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881826 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881844 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881866 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881885 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881903 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881921 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881938 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.881981 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882016 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882035 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882057 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882078 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882096 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882113 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882131 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882148 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882165 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882182 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882200 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882218 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882235 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882253 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882271 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882287 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882306 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882322 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882340 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882358 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882375 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882392 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882410 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882428 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882446 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882465 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882484 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882501 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882518 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882537 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882559 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882576 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882600 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882617 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882637 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882656 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882675 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882692 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882712 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882731 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882748 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882766 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882783 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.882802 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886256 4991 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886349 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886379 4991 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886399 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886418 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886436 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886456 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886476 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886494 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886511 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886529 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886545 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886563 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886580 4991 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886596 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886615 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886632 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886650 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886668 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886684 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886701 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886720 4991 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886739 4991 reconstruct.go:97] "Volume reconstruction finished"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.886751 4991 reconciler.go:26] "Reconciler: start to sync state"
Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.897782 4991 manager.go:324] Recovery completed
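The reconstruct pass above is the kubelet rebuilding its actual state of world from what it finds under /var/lib/kubelet after the restart: each rediscovered volume is recorded as "uncertain" until the reconciler confirms it. A quick way to see the shape of that state is to tally the reconstruct.go:130 lines by volume plugin, as in the sketch below. Only the podName=/volumeName= field layout is taken from the log; the file name and the journalctl pipeline are illustrative assumptions.

```go
// uncertain_volumes.go: tally "uncertain" volume reconstructions per plugin
// type from a captured kubelet journal read on stdin. Assumes journalctl's
// native one-entry-per-line output.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the reconstruct.go:130 lines above; volumeName always looks like
// kubernetes.io/<plugin>/<rest>, e.g. kubernetes.io/secret/...-serving-cert.
var reconstructLine = regexp.MustCompile(
	`Volume is marked as uncertain.*?podName="([^"]+)" volumeName="kubernetes\.io/([^/]+)/([^"]+)"`)

func main() {
	perPlugin := map[string]int{} // e.g. "secret", "configmap", "projected"
	pods := map[string]bool{}

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := reconstructLine.FindStringSubmatch(sc.Text()); m != nil {
			pods[m[1]] = true
			perPlugin[m[2]]++
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan:", err)
		os.Exit(1)
	}
	fmt.Printf("%d pods with uncertain volumes\n", len(pods))
	for plugin, n := range perPlugin {
		fmt.Printf("  %-10s %d\n", plugin, n)
	}
}
```

Fed something like `journalctl -u kubelet --no-pager | go run uncertain_volumes.go`, it prints one count per plugin (secret, configmap, projected, empty-dir, csi, ...), which mirrors the list above.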
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.918077 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.918637 4991 cpu_manager.go:225] "Starting CPU manager" policy="none" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.918652 4991 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.918670 4991 state_mem.go:36] "Initialized new in-memory state store" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.922274 4991 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.924807 4991 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.924897 4991 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.924966 4991 kubelet.go:2335] "Starting kubelet main sync loop" Sep 29 09:37:44 crc kubenswrapper[4991]: E0929 09:37:44.925235 4991 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 29 09:37:44 crc kubenswrapper[4991]: W0929 09:37:44.925700 4991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.184:6443: connect: connection refused Sep 29 09:37:44 crc kubenswrapper[4991]: E0929 09:37:44.925762 4991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.184:6443: connect: connection refused" logger="UnhandledError" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.935356 4991 policy_none.go:49] "None policy: Start" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.936288 4991 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.936377 4991 state_mem.go:35] "Initializing new in-memory state store" Sep 29 09:37:44 crc kubenswrapper[4991]: E0929 09:37:44.969394 4991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.991983 4991 manager.go:334] "Starting Device Plugin manager" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.992233 4991 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.992248 4991 server.go:79] "Starting device plugin registration server" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.992724 4991 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.992747 4991 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.993625 4991 plugin_watcher.go:51] "Plugin Watcher Start" 
path="/var/lib/kubelet/plugins_registry" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.993749 4991 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Sep 29 09:37:44 crc kubenswrapper[4991]: I0929 09:37:44.993766 4991 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 29 09:37:45 crc kubenswrapper[4991]: E0929 09:37:45.001843 4991 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.026125 4991 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.026246 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.028720 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.028795 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.028826 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.029044 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.029157 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.029188 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.030827 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.030843 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.030865 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.030875 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.030851 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.030931 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.031046 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.031068 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.031360 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.031988 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.032022 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.032036 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.032854 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.032884 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.032912 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.033071 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.033204 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.033248 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.033989 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.034016 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.034031 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.034072 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.034108 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.034125 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.034131 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.034338 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.034397 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.034968 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.035003 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.035019 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.035245 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.035265 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.035274 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.035281 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.035316 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.036214 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.036235 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.036244 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:45 crc kubenswrapper[4991]: E0929 09:37:45.071022 4991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.184:6443: connect: connection refused" interval="400ms" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.088676 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.088729 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.088767 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod 
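The lease error carries its own retry hint (interval="400ms"): the kubelet keeps trying to create its Lease in the kube-node-lease namespace until the API server answers. Below is a minimal sketch of that retry shape; the doubling-with-cap policy is an illustrative assumption, not necessarily what the kubelet's node-lease controller actually does.

```go
// leaseretry.go: a sketch of retry-with-growing-interval, mirroring the
// "Failed to ensure lease exists, will retry" / interval="400ms" pattern
// above. ensureLease is a stand-in for the failing API call.
package main

import (
	"errors"
	"fmt"
	"time"
)

func ensureLease() error {
	// Stand-in for the real request that is failing in the log.
	return errors.New("dial tcp 38.129.56.184:6443: connect: connection refused")
}

func main() {
	interval := 400 * time.Millisecond // starting interval, from the log
	const maxInterval = 7 * time.Second // cap is an assumption
	for attempt := 1; attempt <= 5; attempt++ {
		if err := ensureLease(); err != nil {
			fmt.Printf("attempt %d: %v; retrying in %s\n", attempt, err, interval)
			time.Sleep(interval)
			interval *= 2
			if interval > maxInterval {
				interval = maxInterval
			}
			continue
		}
		fmt.Println("lease ensured")
		return
	}
	fmt.Println("giving up (API server still unreachable)")
}
```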
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.088803 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.088838 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.088871 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.089025 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.089145 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.089196 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.089233 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.089285 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.089321 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 29 
09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.089358 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.089396 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.089439 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.093134 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.094793 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.094868 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.094889 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.094945 4991 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 29 09:37:45 crc kubenswrapper[4991]: E0929 09:37:45.095737 4991 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.184:6443: connect: connection refused" node="crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.190828 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.190888 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.190907 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.190942 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
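Each hostPath volume above goes through the same life cycle: operationExecutor.VerifyControllerAttachedVolume, then MountVolume started, then (as the lines below show) MountVolume.SetUp succeeded. Pairing the started lines with their succeeded lines is a cheap way to spot a mount that never finished. The sketch assumes the journal arrives on stdin and relies only on the field layout shown here, including the klog-escaped \" quotes around volume names.

```go
// mountpairs.go: pair "MountVolume started" entries with their
// "MountVolume.SetUp succeeded" entries and report any volume that never
// completed. Keyed by pod + volume name, as both appear in the log.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	// The \\? tolerates the escaped quotes (\"data-dir\") seen in the log.
	started   = regexp.MustCompile(`operationExecutor\.MountVolume started for volume \\?"([^"\\]+)\\?".*pod="([^"]+)"`)
	succeeded = regexp.MustCompile(`MountVolume\.SetUp succeeded for volume \\?"([^"\\]+)\\?".*pod="([^"]+)"`)
)

func main() {
	pending := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		if m := started.FindStringSubmatch(line); m != nil {
			pending[m[2]+"/"+m[1]] = true
		}
		if m := succeeded.FindStringSubmatch(line); m != nil {
			delete(pending, m[2]+"/"+m[1])
		}
	}
	for key := range pending {
		fmt.Println("no SetUp succeeded seen for", key)
	}
	if len(pending) == 0 {
		fmt.Println("every started mount reported SetUp succeeded")
	}
}
```

In this excerpt every started mount does complete, which is expected: these are all hostPath volumes, where SetUp is little more than a directory check.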
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.190979 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.190997 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191015 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191067 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191095 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191145 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191127 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191151 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191191 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 29 09:37:45 crc 
kubenswrapper[4991]: I0929 09:37:45.191195 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191176 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191107 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191282 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191311 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191336 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191358 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191357 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191383 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191386 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" 
Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191431 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191447 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191448 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191500 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191523 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191470 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.191626 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.296744 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.301945 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.302031 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.302043 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.302075 4991 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 29 09:37:45 crc kubenswrapper[4991]: E0929 09:37:45.302685 4991 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.184:6443: connect: 
connection refused" node="crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.367647 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.395929 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.403501 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.409038 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: W0929 09:37:45.420004 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-a02d6e5558709b730adf0ab071c84374676f906d9c86120dc8d752aa8335c501 WatchSource:0}: Error finding container a02d6e5558709b730adf0ab071c84374676f906d9c86120dc8d752aa8335c501: Status 404 returned error can't find the container with id a02d6e5558709b730adf0ab071c84374676f906d9c86120dc8d752aa8335c501 Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.430000 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 29 09:37:45 crc kubenswrapper[4991]: W0929 09:37:45.437453 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-535c330bd5580dea95932313dd7fcd22136581ae300b9e2a99f32c962cefb8a9 WatchSource:0}: Error finding container 535c330bd5580dea95932313dd7fcd22136581ae300b9e2a99f32c962cefb8a9: Status 404 returned error can't find the container with id 535c330bd5580dea95932313dd7fcd22136581ae300b9e2a99f32c962cefb8a9 Sep 29 09:37:45 crc kubenswrapper[4991]: W0929 09:37:45.438654 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-970fe7393a740830ad2583c4906d7a44071ded301f36062e4f66b9e5ab085d66 WatchSource:0}: Error finding container 970fe7393a740830ad2583c4906d7a44071ded301f36062e4f66b9e5ab085d66: Status 404 returned error can't find the container with id 970fe7393a740830ad2583c4906d7a44071ded301f36062e4f66b9e5ab085d66 Sep 29 09:37:45 crc kubenswrapper[4991]: W0929 09:37:45.445612 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-56d15a47d1706bd10334b5e901bcd36a66fcc9b079f64c6d687919a6e84b1d44 WatchSource:0}: Error finding container 56d15a47d1706bd10334b5e901bcd36a66fcc9b079f64c6d687919a6e84b1d44: Status 404 returned error can't find the container with id 56d15a47d1706bd10334b5e901bcd36a66fcc9b079f64c6d687919a6e84b1d44 Sep 29 09:37:45 crc kubenswrapper[4991]: W0929 09:37:45.450530 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-6928846858ee71494ef4a068185ba2e198345560e13319101523903aa389fac4 WatchSource:0}: Error finding container 6928846858ee71494ef4a068185ba2e198345560e13319101523903aa389fac4: 
Status 404 returned error can't find the container with id 6928846858ee71494ef4a068185ba2e198345560e13319101523903aa389fac4 Sep 29 09:37:45 crc kubenswrapper[4991]: E0929 09:37:45.472244 4991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.184:6443: connect: connection refused" interval="800ms" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.703242 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.704861 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.705011 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.705039 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.705094 4991 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 29 09:37:45 crc kubenswrapper[4991]: E0929 09:37:45.705814 4991 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.184:6443: connect: connection refused" node="crc" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.867185 4991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.184:6443: connect: connection refused Sep 29 09:37:45 crc kubenswrapper[4991]: W0929 09:37:45.887206 4991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.184:6443: connect: connection refused Sep 29 09:37:45 crc kubenswrapper[4991]: E0929 09:37:45.887325 4991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.184:6443: connect: connection refused" logger="UnhandledError" Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.932415 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6928846858ee71494ef4a068185ba2e198345560e13319101523903aa389fac4"} Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.933821 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"56d15a47d1706bd10334b5e901bcd36a66fcc9b079f64c6d687919a6e84b1d44"} Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.935256 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"970fe7393a740830ad2583c4906d7a44071ded301f36062e4f66b9e5ab085d66"} Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.936715 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"535c330bd5580dea95932313dd7fcd22136581ae300b9e2a99f32c962cefb8a9"} Sep 29 09:37:45 crc kubenswrapper[4991]: I0929 09:37:45.939446 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a02d6e5558709b730adf0ab071c84374676f906d9c86120dc8d752aa8335c501"} Sep 29 09:37:45 crc kubenswrapper[4991]: E0929 09:37:45.961262 4991 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.184:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1869b74b9d8251fa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-09-29 09:37:44.85788313 +0000 UTC m=+0.713811188,LastTimestamp:2025-09-29 09:37:44.85788313 +0000 UTC m=+0.713811188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Sep 29 09:37:46 crc kubenswrapper[4991]: W0929 09:37:46.220097 4991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.184:6443: connect: connection refused Sep 29 09:37:46 crc kubenswrapper[4991]: E0929 09:37:46.220557 4991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.184:6443: connect: connection refused" logger="UnhandledError" Sep 29 09:37:46 crc kubenswrapper[4991]: W0929 09:37:46.233508 4991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.184:6443: connect: connection refused Sep 29 09:37:46 crc kubenswrapper[4991]: E0929 09:37:46.233624 4991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.184:6443: connect: connection refused" logger="UnhandledError" Sep 29 09:37:46 crc kubenswrapper[4991]: E0929 09:37:46.274196 4991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.184:6443: connect: connection refused" interval="1.6s" Sep 29 09:37:46 crc kubenswrapper[4991]: W0929 09:37:46.314758 4991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.184:6443: connect: connection refused Sep 29 09:37:46 crc kubenswrapper[4991]: E0929 09:37:46.314929 4991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.184:6443: connect: connection refused" logger="UnhandledError" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.506149 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.507701 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.507732 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.507741 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.507763 4991 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 29 09:37:46 crc kubenswrapper[4991]: E0929 09:37:46.508004 4991 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.184:6443: connect: connection refused" node="crc" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.867223 4991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.184:6443: connect: connection refused Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.943341 4991 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d" exitCode=0 Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.943413 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d"} Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.943438 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.944582 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.944615 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.944625 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.946135 4991 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6" exitCode=0 Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.946204 4991 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6"} Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.946337 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.947820 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.947855 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.947865 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.949663 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.950356 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.950375 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.950385 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.951806 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00"} Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.951838 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb"} Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.951848 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b"} Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.951859 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed"} Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.951909 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.952479 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.952503 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.952516 4991 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.955363 4991 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0fd8feaea0998c24f0e363798d6adfe388bb5d79ce5eeb0a7f40e30b77a1aabc" exitCode=0 Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.955463 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0fd8feaea0998c24f0e363798d6adfe388bb5d79ce5eeb0a7f40e30b77a1aabc"} Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.955472 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.956889 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.956933 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.956960 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.957725 4991 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="a610727eb6690419f27ca63ee8e956e15bf4aa03fe80944201e5e5f1071ceb56" exitCode=0 Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.957761 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"a610727eb6690419f27ca63ee8e956e15bf4aa03fe80944201e5e5f1071ceb56"} Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.957831 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.959796 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.959829 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:46 crc kubenswrapper[4991]: I0929 09:37:46.959839 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.444130 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.866818 4991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.184:6443: connect: connection refused Sep 29 09:37:47 crc kubenswrapper[4991]: E0929 09:37:47.875700 4991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.184:6443: connect: connection refused" interval="3.2s" Sep 29 09:37:47 crc kubenswrapper[4991]: W0929 09:37:47.930206 4991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.184:6443: connect: connection refused Sep 29 09:37:47 crc kubenswrapper[4991]: E0929 09:37:47.930303 4991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.184:6443: connect: connection refused" logger="UnhandledError" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.962870 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e0bcc99ca7211df2cf767f8b7236698683d7ffd3eb5b813a1aefb3051f892e69"} Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.963010 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.963990 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.964030 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.964044 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.965340 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"df4f33084636102a5acdcfbc239fedbbb1ac96d58b5b4f3bd4dd74ebad6e3414"} Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.965374 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7356b77ec51f4abe19914e817796ea42fc29176d4b9ba9abd68aa7287f53578b"} Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.965384 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9e7c904d6235314289c0468f23e9e99098bf3318797b1a1ff934e481019343a1"} Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.965494 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.966296 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.966322 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.966331 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.969793 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"40bb66b79a80f8913dd6550d436f35c58fccce30cc0471da1c52632029528ee8"} Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.969829 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4"} Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.969844 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16"} Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.969863 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d"} Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.969875 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554"} Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.969995 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.970641 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.970666 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.970676 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.972562 4991 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="00a3046dbfefbaa8909a63c128c5a27ef6207aa5e845eb7730300cdf8cf6122b" exitCode=0 Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.972610 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"00a3046dbfefbaa8909a63c128c5a27ef6207aa5e845eb7730300cdf8cf6122b"} Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.972736 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.972794 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.973524 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.973574 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.973588 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.973800 4991 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.973849 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:47 crc kubenswrapper[4991]: I0929 09:37:47.973868 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.109004 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.110354 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.110385 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.110401 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.110428 4991 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 29 09:37:48 crc kubenswrapper[4991]: E0929 09:37:48.110884 4991 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.184:6443: connect: connection refused" node="crc" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.166895 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.349478 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.976393 4991 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e821a91ac5c32ab17886fbde135f15a8b9c0f7c695f6337db5af63a4c85fbf35" exitCode=0 Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.976511 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.976978 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.977135 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e821a91ac5c32ab17886fbde135f15a8b9c0f7c695f6337db5af63a4c85fbf35"} Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.977186 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.977250 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.977296 4991 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.977351 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.977914 4991 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.977974 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.977986 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.978530 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.978552 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.978563 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.978580 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.978606 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.978615 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.978642 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.978619 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.978658 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.978585 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.978691 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:48 crc kubenswrapper[4991]: I0929 09:37:48.978700 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.176680 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.859940 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.982694 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"98bebd9ce4c0a1ce9276b31b1bca0e9abca4880cb8f5b0bb1119d894455cb340"} Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.982833 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bb24736c265d76fe1afd598c2d284a31d04996b0917df81c188e6962500f78f1"} Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.982859 4991 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7e045ffe2140277d6abd6d2cbe17d60b0934ccfaafbd90e212e829a0cb9947bc"} Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.982870 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.982880 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7872d9a39385709fc792344faa3c2cf555ab7a57fd0af6ab956252c8fff2a063"} Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.982790 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.982903 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"99a0b0c021aea71bc0a280206a36c9576fc8084a5f7fce281c7e55c69e5fd42a"} Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.982800 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.982790 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.984439 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.984472 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.984484 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.985236 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.985263 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.985275 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.985522 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.985552 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.985564 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.985610 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.985656 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:49 crc kubenswrapper[4991]: I0929 09:37:49.985666 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:50 crc kubenswrapper[4991]: I0929 
09:37:50.764564 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 29 09:37:50 crc kubenswrapper[4991]: I0929 09:37:50.777447 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 29 09:37:50 crc kubenswrapper[4991]: I0929 09:37:50.985700 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:50 crc kubenswrapper[4991]: I0929 09:37:50.985699 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:50 crc kubenswrapper[4991]: I0929 09:37:50.987984 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:50 crc kubenswrapper[4991]: I0929 09:37:50.988052 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:50 crc kubenswrapper[4991]: I0929 09:37:50.988066 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:50 crc kubenswrapper[4991]: I0929 09:37:50.988463 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:50 crc kubenswrapper[4991]: I0929 09:37:50.988713 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:50 crc kubenswrapper[4991]: I0929 09:37:50.988920 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.045226 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.045520 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.047405 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.047480 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.047510 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.311701 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.313705 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.313784 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.313806 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.313843 4991 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.793461 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.991769 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.991867 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.993244 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.993297 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.993316 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.993425 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.993445 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.993457 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.998377 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.998595 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.999866 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:51 crc kubenswrapper[4991]: I0929 09:37:51.999909 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:52 crc kubenswrapper[4991]: I0929 09:37:51.999927 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:52 crc kubenswrapper[4991]: I0929 09:37:52.594356 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Sep 29 09:37:52 crc kubenswrapper[4991]: I0929 09:37:52.860162 4991 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Sep 29 09:37:52 crc kubenswrapper[4991]: I0929 09:37:52.860283 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 29 09:37:52 crc kubenswrapper[4991]: I0929 09:37:52.996614 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:52 crc kubenswrapper[4991]: I0929 09:37:52.997818 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Sep 29 09:37:52 crc kubenswrapper[4991]: I0929 09:37:52.997908 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:52 crc kubenswrapper[4991]: I0929 09:37:52.997929 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:55 crc kubenswrapper[4991]: E0929 09:37:55.002060 4991 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Sep 29 09:37:58 crc kubenswrapper[4991]: I0929 09:37:58.354947 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 29 09:37:58 crc kubenswrapper[4991]: I0929 09:37:58.355124 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:58 crc kubenswrapper[4991]: I0929 09:37:58.356659 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:58 crc kubenswrapper[4991]: I0929 09:37:58.356778 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:58 crc kubenswrapper[4991]: I0929 09:37:58.356794 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:58 crc kubenswrapper[4991]: W0929 09:37:58.733028 4991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Sep 29 09:37:58 crc kubenswrapper[4991]: I0929 09:37:58.733174 4991 trace.go:236] Trace[1609470555]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Sep-2025 09:37:48.731) (total time: 10001ms): Sep 29 09:37:58 crc kubenswrapper[4991]: Trace[1609470555]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:37:58.733) Sep 29 09:37:58 crc kubenswrapper[4991]: Trace[1609470555]: [10.001575349s] [10.001575349s] END Sep 29 09:37:58 crc kubenswrapper[4991]: E0929 09:37:58.733212 4991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Sep 29 09:37:58 crc kubenswrapper[4991]: I0929 09:37:58.867176 4991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Sep 29 09:37:59 crc kubenswrapper[4991]: I0929 09:37:59.021514 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Sep 29 09:37:59 crc kubenswrapper[4991]: I0929 09:37:59.024051 4991 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="40bb66b79a80f8913dd6550d436f35c58fccce30cc0471da1c52632029528ee8" exitCode=255 Sep 29 09:37:59 crc kubenswrapper[4991]: I0929 09:37:59.024099 4991 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"40bb66b79a80f8913dd6550d436f35c58fccce30cc0471da1c52632029528ee8"} Sep 29 09:37:59 crc kubenswrapper[4991]: I0929 09:37:59.024273 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:37:59 crc kubenswrapper[4991]: I0929 09:37:59.026119 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:37:59 crc kubenswrapper[4991]: I0929 09:37:59.026174 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:37:59 crc kubenswrapper[4991]: I0929 09:37:59.026185 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:37:59 crc kubenswrapper[4991]: I0929 09:37:59.027015 4991 scope.go:117] "RemoveContainer" containerID="40bb66b79a80f8913dd6550d436f35c58fccce30cc0471da1c52632029528ee8" Sep 29 09:37:59 crc kubenswrapper[4991]: I0929 09:37:59.158060 4991 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Sep 29 09:37:59 crc kubenswrapper[4991]: I0929 09:37:59.158151 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Sep 29 09:37:59 crc kubenswrapper[4991]: I0929 09:37:59.163195 4991 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Sep 29 09:37:59 crc kubenswrapper[4991]: I0929 09:37:59.163277 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Sep 29 09:38:00 crc kubenswrapper[4991]: I0929 09:38:00.029996 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Sep 29 09:38:00 crc kubenswrapper[4991]: I0929 09:38:00.032536 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684"} Sep 29 09:38:00 crc kubenswrapper[4991]: I0929 09:38:00.032705 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:38:00 crc kubenswrapper[4991]: I0929 09:38:00.033523 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Sep 29 09:38:00 crc kubenswrapper[4991]: I0929 09:38:00.033555 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:00 crc kubenswrapper[4991]: I0929 09:38:00.033564 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:01 crc kubenswrapper[4991]: I0929 09:38:01.801399 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Sep 29 09:38:01 crc kubenswrapper[4991]: I0929 09:38:01.802304 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Sep 29 09:38:01 crc kubenswrapper[4991]: I0929 09:38:01.802392 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Sep 29 09:38:01 crc kubenswrapper[4991]: I0929 09:38:01.804868 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:01 crc kubenswrapper[4991]: I0929 09:38:01.804946 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:01 crc kubenswrapper[4991]: I0929 09:38:01.805017 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:01 crc kubenswrapper[4991]: I0929 09:38:01.810066 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Sep 29 09:38:02 crc kubenswrapper[4991]: I0929 09:38:02.037271 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Sep 29 09:38:02 crc kubenswrapper[4991]: I0929 09:38:02.037557 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Sep 29 09:38:02 crc kubenswrapper[4991]: I0929 09:38:02.038841 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:02 crc kubenswrapper[4991]: I0929 09:38:02.038883 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:02 crc kubenswrapper[4991]: I0929 09:38:02.038905 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:02 crc kubenswrapper[4991]: I0929 09:38:02.039835 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Sep 29 09:38:02 crc kubenswrapper[4991]: I0929 09:38:02.041889 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:02 crc kubenswrapper[4991]: I0929 09:38:02.042016 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:02 crc kubenswrapper[4991]: I0929 09:38:02.042046 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:02 crc kubenswrapper[4991]: I0929 09:38:02.060452 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Sep 29 09:38:02 crc kubenswrapper[4991]: I0929 09:38:02.861040 4991 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Sep 29 09:38:02 crc kubenswrapper[4991]: I0929 09:38:02.861186 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 29 09:38:03 crc kubenswrapper[4991]: I0929 09:38:03.042858 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Sep 29 09:38:03 crc kubenswrapper[4991]: I0929 09:38:03.043016 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Sep 29 09:38:03 crc kubenswrapper[4991]: I0929 09:38:03.045071 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:03 crc kubenswrapper[4991]: I0929 09:38:03.045153 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:03 crc kubenswrapper[4991]: I0929 09:38:03.045180 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:03 crc kubenswrapper[4991]: I0929 09:38:03.045185 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:03 crc kubenswrapper[4991]: I0929 09:38:03.045227 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:03 crc kubenswrapper[4991]: I0929 09:38:03.045245 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.065248 4991 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Sep 29 09:38:04 crc kubenswrapper[4991]: E0929 09:38:04.154154 4991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.155543 4991 trace.go:236] Trace[255222970]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Sep-2025 09:37:49.310) (total time: 14845ms):
Sep 29 09:38:04 crc kubenswrapper[4991]: Trace[255222970]: ---"Objects listed" error: 14845ms (09:38:04.155)
Sep 29 09:38:04 crc kubenswrapper[4991]: Trace[255222970]: [14.845063396s] [14.845063396s] END
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.155595 4991 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.156028 4991 trace.go:236] Trace[1548638890]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Sep-2025 09:37:49.175) (total time: 14980ms):
Sep 29 09:38:04 crc kubenswrapper[4991]: Trace[1548638890]: ---"Objects listed" error: 14980ms (09:38:04.155)
Sep 29 09:38:04 crc kubenswrapper[4991]: Trace[1548638890]: [14.980352641s] [14.980352641s] END
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.156054 4991 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.159697 4991 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Sep 29 09:38:04 crc kubenswrapper[4991]: E0929 09:38:04.160301 4991 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.161606 4991 trace.go:236] Trace[1144434778]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Sep-2025 09:37:51.917) (total time: 12243ms):
Sep 29 09:38:04 crc kubenswrapper[4991]: Trace[1144434778]: ---"Objects listed" error: 12243ms (09:38:04.160)
Sep 29 09:38:04 crc kubenswrapper[4991]: Trace[1144434778]: [12.243542191s] [12.243542191s] END
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.161648 4991 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.859582 4991 apiserver.go:52] "Watching apiserver"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.864014 4991 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.864448 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.864914 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.865107 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.865287 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.865444 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Sep 29 09:38:04 crc kubenswrapper[4991]: E0929 09:38:04.865556 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.865731 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.865817 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:38:04 crc kubenswrapper[4991]: E0929 09:38:04.865985 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:38:04 crc kubenswrapper[4991]: E0929 09:38:04.866048 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.870776 4991 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.873720 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.874059 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.874919 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.880032 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.880841 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.880849 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.882590 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.883945 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.899511 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.938885 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.957259 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.963520 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.963822 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.963902 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.964107 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.964211 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.964296 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.964375 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.964025 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.964208 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.964417 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.964446 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.964569 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.964465 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.964661 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.964739 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.964888 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.965294 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.965446 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.965556 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.965692 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.965727 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.965739 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.965759 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.964888 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.965910 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.965812 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.966103 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.966220 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.966316 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.966409 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.966497 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.966506 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.966585 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.966622 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.966643 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.966676 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.966710 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.966738 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.966760 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.966793 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.966833 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.966863 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.966892 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.966916 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.966938 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.966985 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967010 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967039 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967063 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967085 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967110 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967138 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967159 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967182 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967207 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967323 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967377 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967429 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967462 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967484 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967513 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967538 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967568 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967590 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967616 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967639 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967662 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967685 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967708 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967730 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967748 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967770 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967795 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967816 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967839 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967863 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967895 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967921 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967942 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.967984 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.968029 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.968057 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.968153 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.968190 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.968212 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.968235 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.968257 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.968277 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.968414 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.969004 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.969067 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.969341 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.969424 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.969470 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.973036 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.973504 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.973973 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.975266 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.975913 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.976364 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.977606 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.982994 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.978413 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.978656 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.983502 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.978861 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.979127 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.979368 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.983553 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.979848 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.979862 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.980183 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.980300 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.980444 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.980636 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.969200 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.980677 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.981151 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.981434 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.982434 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.982839 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.983352 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.979815 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.984539 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.984796 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.985001 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn".
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.984638 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.987848 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.988347 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.988417 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.988585 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.988661 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.988861 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.989170 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.989338 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.993041 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.988345 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.993702 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.997120 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.999482 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.997239 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Sep 29 09:38:04 crc kubenswrapper[4991]: I0929 09:38:04.999795 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000235 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000285 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000347 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000380 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000406 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000422 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000444 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000453 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod 
"6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000467 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000488 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000507 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000528 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000548 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000565 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000586 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000613 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000635 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000655 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000676 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000694 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000711 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000732 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000754 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000772 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000792 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000810 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000827 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000845 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000863 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000884 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000904 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000924 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000941 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.000983 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001003 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001021 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001037 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001056 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001074 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001091 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001111 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001130 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001149 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001168 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001188 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001206 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001227 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001244 4991 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001261 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001279 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001295 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001314 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001331 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001346 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001363 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001380 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001396 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001413 4991 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001430 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001449 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001464 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001481 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001498 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001513 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001531 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001549 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001565 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001583 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001600 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001619 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001635 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001653 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001675 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001698 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001725 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001751 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001773 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 29 09:38:05 crc 
kubenswrapper[4991]: I0929 09:38:05.001789 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.001806 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.004905 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.005165 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.005265 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.005538 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.006125 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.006403 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.006806 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.007071 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.007282 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.007367 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.007527 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.007612 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.007698 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.007780 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.007852 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Sep 29 09:38:05 crc 
kubenswrapper[4991]: I0929 09:38:05.007924 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.008065 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.008236 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.008344 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.008461 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.008677 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.008774 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.008853 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.008941 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.009051 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.009136 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.009214 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.009286 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.009377 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.009558 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.009637 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.009730 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.009809 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.009885 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.009977 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.010062 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.010138 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.010218 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.010290 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.010362 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:04.985194 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:04.985470 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:04.985523 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.017919 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.018043 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.018711 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.019136 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.019480 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.019523 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.019902 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.019986 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.020066 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.020500 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.020770 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.021804 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.029481 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.032480 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.032536 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.032563 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.032589 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.032620 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.032685 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.032708 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.032733 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.032759 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.032823 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.032861 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.032890 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.032918 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.032973 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.033009 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.033039 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.033074 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.033072 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.033102 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.033269 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.033331 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.033376 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.033429 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.033465 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.033991 4991 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034003 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034058 4991 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034066 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034086 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.034148 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:38:05.534125653 +0000 UTC m=+21.390053681 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034147 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034178 4991 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034199 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034213 4991 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034224 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034235 4991 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034244 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034254 4991 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034265 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034276 4991 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034287 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034300 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034311 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034323 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034335 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034334 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034345 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034363 4991 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034373 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034383 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034392 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034403 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034415 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034431 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034446 4991 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034459 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath 
\"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034472 4991 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034485 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034498 4991 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034510 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034523 4991 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034537 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034548 4991 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034561 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034575 4991 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034587 4991 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034599 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034612 4991 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034661 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc 
kubenswrapper[4991]: I0929 09:38:05.034673 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034686 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034701 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034714 4991 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034728 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034742 4991 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034757 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034768 4991 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034778 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034794 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034803 4991 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034813 4991 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034826 4991 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc 
kubenswrapper[4991]: I0929 09:38:05.034836 4991 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034845 4991 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034856 4991 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034865 4991 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034876 4991 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034889 4991 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034898 4991 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034907 4991 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034925 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034935 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034950 4991 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034973 4991 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034982 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc 
kubenswrapper[4991]: I0929 09:38:05.034991 4991 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035000 4991 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035015 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035032 4991 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035042 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035051 4991 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035060 4991 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035069 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035081 4991 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035091 4991 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035101 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035110 4991 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035119 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Sep 29 
09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035129 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035141 4991 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035150 4991 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035159 4991 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035168 4991 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035177 4991 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035188 4991 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035197 4991 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035206 4991 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035215 4991 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035225 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035234 4991 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.034740 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035008 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035175 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035314 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035537 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035532 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035626 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.035767 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.036064 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.036083 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.036375 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.036612 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.037518 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.037566 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.037826 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.037974 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.038028 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.038091 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.038095 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.038280 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.038309 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.038468 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.038646 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.038776 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.038846 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.039611 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.040303 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.041086 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.041238 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.041306 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.041447 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.041739 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.042262 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.042498 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.044108 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.044209 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.044387 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.044844 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.044945 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.045160 4991 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.045277 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:05.545228089 +0000 UTC m=+21.401156117 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.045463 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.045641 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.045788 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.046039 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.046605 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.046586 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.047583 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.048051 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.048313 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.048341 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.048584 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.048663 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.049178 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.049241 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.049186 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.049398 4991 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.049463 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.049459 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.049473 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:05.549453197 +0000 UTC m=+21.405381225 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.049646 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.049730 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.049734 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.049901 4991 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.050287 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.051019 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.051020 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.051461 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.051606 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.051910 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.051925 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.052012 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.052047 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.052156 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.052292 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.052532 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.052740 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.052771 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.051273 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.052937 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.053442 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.055852 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.056227 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.057838 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.058242 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.058811 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.059266 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.059298 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.059614 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.060374 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.060723 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.063524 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.065009 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.067809 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.070177 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.070208 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.071677 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.072316 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.072328 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.073658 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.073983 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.074377 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.074660 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.075272 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.074949 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.076308 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.076404 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.076504 4991 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.076614 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:05.576594414 +0000 UTC m=+21.432522442 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.077313 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.080828 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.080934 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.081506 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.081809 4991 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.082006 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:05.581914356 +0000 UTC m=+21.437842554 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.082550 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.083210 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.089870 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.090369 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.090908 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.092561 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.098479 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.106400 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.119034 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.131966 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136349 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136397 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136443 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136456 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136465 4991 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136474 4991 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136483 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136495 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136504 4991 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136516 4991 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136526 4991 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136536 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136546 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136555 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136564 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136573 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136582 4991 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136592 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136600 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136609 4991 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136618 4991 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136625 4991 reconciler_common.go:293] "Volume detached for volume 
\"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136636 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136664 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136672 4991 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136681 4991 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136689 4991 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136698 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136707 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136715 4991 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136725 4991 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136736 4991 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136746 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136756 4991 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136766 4991 reconciler_common.go:293] "Volume detached for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136776 4991 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136784 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136793 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.136807 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137031 4991 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137043 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137060 4991 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137071 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137083 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137094 4991 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137103 4991 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137112 4991 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137120 4991 reconciler_common.go:293] "Volume detached 
for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137129 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137140 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137149 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137157 4991 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137167 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137176 4991 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137187 4991 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137196 4991 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137205 4991 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137213 4991 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137221 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137229 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137237 4991 reconciler_common.go:293] "Volume detached 
for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137245 4991 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137253 4991 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137264 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137275 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137288 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137301 4991 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137313 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137324 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137337 4991 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137347 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137356 4991 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137364 4991 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137372 
4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137380 4991 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137388 4991 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137396 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137404 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137412 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137420 4991 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137427 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137436 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137446 4991 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137454 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137462 4991 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137471 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: 
I0929 09:38:05.137478 4991 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137486 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137499 4991 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137507 4991 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137516 4991 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137524 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137532 4991 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137541 4991 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137548 4991 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137556 4991 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137564 4991 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137577 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137619 4991 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137629 4991 reconciler_common.go:293] "Volume detached for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137638 4991 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137647 4991 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137658 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137667 4991 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137676 4991 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137725 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.137786 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.144528 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.157230 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.171098 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.179000 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.182232 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.189230 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 29 09:38:05 crc kubenswrapper[4991]: W0929 09:38:05.190027 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-30a00a16ac93e4adc6420cae529762448bcf81565432ab0771bc6ae8f3f47028 WatchSource:0}: Error finding container 30a00a16ac93e4adc6420cae529762448bcf81565432ab0771bc6ae8f3f47028: Status 404 returned error can't find the container with id 30a00a16ac93e4adc6420cae529762448bcf81565432ab0771bc6ae8f3f47028 Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.192282 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.196076 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 29 09:38:05 crc kubenswrapper[4991]: W0929 09:38:05.199049 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-02095e863bd43748fa4bd1d74d6e1ff2e1bd939599de74067f94593bacc36ef3 WatchSource:0}: Error finding container 02095e863bd43748fa4bd1d74d6e1ff2e1bd939599de74067f94593bacc36ef3: Status 404 returned error can't find the container with id 02095e863bd43748fa4bd1d74d6e1ff2e1bd939599de74067f94593bacc36ef3 Sep 29 09:38:05 crc kubenswrapper[4991]: W0929 09:38:05.211063 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-f1d6792fb641db5a12c89b19b4d9593e14f57caa447f0a2e5298b8ed8c0f884c WatchSource:0}: Error finding container f1d6792fb641db5a12c89b19b4d9593e14f57caa447f0a2e5298b8ed8c0f884c: Status 404 returned error can't find the container with id f1d6792fb641db5a12c89b19b4d9593e14f57caa447f0a2e5298b8ed8c0f884c Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.369281 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-hmmqv"] Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.369939 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-hmmqv" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.373053 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.373147 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.373336 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.395291 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.408792 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.421484 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.436295 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.440197 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1dc85d59-1f68-4489-bfd3-2cf9bd80d417-hosts-file\") pod \"node-resolver-hmmqv\" (UID: \"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\") " pod="openshift-dns/node-resolver-hmmqv" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.440258 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzvn2\" (UniqueName: \"kubernetes.io/projected/1dc85d59-1f68-4489-bfd3-2cf9bd80d417-kube-api-access-zzvn2\") pod \"node-resolver-hmmqv\" (UID: \"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\") " pod="openshift-dns/node-resolver-hmmqv" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.446708 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.456444 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 29 09:38:05 crc 
kubenswrapper[4991]: I0929 09:38:05.468164 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.540707 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.540818 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzvn2\" (UniqueName: \"kubernetes.io/projected/1dc85d59-1f68-4489-bfd3-2cf9bd80d417-kube-api-access-zzvn2\") pod \"node-resolver-hmmqv\" (UID: \"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\") " pod="openshift-dns/node-resolver-hmmqv"
Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.540967 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:38:06.540900086 +0000 UTC m=+22.396828114 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.541151 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1dc85d59-1f68-4489-bfd3-2cf9bd80d417-hosts-file\") pod \"node-resolver-hmmqv\" (UID: \"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\") " pod="openshift-dns/node-resolver-hmmqv"
Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.541321 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1dc85d59-1f68-4489-bfd3-2cf9bd80d417-hosts-file\") pod \"node-resolver-hmmqv\" (UID: \"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\") " pod="openshift-dns/node-resolver-hmmqv"
Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.572010 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzvn2\" (UniqueName: \"kubernetes.io/projected/1dc85d59-1f68-4489-bfd3-2cf9bd80d417-kube-api-access-zzvn2\") pod \"node-resolver-hmmqv\" (UID: \"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\") " pod="openshift-dns/node-resolver-hmmqv"
Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.641688 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.641741 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.641779 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.641799 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.641875 4991 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.641975 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:06.641922849 +0000 UTC m=+22.497850877 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.642002 4991 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.642132 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:06.642103703 +0000 UTC m=+22.498031901 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.642012 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.642170 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.642189 4991 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.642012 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.642227 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:06.642213745 +0000 UTC m=+22.498141953 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.642240 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.642257 4991 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Sep 29 09:38:05 crc kubenswrapper[4991]: E0929 09:38:05.642296 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:06.642285617 +0000 UTC m=+22.498213645 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Sep 29 09:38:05 crc kubenswrapper[4991]: I0929 09:38:05.692789 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hmmqv"
Sep 29 09:38:05 crc kubenswrapper[4991]: W0929 09:38:05.706488 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1dc85d59_1f68_4489_bfd3_2cf9bd80d417.slice/crio-72f4172a3a385387e4f300574e304549fc164f82300d436a3404444a7c2b9312 WatchSource:0}: Error finding container 72f4172a3a385387e4f300574e304549fc164f82300d436a3404444a7c2b9312: Status 404 returned error can't find the container with id 72f4172a3a385387e4f300574e304549fc164f82300d436a3404444a7c2b9312
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.070845 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hmmqv" event={"ID":"1dc85d59-1f68-4489-bfd3-2cf9bd80d417","Type":"ContainerStarted","Data":"72f4172a3a385387e4f300574e304549fc164f82300d436a3404444a7c2b9312"}
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.072208 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"02095e863bd43748fa4bd1d74d6e1ff2e1bd939599de74067f94593bacc36ef3"}
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.073805 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001"}
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.073842 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"30a00a16ac93e4adc6420cae529762448bcf81565432ab0771bc6ae8f3f47028"}
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.075863 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.076435 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.078218 4991 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684" exitCode=255
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.078286 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684"}
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.078361 4991 scope.go:117] "RemoveContainer" containerID="40bb66b79a80f8913dd6550d436f35c58fccce30cc0471da1c52632029528ee8"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.080995 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670"}
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.081026 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087"}
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.081041 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f1d6792fb641db5a12c89b19b4d9593e14f57caa447f0a2e5298b8ed8c0f884c"}
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.091336 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:06Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.111975 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.112109 4991 scope.go:117] "RemoveContainer" containerID="f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684"
Sep 29 09:38:06 crc kubenswrapper[4991]: E0929 09:38:06.112362 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.112800 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:06Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.117881 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-sgbqk"]
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.118353 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.120464 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.120494 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.121446 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.121914 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.122329 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-mwncr"]
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.122850 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-mm67g"]
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.123149 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-mm67g"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.123159 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mwncr"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.127704 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.130433 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.130710 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.139901 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.140067 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.140172 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.140675 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.141574 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h4hm4"]
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.147064 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.147148 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:06Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.148004 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.151305 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.151545 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.151657 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.151704 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.152407 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.156938 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.157088 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.192440 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:06Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.219673 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:06Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.233280 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:06Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.247114 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:06Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.248316 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-multus-socket-dir-parent\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.248356 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-host-var-lib-kubelet\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.248378 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-etc-kubernetes\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.248400 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-multus-daemon-config\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.248421 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxlzr\" (UniqueName: \"kubernetes.io/projected/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-kube-api-access-mxlzr\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.248440 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-multus-cni-dir\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.248455 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-etc-openvswitch\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.248478 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-hostroot\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.248535 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-host-var-lib-cni-multus\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.248624 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-log-socket\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.248729 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-var-lib-openvswitch\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.248766 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/14c96fef-6218-4f25-8f81-f7adc934b0d5-ovn-node-metrics-cert\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.248809 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88ab660a-6f01-4538-946f-38cdadd0b64d-proxy-tls\") pod \"machine-config-daemon-sgbqk\" (UID: \"88ab660a-6f01-4538-946f-38cdadd0b64d\") " pod="openshift-machine-config-operator/machine-config-daemon-sgbqk"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.248829 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-host-var-lib-cni-bin\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.248845 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/14c96fef-6218-4f25-8f81-f7adc934b0d5-env-overrides\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.248862 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/14c96fef-6218-4f25-8f81-f7adc934b0d5-ovnkube-script-lib\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.248907 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-os-release\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.248924 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-host-run-netns\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.248973 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-run-systemd\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249029 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249049 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249262 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-slash\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249331 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-node-log\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249374 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-multus-conf-dir\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249400 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-host-run-multus-certs\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249422 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-kubelet\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249469 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-run-netns\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249499 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-run-openvswitch\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249520 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-run-ovn\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249544 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2b1e80be-d74b-4948-8121-f1ee76bf415c-os-release\") pod \"multus-additional-cni-plugins-mwncr\" (UID: \"2b1e80be-d74b-4948-8121-f1ee76bf415c\") " pod="openshift-multus/multus-additional-cni-plugins-mwncr"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249568 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-system-cni-dir\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249603 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-systemd-units\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249627 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2b1e80be-d74b-4948-8121-f1ee76bf415c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mwncr\" (UID: \"2b1e80be-d74b-4948-8121-f1ee76bf415c\") " pod="openshift-multus/multus-additional-cni-plugins-mwncr"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249649 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcxqc\" (UniqueName: \"kubernetes.io/projected/2b1e80be-d74b-4948-8121-f1ee76bf415c-kube-api-access-jcxqc\") pod \"multus-additional-cni-plugins-mwncr\" (UID: \"2b1e80be-d74b-4948-8121-f1ee76bf415c\") " pod="openshift-multus/multus-additional-cni-plugins-mwncr"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249668 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2b1e80be-d74b-4948-8121-f1ee76bf415c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mwncr\" (UID: \"2b1e80be-d74b-4948-8121-f1ee76bf415c\") " pod="openshift-multus/multus-additional-cni-plugins-mwncr"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249732 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/88ab660a-6f01-4538-946f-38cdadd0b64d-rootfs\") pod \"machine-config-daemon-sgbqk\" (UID: \"88ab660a-6f01-4538-946f-38cdadd0b64d\") " pod="openshift-machine-config-operator/machine-config-daemon-sgbqk"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249774 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-cni-bin\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249832 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88ab660a-6f01-4538-946f-38cdadd0b64d-mcd-auth-proxy-config\") pod \"machine-config-daemon-sgbqk\" (UID: \"88ab660a-6f01-4538-946f-38cdadd0b64d\") " pod="openshift-machine-config-operator/machine-config-daemon-sgbqk"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249880 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-host-run-k8s-cni-cncf-io\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249912 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/14c96fef-6218-4f25-8f81-f7adc934b0d5-ovnkube-config\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249965 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2b1e80be-d74b-4948-8121-f1ee76bf415c-cnibin\") pod \"multus-additional-cni-plugins-mwncr\" (UID: \"2b1e80be-d74b-4948-8121-f1ee76bf415c\") " pod="openshift-multus/multus-additional-cni-plugins-mwncr"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.249991 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2b1e80be-d74b-4948-8121-f1ee76bf415c-cni-binary-copy\") pod \"multus-additional-cni-plugins-mwncr\" (UID: \"2b1e80be-d74b-4948-8121-f1ee76bf415c\") " pod="openshift-multus/multus-additional-cni-plugins-mwncr"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.250009 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78tp2\" (UniqueName: \"kubernetes.io/projected/14c96fef-6218-4f25-8f81-f7adc934b0d5-kube-api-access-78tp2\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.250039 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2b1e80be-d74b-4948-8121-f1ee76bf415c-system-cni-dir\") pod \"multus-additional-cni-plugins-mwncr\" (UID: \"2b1e80be-d74b-4948-8121-f1ee76bf415c\") " pod="openshift-multus/multus-additional-cni-plugins-mwncr"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.250072 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gvl6\" (UniqueName: \"kubernetes.io/projected/88ab660a-6f01-4538-946f-38cdadd0b64d-kube-api-access-6gvl6\") pod \"machine-config-daemon-sgbqk\" (UID: \"88ab660a-6f01-4538-946f-38cdadd0b64d\") " pod="openshift-machine-config-operator/machine-config-daemon-sgbqk"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.250137 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-cnibin\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.250154 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-cni-binary-copy\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.250185 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-cni-netd\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.267622 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40bb66b79a80f8913dd6550d436f35c58fccce30cc0471da1c52632029528ee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:37:58Z\\\",\\\"message\\\":\\\"W0929 09:37:47.936513 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0929 09:37:47.937248 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759138667 cert, and key in /tmp/serving-cert-2740370830/serving-signer.crt, /tmp/serving-cert-2740370830/serving-signer.key\\\\nI0929 09:37:48.127469 1 observer_polling.go:159] Starting file observer\\\\nW0929 09:37:48.131023 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0929 09:37:48.131208 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:37:48.133643 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2740370830/tls.crt::/tmp/serving-cert-2740370830/tls.key\\\\\\\"\\\\nF0929 09:37:58.520760 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:06Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.282562 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:06Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.304269 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:06Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.316788 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:06Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.334634 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:06Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.351382 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-kubelet\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.351434 4991 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-run-netns\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.351457 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-run-openvswitch\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.351480 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-run-ovn\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.351510 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2b1e80be-d74b-4948-8121-f1ee76bf415c-os-release\") pod \"multus-additional-cni-plugins-mwncr\" (UID: \"2b1e80be-d74b-4948-8121-f1ee76bf415c\") " pod="openshift-multus/multus-additional-cni-plugins-mwncr" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.351531 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-system-cni-dir\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.351551 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-systemd-units\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.351560 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-kubelet\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.351587 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-run-openvswitch\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.351688 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-run-ovn\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.351577 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/2b1e80be-d74b-4948-8121-f1ee76bf415c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mwncr\" (UID: \"2b1e80be-d74b-4948-8121-f1ee76bf415c\") " pod="openshift-multus/multus-additional-cni-plugins-mwncr" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.351832 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-system-cni-dir\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.351646 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-run-netns\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.351655 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-systemd-units\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.351865 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcxqc\" (UniqueName: \"kubernetes.io/projected/2b1e80be-d74b-4948-8121-f1ee76bf415c-kube-api-access-jcxqc\") pod \"multus-additional-cni-plugins-mwncr\" (UID: \"2b1e80be-d74b-4948-8121-f1ee76bf415c\") " pod="openshift-multus/multus-additional-cni-plugins-mwncr" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.351944 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/88ab660a-6f01-4538-946f-38cdadd0b64d-rootfs\") pod \"machine-config-daemon-sgbqk\" (UID: \"88ab660a-6f01-4538-946f-38cdadd0b64d\") " pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352006 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/88ab660a-6f01-4538-946f-38cdadd0b64d-rootfs\") pod \"machine-config-daemon-sgbqk\" (UID: \"88ab660a-6f01-4538-946f-38cdadd0b64d\") " pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352012 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-cni-bin\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352050 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2b1e80be-d74b-4948-8121-f1ee76bf415c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mwncr\" (UID: \"2b1e80be-d74b-4948-8121-f1ee76bf415c\") " pod="openshift-multus/multus-additional-cni-plugins-mwncr" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352087 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/88ab660a-6f01-4538-946f-38cdadd0b64d-mcd-auth-proxy-config\") pod \"machine-config-daemon-sgbqk\" (UID: \"88ab660a-6f01-4538-946f-38cdadd0b64d\") " pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352124 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-host-run-k8s-cni-cncf-io\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352141 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-cni-bin\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352158 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/14c96fef-6218-4f25-8f81-f7adc934b0d5-ovnkube-config\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352219 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-host-run-k8s-cni-cncf-io\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352249 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2b1e80be-d74b-4948-8121-f1ee76bf415c-cnibin\") pod \"multus-additional-cni-plugins-mwncr\" (UID: \"2b1e80be-d74b-4948-8121-f1ee76bf415c\") " pod="openshift-multus/multus-additional-cni-plugins-mwncr" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352305 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2b1e80be-d74b-4948-8121-f1ee76bf415c-cni-binary-copy\") pod \"multus-additional-cni-plugins-mwncr\" (UID: \"2b1e80be-d74b-4948-8121-f1ee76bf415c\") " pod="openshift-multus/multus-additional-cni-plugins-mwncr" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352312 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2b1e80be-d74b-4948-8121-f1ee76bf415c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mwncr\" (UID: \"2b1e80be-d74b-4948-8121-f1ee76bf415c\") " pod="openshift-multus/multus-additional-cni-plugins-mwncr" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352323 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2b1e80be-d74b-4948-8121-f1ee76bf415c-cnibin\") pod \"multus-additional-cni-plugins-mwncr\" (UID: \"2b1e80be-d74b-4948-8121-f1ee76bf415c\") " pod="openshift-multus/multus-additional-cni-plugins-mwncr" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352343 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gvl6\" (UniqueName: 
\"kubernetes.io/projected/88ab660a-6f01-4538-946f-38cdadd0b64d-kube-api-access-6gvl6\") pod \"machine-config-daemon-sgbqk\" (UID: \"88ab660a-6f01-4538-946f-38cdadd0b64d\") " pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352376 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-cnibin\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352411 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-cni-binary-copy\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352439 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-cni-netd\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352464 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-cnibin\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352467 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78tp2\" (UniqueName: \"kubernetes.io/projected/14c96fef-6218-4f25-8f81-f7adc934b0d5-kube-api-access-78tp2\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352499 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2b1e80be-d74b-4948-8121-f1ee76bf415c-system-cni-dir\") pod \"multus-additional-cni-plugins-mwncr\" (UID: \"2b1e80be-d74b-4948-8121-f1ee76bf415c\") " pod="openshift-multus/multus-additional-cni-plugins-mwncr" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352506 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-cni-netd\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352588 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-multus-socket-dir-parent\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352617 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-host-var-lib-kubelet\") pod \"multus-mm67g\" (UID: 
\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352643 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-etc-kubernetes\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352652 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2b1e80be-d74b-4948-8121-f1ee76bf415c-system-cni-dir\") pod \"multus-additional-cni-plugins-mwncr\" (UID: \"2b1e80be-d74b-4948-8121-f1ee76bf415c\") " pod="openshift-multus/multus-additional-cni-plugins-mwncr" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352682 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-multus-daemon-config\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352690 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-host-var-lib-kubelet\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352709 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxlzr\" (UniqueName: \"kubernetes.io/projected/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-kube-api-access-mxlzr\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352727 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-multus-socket-dir-parent\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352737 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-multus-cni-dir\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352765 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-etc-openvswitch\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352799 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-hostroot\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352827 4991 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-host-var-lib-cni-multus\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352851 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-log-socket\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352878 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-etc-kubernetes\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352880 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-var-lib-openvswitch\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352914 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-var-lib-openvswitch\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352921 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/14c96fef-6218-4f25-8f81-f7adc934b0d5-ovn-node-metrics-cert\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352977 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88ab660a-6f01-4538-946f-38cdadd0b64d-proxy-tls\") pod \"machine-config-daemon-sgbqk\" (UID: \"88ab660a-6f01-4538-946f-38cdadd0b64d\") " pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352998 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-host-var-lib-cni-bin\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353016 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/14c96fef-6218-4f25-8f81-f7adc934b0d5-env-overrides\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353037 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/14c96fef-6218-4f25-8f81-f7adc934b0d5-ovnkube-script-lib\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353063 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-os-release\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353082 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-host-run-netns\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353098 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-run-systemd\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353129 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353150 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-slash\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353166 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-node-log\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353186 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353206 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-multus-conf-dir\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353228 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-host-run-multus-certs\") pod \"multus-mm67g\" 
(UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353250 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2b1e80be-d74b-4948-8121-f1ee76bf415c-cni-binary-copy\") pod \"multus-additional-cni-plugins-mwncr\" (UID: \"2b1e80be-d74b-4948-8121-f1ee76bf415c\") " pod="openshift-multus/multus-additional-cni-plugins-mwncr" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353279 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-host-run-multus-certs\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353283 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-multus-cni-dir\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353276 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2b1e80be-d74b-4948-8121-f1ee76bf415c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mwncr\" (UID: \"2b1e80be-d74b-4948-8121-f1ee76bf415c\") " pod="openshift-multus/multus-additional-cni-plugins-mwncr" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353308 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-host-run-netns\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353326 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-etc-openvswitch\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353332 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-run-systemd\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353333 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-host-var-lib-cni-bin\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353361 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-hostroot\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353361 4991 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/14c96fef-6218-4f25-8f81-f7adc934b0d5-ovnkube-config\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353374 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353397 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-host-var-lib-cni-multus\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353406 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-slash\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353431 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-node-log\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353431 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-log-socket\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353465 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-os-release\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353465 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353482 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-multus-conf-dir\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353492 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-multus-daemon-config\") pod 
\"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353550 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2b1e80be-d74b-4948-8121-f1ee76bf415c-os-release\") pod \"multus-additional-cni-plugins-mwncr\" (UID: \"2b1e80be-d74b-4948-8121-f1ee76bf415c\") " pod="openshift-multus/multus-additional-cni-plugins-mwncr" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353617 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-cni-binary-copy\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.353873 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/14c96fef-6218-4f25-8f81-f7adc934b0d5-env-overrides\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.352231 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:06Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.354127 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88ab660a-6f01-4538-946f-38cdadd0b64d-mcd-auth-proxy-config\") pod \"machine-config-daemon-sgbqk\" (UID: \"88ab660a-6f01-4538-946f-38cdadd0b64d\") " pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 09:38:06 crc 
kubenswrapper[4991]: I0929 09:38:06.354343 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/14c96fef-6218-4f25-8f81-f7adc934b0d5-ovnkube-script-lib\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.360519 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/14c96fef-6218-4f25-8f81-f7adc934b0d5-ovn-node-metrics-cert\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.360544 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88ab660a-6f01-4538-946f-38cdadd0b64d-proxy-tls\") pod \"machine-config-daemon-sgbqk\" (UID: \"88ab660a-6f01-4538-946f-38cdadd0b64d\") " pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.371319 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxlzr\" (UniqueName: \"kubernetes.io/projected/f36a89bf-ee7b-4bf7-bc61-9ea099661bd1-kube-api-access-mxlzr\") pod \"multus-mm67g\" (UID: \"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\") " pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.374201 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78tp2\" (UniqueName: \"kubernetes.io/projected/14c96fef-6218-4f25-8f81-f7adc934b0d5-kube-api-access-78tp2\") pod \"ovnkube-node-h4hm4\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.374972 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:06Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.375354 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gvl6\" (UniqueName: \"kubernetes.io/projected/88ab660a-6f01-4538-946f-38cdadd0b64d-kube-api-access-6gvl6\") pod \"machine-config-daemon-sgbqk\" (UID: \"88ab660a-6f01-4538-946f-38cdadd0b64d\") " pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.378312 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcxqc\" (UniqueName: \"kubernetes.io/projected/2b1e80be-d74b-4948-8121-f1ee76bf415c-kube-api-access-jcxqc\") pod \"multus-additional-cni-plugins-mwncr\" (UID: \"2b1e80be-d74b-4948-8121-f1ee76bf415c\") " pod="openshift-multus/multus-additional-cni-plugins-mwncr" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.388822 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:06Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.405541 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:06Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.422468 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:06Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.447831 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78
tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:06Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.450763 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.461189 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-mm67g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.463630 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:06Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:06 crc kubenswrapper[4991]: W0929 09:38:06.464156 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88ab660a_6f01_4538_946f_38cdadd0b64d.slice/crio-b3d7cfa04a4ff807746522f01b49782cf1195479b73a4088e801340be36ac125 WatchSource:0}: Error finding container b3d7cfa04a4ff807746522f01b49782cf1195479b73a4088e801340be36ac125: Status 404 returned error can't find the container with id b3d7cfa04a4ff807746522f01b49782cf1195479b73a4088e801340be36ac125 Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.472591 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mwncr" Sep 29 09:38:06 crc kubenswrapper[4991]: W0929 09:38:06.477362 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf36a89bf_ee7b_4bf7_bc61_9ea099661bd1.slice/crio-42bab0237864f4aba9e0f9ac93db042ea679f70019be28512d2c6df4e06e31ad WatchSource:0}: Error finding container 42bab0237864f4aba9e0f9ac93db042ea679f70019be28512d2c6df4e06e31ad: Status 404 returned error can't find the container with id 42bab0237864f4aba9e0f9ac93db042ea679f70019be28512d2c6df4e06e31ad Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.482147 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:06 crc kubenswrapper[4991]: W0929 09:38:06.491224 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b1e80be_d74b_4948_8121_f1ee76bf415c.slice/crio-71d9e1ec4e167c3de305f80bf6b01c170d31d866c80545e3c0cbd4029ba07982 WatchSource:0}: Error finding container 71d9e1ec4e167c3de305f80bf6b01c170d31d866c80545e3c0cbd4029ba07982: Status 404 returned error can't find the container with id 71d9e1ec4e167c3de305f80bf6b01c170d31d866c80545e3c0cbd4029ba07982 Sep 29 09:38:06 crc kubenswrapper[4991]: W0929 09:38:06.510531 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14c96fef_6218_4f25_8f81_f7adc934b0d5.slice/crio-a3a70f4e267b6417cbbcadce1c6a83e12aa013bc0d4b6b42815c06bcc3f06dc0 WatchSource:0}: Error finding container a3a70f4e267b6417cbbcadce1c6a83e12aa013bc0d4b6b42815c06bcc3f06dc0: Status 404 returned error can't find the container with id a3a70f4e267b6417cbbcadce1c6a83e12aa013bc0d4b6b42815c06bcc3f06dc0 Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.555278 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:38:06 crc kubenswrapper[4991]: E0929 09:38:06.555593 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:38:08.555553547 +0000 UTC m=+24.411481585 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.656887 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.656939 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.657002 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.657026 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:06 crc kubenswrapper[4991]: E0929 09:38:06.657118 4991 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 29 09:38:06 crc kubenswrapper[4991]: E0929 09:38:06.657126 4991 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 29 09:38:06 crc kubenswrapper[4991]: E0929 09:38:06.657194 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:08.657174574 +0000 UTC m=+24.513102602 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 29 09:38:06 crc kubenswrapper[4991]: E0929 09:38:06.657200 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 29 09:38:06 crc kubenswrapper[4991]: E0929 09:38:06.657225 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:08.657214855 +0000 UTC m=+24.513142883 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 29 09:38:06 crc kubenswrapper[4991]: E0929 09:38:06.657230 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 29 09:38:06 crc kubenswrapper[4991]: E0929 09:38:06.657248 4991 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:06 crc kubenswrapper[4991]: E0929 09:38:06.657283 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:08.657272476 +0000 UTC m=+24.513200504 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:06 crc kubenswrapper[4991]: E0929 09:38:06.657614 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 29 09:38:06 crc kubenswrapper[4991]: E0929 09:38:06.657647 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 29 09:38:06 crc kubenswrapper[4991]: E0929 09:38:06.657680 4991 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:06 crc kubenswrapper[4991]: E0929 09:38:06.657776 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:08.657723686 +0000 UTC m=+24.513651714 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.927871 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:06 crc kubenswrapper[4991]: E0929 09:38:06.928617 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.929178 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:06 crc kubenswrapper[4991]: E0929 09:38:06.929264 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.929342 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:06 crc kubenswrapper[4991]: E0929 09:38:06.929415 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.933221 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.934275 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.935016 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.935834 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.936642 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.937348 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.938180 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.938932 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.939726 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.942153 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.942806 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.944097 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.944617 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.945316 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.946409 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.947123 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.948547 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.949020 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.949572 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.950677 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.951195 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.952168 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.952652 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.953696 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.954217 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.954799 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" 
path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.955844 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.956453 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.957470 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.957919 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.958751 4991 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.958850 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.960556 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.961550 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.962031 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.963480 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.964143 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.965221 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.965856 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.966926 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.967432 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.968447 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.969123 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.970139 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.970595 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.971534 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.972130 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.973339 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.973833 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.974644 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.975108 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.976005 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.976591 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Sep 29 09:38:06 crc kubenswrapper[4991]: I0929 09:38:06.977145 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" 
path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.086657 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mm67g" event={"ID":"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1","Type":"ContainerStarted","Data":"17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c"} Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.086724 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mm67g" event={"ID":"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1","Type":"ContainerStarted","Data":"42bab0237864f4aba9e0f9ac93db042ea679f70019be28512d2c6df4e06e31ad"} Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.090215 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.094546 4991 scope.go:117] "RemoveContainer" containerID="f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684" Sep 29 09:38:07 crc kubenswrapper[4991]: E0929 09:38:07.095315 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.098717 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829"} Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.098761 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40"} Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.098779 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"b3d7cfa04a4ff807746522f01b49782cf1195479b73a4088e801340be36ac125"} Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.102423 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hmmqv" event={"ID":"1dc85d59-1f68-4489-bfd3-2cf9bd80d417","Type":"ContainerStarted","Data":"d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d"} Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.104004 4991 generic.go:334] "Generic (PLEG): container finished" podID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerID="4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880" exitCode=0 Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.104113 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerDied","Data":"4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880"} Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.104239 4991 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerStarted","Data":"a3a70f4e267b6417cbbcadce1c6a83e12aa013bc0d4b6b42815c06bcc3f06dc0"} Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.111591 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.112930 4991 generic.go:334] "Generic (PLEG): container finished" podID="2b1e80be-d74b-4948-8121-f1ee76bf415c" containerID="22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb" exitCode=0 Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.113024 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" event={"ID":"2b1e80be-d74b-4948-8121-f1ee76bf415c","Type":"ContainerDied","Data":"22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb"} Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.113105 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" event={"ID":"2b1e80be-d74b-4948-8121-f1ee76bf415c","Type":"ContainerStarted","Data":"71d9e1ec4e167c3de305f80bf6b01c170d31d866c80545e3c0cbd4029ba07982"} Sep 29 09:38:07 crc 
kubenswrapper[4991]: I0929 09:38:07.138003 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.157393 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.178596 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.190848 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.214132 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.226288 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.244920 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78
tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.260218 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.277200 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40bb66b79a80f8913dd6550d436f35c58fccce30cc0471da1c52632029528ee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:37:58Z\\\",\\\"message\\\":\\\"W0929 09:37:47.936513 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0929 09:37:47.937248 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759138667 cert, and key in /tmp/serving-cert-2740370830/serving-signer.crt, /tmp/serving-cert-2740370830/serving-signer.key\\\\nI0929 09:37:48.127469 1 observer_polling.go:159] Starting file observer\\\\nW0929 09:37:48.131023 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0929 09:37:48.131208 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:37:48.133643 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2740370830/tls.crt::/tmp/serving-cert-2740370830/tls.key\\\\\\\"\\\\nF0929 09:37:58.520760 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.294475 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.310183 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.328371 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.351008 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.366624 4991 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.380468 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.396239 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.410734 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.431792 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc 
kubenswrapper[4991]: I0929 09:38:07.452765 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.472040 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.488896 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.513351 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z 
is after 2025-08-24T17:21:41Z" Sep 29 09:38:07 crc kubenswrapper[4991]: I0929 09:38:07.527527 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:07Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.121705 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerStarted","Data":"45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e"} Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.121769 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerStarted","Data":"95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc"} Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.121783 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerStarted","Data":"52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc"} Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.121799 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" 
event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerStarted","Data":"ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd"} Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.121811 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerStarted","Data":"37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188"} Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.121822 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerStarted","Data":"abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77"} Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.124813 4991 generic.go:334] "Generic (PLEG): container finished" podID="2b1e80be-d74b-4948-8121-f1ee76bf415c" containerID="5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd" exitCode=0 Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.124879 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" event={"ID":"2b1e80be-d74b-4948-8121-f1ee76bf415c","Type":"ContainerDied","Data":"5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd"} Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.127627 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1"} Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.143775 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.159558 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.180860 4991 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.210640 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.227739 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.243476 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.263209 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.278243 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-
29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.296052 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z 
is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.307745 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.319239 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.330728 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.348700 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.
d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.366387 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.381253 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.394767 4991 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.407845 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.424663 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.442168 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-
29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.456549 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.468315 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-td7ll"] Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.468865 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-td7ll" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.469111 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.470785 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.471317 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.471347 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.471895 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.483548 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.506267 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.519144 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.530512 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.544285 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.564001 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.577858 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.578052 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/facb5d72-dda5-4cd9-8af8-168994431934-host\") pod \"node-ca-td7ll\" (UID: \"facb5d72-dda5-4cd9-8af8-168994431934\") " pod="openshift-image-registry/node-ca-td7ll" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.578114 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/facb5d72-dda5-4cd9-8af8-168994431934-serviceca\") pod \"node-ca-td7ll\" (UID: \"facb5d72-dda5-4cd9-8af8-168994431934\") " pod="openshift-image-registry/node-ca-td7ll" Sep 29 09:38:08 crc kubenswrapper[4991]: E0929 09:38:08.578173 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:38:12.578119463 +0000 UTC m=+28.434047491 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.578371 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl5n2\" (UniqueName: \"kubernetes.io/projected/facb5d72-dda5-4cd9-8af8-168994431934-kube-api-access-zl5n2\") pod \"node-ca-td7ll\" (UID: \"facb5d72-dda5-4cd9-8af8-168994431934\") " pod="openshift-image-registry/node-ca-td7ll" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.580582 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.595816 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.609728 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.623844 4991 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.635094 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.651183 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.665146 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.679566 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:08 
crc kubenswrapper[4991]: I0929 09:38:08.679613 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/facb5d72-dda5-4cd9-8af8-168994431934-serviceca\") pod \"node-ca-td7ll\" (UID: \"facb5d72-dda5-4cd9-8af8-168994431934\") " pod="openshift-image-registry/node-ca-td7ll" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.679636 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.679666 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.679690 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl5n2\" (UniqueName: \"kubernetes.io/projected/facb5d72-dda5-4cd9-8af8-168994431934-kube-api-access-zl5n2\") pod \"node-ca-td7ll\" (UID: \"facb5d72-dda5-4cd9-8af8-168994431934\") " pod="openshift-image-registry/node-ca-td7ll" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.679709 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.679731 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/facb5d72-dda5-4cd9-8af8-168994431934-host\") pod \"node-ca-td7ll\" (UID: \"facb5d72-dda5-4cd9-8af8-168994431934\") " pod="openshift-image-registry/node-ca-td7ll" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.679796 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/facb5d72-dda5-4cd9-8af8-168994431934-host\") pod \"node-ca-td7ll\" (UID: \"facb5d72-dda5-4cd9-8af8-168994431934\") " pod="openshift-image-registry/node-ca-td7ll" Sep 29 09:38:08 crc kubenswrapper[4991]: E0929 09:38:08.679892 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 29 09:38:08 crc kubenswrapper[4991]: E0929 09:38:08.679939 4991 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 29 09:38:08 crc kubenswrapper[4991]: E0929 09:38:08.680006 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 29 09:38:08 crc kubenswrapper[4991]: E0929 09:38:08.680051 4991 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 29 09:38:08 crc kubenswrapper[4991]: E0929 09:38:08.679962 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 29 09:38:08 crc kubenswrapper[4991]: E0929 09:38:08.680091 4991 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:08 crc kubenswrapper[4991]: E0929 09:38:08.680031 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:12.680014796 +0000 UTC m=+28.535942824 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 29 09:38:08 crc kubenswrapper[4991]: E0929 09:38:08.680066 4991 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:08 crc kubenswrapper[4991]: E0929 09:38:08.680234 4991 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 29 09:38:08 crc kubenswrapper[4991]: E0929 09:38:08.680201 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:12.6801749 +0000 UTC m=+28.536102928 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:08 crc kubenswrapper[4991]: E0929 09:38:08.680310 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:12.680296713 +0000 UTC m=+28.536224781 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 29 09:38:08 crc kubenswrapper[4991]: E0929 09:38:08.680325 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:12.680318623 +0000 UTC m=+28.536246651 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.680534 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-
29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.681582 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/facb5d72-dda5-4cd9-8af8-168994431934-serviceca\") pod \"node-ca-td7ll\" (UID: \"facb5d72-dda5-4cd9-8af8-168994431934\") " pod="openshift-image-registry/node-ca-td7ll" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.695471 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.700520 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl5n2\" (UniqueName: \"kubernetes.io/projected/facb5d72-dda5-4cd9-8af8-168994431934-kube-api-access-zl5n2\") pod \"node-ca-td7ll\" (UID: \"facb5d72-dda5-4cd9-8af8-168994431934\") " pod="openshift-image-registry/node-ca-td7ll" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.710580 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:08Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.797702 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-td7ll" Sep 29 09:38:08 crc kubenswrapper[4991]: W0929 09:38:08.817224 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfacb5d72_dda5_4cd9_8af8_168994431934.slice/crio-f2347c5d8c0c65037dcbca5dac4853dd624a9657cc5d9a2bd678845be324a0c4 WatchSource:0}: Error finding container f2347c5d8c0c65037dcbca5dac4853dd624a9657cc5d9a2bd678845be324a0c4: Status 404 returned error can't find the container with id f2347c5d8c0c65037dcbca5dac4853dd624a9657cc5d9a2bd678845be324a0c4 Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.925978 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.925978 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:08 crc kubenswrapper[4991]: E0929 09:38:08.926203 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:38:08 crc kubenswrapper[4991]: E0929 09:38:08.926282 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:38:08 crc kubenswrapper[4991]: I0929 09:38:08.926294 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:08 crc kubenswrapper[4991]: E0929 09:38:08.926526 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.134261 4991 generic.go:334] "Generic (PLEG): container finished" podID="2b1e80be-d74b-4948-8121-f1ee76bf415c" containerID="389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0" exitCode=0 Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.134323 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" event={"ID":"2b1e80be-d74b-4948-8121-f1ee76bf415c","Type":"ContainerDied","Data":"389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0"} Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.139797 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-td7ll" event={"ID":"facb5d72-dda5-4cd9-8af8-168994431934","Type":"ContainerStarted","Data":"56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17"} Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.139883 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-td7ll" event={"ID":"facb5d72-dda5-4cd9-8af8-168994431934","Type":"ContainerStarted","Data":"f2347c5d8c0c65037dcbca5dac4853dd624a9657cc5d9a2bd678845be324a0c4"} Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.153201 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.165084 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.178760 4991 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.192673 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.209089 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.223622 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.239045 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.254032 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.274072 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.294979 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z 
is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.308549 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.321605 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.333932 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.350097 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.363523 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.379689 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.395427 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.408251 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.423406 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.436509 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.462231 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.476641 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.495084 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.523779 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.546813 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.568928 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.865789 4991 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.870619 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.878237 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.949177 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.963715 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.977212 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:09 crc kubenswrapper[4991]: I0929 09:38:09.991526 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:09Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.005686 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.019838 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.033065 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.052656 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.066200 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.081624 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.095268 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.107922 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.122803 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.137188 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.145514 4991 generic.go:334] "Generic (PLEG): container finished" podID="2b1e80be-d74b-4948-8121-f1ee76bf415c" containerID="df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400" exitCode=0 Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.145570 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-mwncr" event={"ID":"2b1e80be-d74b-4948-8121-f1ee76bf415c","Type":"ContainerDied","Data":"df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400"} Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.150053 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerStarted","Data":"210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7"} Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.152612 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.165253 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.188645 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.205283 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.218488 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.233636 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.248095 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
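The CrashLoopBackOff message above ("back-off 10s restarting failed container=kube-apiserver-check-endpoints") reflects the kubelet's container restart backoff. A sketch of that schedule, assuming the upstream defaults of a 10s initial delay doubling per crash up to a 5-minute cap (values are assumptions, not read from this node):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed upstream kubelet defaults: 10s initial backoff,
	// doubled after each crash, capped at 5 minutes.
	backoff := 10 * time.Second
	const maxBackoff = 5 * time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}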
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.261610 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec
2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.277275 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni
-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.290001 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.307817 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.320080 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.348523 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.365873 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.379508 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.396397 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.412480 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
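One way to triage a flood like this is to scan the journal for the status_manager failures and count distinct pods; the same handful of pods repeats here as the kubelet retries. The regex mirrors the pod="namespace/name" field visible in every entry above; the input path is a hypothetical export of this journal:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`Failed to update status for pod" pod="([^"]+)"`)
	seen := map[string]int{}

	f, err := os.Open("/tmp/kubelet.journal") // hypothetical export of this log
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Entries here run to tens of kilobytes, so grow the scan buffer.
	sc.Buffer(make([]byte, 0, 64*1024), 16*1024*1024)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			seen[m[1]]++
		}
	}
	for pod, n := range seen {
		fmt.Printf("%-70s %d failures\n", pod, n)
	}
}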
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.426713 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec
2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.440835 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni
-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.454501 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.493642 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.530377 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.561092 4991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.564424 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.564473 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.564487 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.564643 4991 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.575521 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.603120 4991 kubelet_node_status.go:115] "Node was previously registered" node="crc" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.603514 4991 kubelet_node_status.go:79] "Successfully registered node" node="crc" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.605115 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.605202 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.605220 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.605251 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.605267 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:10Z","lastTransitionTime":"2025-09-29T09:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:10 crc kubenswrapper[4991]: E0929 09:38:10.618903 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.622261 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.622306 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.622322 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.622341 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.622354 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:10Z","lastTransitionTime":"2025-09-29T09:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:10 crc kubenswrapper[4991]: E0929 09:38:10.634964 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.638560 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.638604 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.638616 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.638633 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.638970 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:10Z","lastTransitionTime":"2025-09-29T09:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:10 crc kubenswrapper[4991]: E0929 09:38:10.651833 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.655780 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.655809 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.655820 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.655838 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.655850 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:10Z","lastTransitionTime":"2025-09-29T09:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:10 crc kubenswrapper[4991]: E0929 09:38:10.668175 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.672648 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.672701 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.672743 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.672764 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.672775 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:10Z","lastTransitionTime":"2025-09-29T09:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:10 crc kubenswrapper[4991]: E0929 09:38:10.687811 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:10Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:10 crc kubenswrapper[4991]: E0929 09:38:10.687936 4991 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.692206 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.692258 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.692272 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.692297 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.692312 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:10Z","lastTransitionTime":"2025-09-29T09:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.796128 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.796179 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.796192 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.796223 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.796233 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:10Z","lastTransitionTime":"2025-09-29T09:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.897677 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.898612 4991 scope.go:117] "RemoveContainer" containerID="f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684" Sep 29 09:38:10 crc kubenswrapper[4991]: E0929 09:38:10.898829 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.899431 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.899463 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.899504 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.899521 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.899536 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:10Z","lastTransitionTime":"2025-09-29T09:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.926009 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.926009 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:10 crc kubenswrapper[4991]: I0929 09:38:10.926129 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:10 crc kubenswrapper[4991]: E0929 09:38:10.926179 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:38:10 crc kubenswrapper[4991]: E0929 09:38:10.926286 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:38:10 crc kubenswrapper[4991]: E0929 09:38:10.926389 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.004030 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.004081 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.004096 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.004130 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.004143 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:11Z","lastTransitionTime":"2025-09-29T09:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.107065 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.107132 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.107143 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.107161 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.107174 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:11Z","lastTransitionTime":"2025-09-29T09:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.157717 4991 generic.go:334] "Generic (PLEG): container finished" podID="2b1e80be-d74b-4948-8121-f1ee76bf415c" containerID="a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1" exitCode=0 Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.157786 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" event={"ID":"2b1e80be-d74b-4948-8121-f1ee76bf415c","Type":"ContainerDied","Data":"a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1"} Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.176342 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:11Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.192864 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:11Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.204068 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:11Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.210937 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.211029 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.211041 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.211067 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.211079 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:11Z","lastTransitionTime":"2025-09-29T09:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.229933 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:11Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.244238 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:11Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.261786 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-api
server-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:11Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.274859 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:11Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.288675 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec
2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:11Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.305432 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni
-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:11Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.313606 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.313652 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.313665 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.313685 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.313701 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:11Z","lastTransitionTime":"2025-09-29T09:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.317556 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:11Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.333031 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:11Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.356368 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:11Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.389285 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:11Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.417555 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.417602 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.417614 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.417635 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.417648 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:11Z","lastTransitionTime":"2025-09-29T09:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.424339 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcx
qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:11Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.521171 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.521224 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.521247 4991 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.521266 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.521277 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:11Z","lastTransitionTime":"2025-09-29T09:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.623772 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.623819 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.623831 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.623846 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.623859 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:11Z","lastTransitionTime":"2025-09-29T09:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.727698 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.727756 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.727769 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.727788 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.727801 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:11Z","lastTransitionTime":"2025-09-29T09:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.831495 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.831574 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.831599 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.831631 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.831651 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:11Z","lastTransitionTime":"2025-09-29T09:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.934674 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.934751 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.934774 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.934810 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:11 crc kubenswrapper[4991]: I0929 09:38:11.934834 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:11Z","lastTransitionTime":"2025-09-29T09:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.037772 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.037819 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.037829 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.037850 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.037863 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:12Z","lastTransitionTime":"2025-09-29T09:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.141344 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.141403 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.141414 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.141431 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.141443 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:12Z","lastTransitionTime":"2025-09-29T09:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.172213 4991 generic.go:334] "Generic (PLEG): container finished" podID="2b1e80be-d74b-4948-8121-f1ee76bf415c" containerID="1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f" exitCode=0 Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.172273 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" event={"ID":"2b1e80be-d74b-4948-8121-f1ee76bf415c","Type":"ContainerDied","Data":"1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f"} Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.200097 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:12Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.220528 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:12Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.244197 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.244242 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.244252 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.244281 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.244295 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:12Z","lastTransitionTime":"2025-09-29T09:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.250280 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:12Z 
is after 2025-08-24T17:21:41Z" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.266162 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:12Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.286874 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:12Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.301809 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:12Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.317228 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec
2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:12Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.337035 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni
-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:12Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.349578 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.349632 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.349645 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.349664 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.349677 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:12Z","lastTransitionTime":"2025-09-29T09:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.350433 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:12Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.371590 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:12Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.390818 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:12Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.406396 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:12Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.420516 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:12Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.437927 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:12Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.452693 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.452728 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.452736 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.452753 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.452763 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:12Z","lastTransitionTime":"2025-09-29T09:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.554957 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.554992 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.555000 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.555016 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.555027 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:12Z","lastTransitionTime":"2025-09-29T09:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.621848 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:38:12 crc kubenswrapper[4991]: E0929 09:38:12.622095 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:38:20.622060758 +0000 UTC m=+36.477988826 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.658995 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.659041 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.659056 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.659086 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.659102 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:12Z","lastTransitionTime":"2025-09-29T09:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.723208 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.723289 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.723350 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.723408 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:12 crc kubenswrapper[4991]: E0929 09:38:12.723510 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 29 09:38:12 crc kubenswrapper[4991]: E0929 09:38:12.723571 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 29 09:38:12 crc kubenswrapper[4991]: E0929 09:38:12.723599 4991 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:12 crc kubenswrapper[4991]: E0929 09:38:12.723622 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 29 09:38:12 crc kubenswrapper[4991]: E0929 09:38:12.723659 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 29 09:38:12 crc kubenswrapper[4991]: E0929 09:38:12.723682 4991 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:12 crc kubenswrapper[4991]: E0929 09:38:12.723694 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:20.723662764 +0000 UTC m=+36.579590822 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:12 crc kubenswrapper[4991]: E0929 09:38:12.723511 4991 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 29 09:38:12 crc kubenswrapper[4991]: E0929 09:38:12.723741 4991 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 29 09:38:12 crc kubenswrapper[4991]: E0929 09:38:12.723812 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:20.723730826 +0000 UTC m=+36.579658894 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:12 crc kubenswrapper[4991]: E0929 09:38:12.723973 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:20.7238898 +0000 UTC m=+36.579817878 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 29 09:38:12 crc kubenswrapper[4991]: E0929 09:38:12.724036 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:20.724020783 +0000 UTC m=+36.579948851 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.762807 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.762876 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.762895 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.762927 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.762981 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:12Z","lastTransitionTime":"2025-09-29T09:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.867376 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.867466 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.867488 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.867518 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.867540 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:12Z","lastTransitionTime":"2025-09-29T09:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.926364 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.926470 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.926528 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:12 crc kubenswrapper[4991]: E0929 09:38:12.926594 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:38:12 crc kubenswrapper[4991]: E0929 09:38:12.926689 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:38:12 crc kubenswrapper[4991]: E0929 09:38:12.926850 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.971659 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.971735 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.971758 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.971791 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:12 crc kubenswrapper[4991]: I0929 09:38:12.971816 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:12Z","lastTransitionTime":"2025-09-29T09:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.076492 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.076567 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.076586 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.076619 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.076637 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:13Z","lastTransitionTime":"2025-09-29T09:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.181054 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.181156 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.181184 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.181230 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.181257 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:13Z","lastTransitionTime":"2025-09-29T09:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.187406 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerStarted","Data":"c7451ea38eb21412bd604180bf4f9956f24ebc96f76f38556f2c7ec92b9807b9"} Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.187815 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.192278 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" event={"ID":"2b1e80be-d74b-4948-8121-f1ee76bf415c","Type":"ContainerStarted","Data":"bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311"} Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.214871 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.218562 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.228123 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.243617 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.272238 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.284851 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.284897 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.284907 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.284927 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.284941 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:13Z","lastTransitionTime":"2025-09-29T09:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.291001 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.308588 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.321382 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.344744 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7451ea38eb21412bd604180bf4f9956f24ebc96
f76f38556f2c7ec92b9807b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.368027 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.385703 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.387395 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.387494 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.387521 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.387563 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.387589 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:13Z","lastTransitionTime":"2025-09-29T09:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.402260 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.416642 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.432390 4991 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.444346 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.460392 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.474037 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec
2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.490577 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.490622 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.490633 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.490650 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.490664 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:13Z","lastTransitionTime":"2025-09-29T09:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.491834 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.504648 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.520012 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.533220 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.547638 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.560519 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.573362 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.585513 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.593262 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.593359 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.593383 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.593411 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.593430 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:13Z","lastTransitionTime":"2025-09-29T09:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.600633 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.624779 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7451ea38eb21412bd604180bf4f9956f24ebc96f76f38556f2c7ec92b9807b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.638775 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.656232 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:13Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.696569 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.696660 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.696676 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.696704 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.696724 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:13Z","lastTransitionTime":"2025-09-29T09:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.800069 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.800132 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.800145 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.800172 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.800186 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:13Z","lastTransitionTime":"2025-09-29T09:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.903169 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.903251 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.903283 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.903313 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:13 crc kubenswrapper[4991]: I0929 09:38:13.903333 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:13Z","lastTransitionTime":"2025-09-29T09:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.006405 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.006464 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.006487 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.006515 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.006530 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:14Z","lastTransitionTime":"2025-09-29T09:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.109359 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.109406 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.109419 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.109435 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.109445 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:14Z","lastTransitionTime":"2025-09-29T09:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.195764 4991 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.196608 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.217728 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.217804 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.217824 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.217852 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.217874 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:14Z","lastTransitionTime":"2025-09-29T09:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.238639 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.257717 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s 
restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.275311 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.291978 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec
2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.314051 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni
-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.320902 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.321169 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.321273 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.321364 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.321468 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:14Z","lastTransitionTime":"2025-09-29T09:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.327902 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.346277 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.363044 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.379463 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.394881 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.408282 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.420684 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.424406 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.424451 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.424466 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.424489 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.424505 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:14Z","lastTransitionTime":"2025-09-29T09:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.434528 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.458889 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7451ea38eb21412bd604180bf4f9956f24ebc96f76f38556f2c7ec92b9807b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.474409 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.527767 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.527811 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.527821 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.527838 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.527849 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:14Z","lastTransitionTime":"2025-09-29T09:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.630406 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.630473 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.630488 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.630513 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.630527 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:14Z","lastTransitionTime":"2025-09-29T09:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.733843 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.733876 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.733885 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.733899 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.733909 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:14Z","lastTransitionTime":"2025-09-29T09:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.836722 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.836777 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.836791 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.836811 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.836827 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:14Z","lastTransitionTime":"2025-09-29T09:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.925706 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.925841 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.925893 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:14 crc kubenswrapper[4991]: E0929 09:38:14.925983 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:38:14 crc kubenswrapper[4991]: E0929 09:38:14.926040 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:38:14 crc kubenswrapper[4991]: E0929 09:38:14.926130 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.939340 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.939383 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.939392 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.939410 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.939422 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:14Z","lastTransitionTime":"2025-09-29T09:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.943880 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.958617 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.974830 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:14 crc kubenswrapper[4991]: I0929 09:38:14.993701 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11
Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.023537 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7451ea38eb21412bd604180bf4f9956f24ebc96
f76f38556f2c7ec92b9807b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:15Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.042291 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:15Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.043115 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.043175 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.043191 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.043215 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.043228 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:15Z","lastTransitionTime":"2025-09-29T09:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.057277 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:15Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.072112 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:15Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.090767 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:15Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.103908 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:15Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.118430 4991 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:15Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.129857 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:15Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.144975 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:15Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.146549 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.146590 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.146605 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.146627 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.146642 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:15Z","lastTransitionTime":"2025-09-29T09:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.161327 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:15Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.199150 4991 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.249782 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.249844 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.249857 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.249877 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.249891 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:15Z","lastTransitionTime":"2025-09-29T09:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.352856 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.352888 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.352898 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.352924 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.352934 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:15Z","lastTransitionTime":"2025-09-29T09:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.456872 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.456992 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.457033 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.457072 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.457097 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:15Z","lastTransitionTime":"2025-09-29T09:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.560182 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.560233 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.560246 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.560265 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.560278 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:15Z","lastTransitionTime":"2025-09-29T09:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.664578 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.664650 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.664670 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.664707 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.664731 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:15Z","lastTransitionTime":"2025-09-29T09:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.768336 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.768413 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.768428 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.768463 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.768477 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:15Z","lastTransitionTime":"2025-09-29T09:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.872345 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.872424 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.872444 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.872469 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.872487 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:15Z","lastTransitionTime":"2025-09-29T09:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.976280 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.976352 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.976369 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.976414 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:15 crc kubenswrapper[4991]: I0929 09:38:15.976450 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:15Z","lastTransitionTime":"2025-09-29T09:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.079679 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.079736 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.079748 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.079769 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.079782 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:16Z","lastTransitionTime":"2025-09-29T09:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.182883 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.182940 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.182975 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.182995 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.183008 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:16Z","lastTransitionTime":"2025-09-29T09:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.203453 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovnkube-controller/0.log" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.206146 4991 generic.go:334] "Generic (PLEG): container finished" podID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerID="c7451ea38eb21412bd604180bf4f9956f24ebc96f76f38556f2c7ec92b9807b9" exitCode=1 Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.206208 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerDied","Data":"c7451ea38eb21412bd604180bf4f9956f24ebc96f76f38556f2c7ec92b9807b9"} Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.207156 4991 scope.go:117] "RemoveContainer" containerID="c7451ea38eb21412bd604180bf4f9956f24ebc96f76f38556f2c7ec92b9807b9" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.221969 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:16Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.243488 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:16Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.256880 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:16Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.281694 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7451ea38eb21412bd604180bf4f9956f24ebc96f76f38556f2c7ec92b9807b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7451ea38eb21412bd604180bf4f9956f24ebc96f76f38556f2c7ec92b9807b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:15Z\\\",\\\"message\\\":\\\" 09:38:15.165408 6258 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0929 09:38:15.165415 6258 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0929 09:38:15.165452 6258 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:15.165480 6258 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0929 09:38:15.165497 6258 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0929 09:38:15.165503 6258 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0929 09:38:15.165520 6258 factory.go:656] Stopping watch factory\\\\nI0929 09:38:15.165541 6258 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0929 09:38:15.165590 6258 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:15.165593 6258 handler.go:208] Removed *v1.Node event handler 2\\\\nI0929 09:38:15.165601 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:15.165608 6258 handler.go:208] Removed *v1.Node event handler 7\\\\nI0929 09:38:15.165611 6258 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:15.165619 6258 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0929 09:38:15.165621 6258 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:15.165626 6258 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:16Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.285614 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.285685 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.285712 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.285744 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.285764 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:16Z","lastTransitionTime":"2025-09-29T09:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.300339 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:16Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.315817 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:16Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.330574 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:16Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.349430 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:16Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.368328 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:16Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.383494 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec
2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:16Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.388900 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.388968 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.388985 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.389011 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.389032 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:16Z","lastTransitionTime":"2025-09-29T09:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.399384 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:16Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.415889 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:16Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.430488 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:16Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.444853 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11
Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:16Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.492434 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.492484 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.492494 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.492516 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.492527 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:16Z","lastTransitionTime":"2025-09-29T09:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.595972 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.596049 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.596073 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.596107 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.596120 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:16Z","lastTransitionTime":"2025-09-29T09:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.698690 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.698758 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.698773 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.698793 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.698806 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:16Z","lastTransitionTime":"2025-09-29T09:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.801974 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.802027 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.802037 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.802062 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.802074 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:16Z","lastTransitionTime":"2025-09-29T09:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.905367 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.905439 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.905450 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.905479 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.905492 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:16Z","lastTransitionTime":"2025-09-29T09:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.925748 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.925870 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:16 crc kubenswrapper[4991]: E0929 09:38:16.925917 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:38:16 crc kubenswrapper[4991]: E0929 09:38:16.926104 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:38:16 crc kubenswrapper[4991]: I0929 09:38:16.926179 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:16 crc kubenswrapper[4991]: E0929 09:38:16.926236 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.008724 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.008772 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.008784 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.008804 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.008818 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:17Z","lastTransitionTime":"2025-09-29T09:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.111308 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.111344 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.111354 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.111371 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.111383 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:17Z","lastTransitionTime":"2025-09-29T09:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.213865 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.213970 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.213987 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.214012 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.214028 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:17Z","lastTransitionTime":"2025-09-29T09:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.215076 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovnkube-controller/0.log" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.218895 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerStarted","Data":"85f5e3ef3cf46e68749f9b4313d53589ff897753a33335fce649ad98bcd9618a"} Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.219061 4991 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.233847 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:17Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.254216 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:17Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.276381 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:17Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.294895 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:17Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.323853 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.323923 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.323941 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.323820 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f5e3ef3cf46e68749f9b4313d53589ff897753
a33335fce649ad98bcd9618a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7451ea38eb21412bd604180bf4f9956f24ebc96f76f38556f2c7ec92b9807b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:15Z\\\",\\\"message\\\":\\\" 09:38:15.165408 6258 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0929 09:38:15.165415 6258 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0929 09:38:15.165452 6258 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:15.165480 6258 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0929 09:38:15.165497 6258 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0929 09:38:15.165503 6258 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0929 09:38:15.165520 6258 factory.go:656] Stopping watch factory\\\\nI0929 09:38:15.165541 6258 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0929 09:38:15.165590 6258 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:15.165593 6258 handler.go:208] Removed *v1.Node event handler 2\\\\nI0929 09:38:15.165601 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:15.165608 6258 handler.go:208] Removed *v1.Node event handler 7\\\\nI0929 09:38:15.165611 6258 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:15.165619 6258 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0929 09:38:15.165621 6258 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:15.165626 6258 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:17Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.324014 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.324036 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:17Z","lastTransitionTime":"2025-09-29T09:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.339402 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:17Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.356365 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:17Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.368710 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:17Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.385977 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:17Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.398899 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:17Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.414652 4991 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:17Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.427193 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.427260 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.427277 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.427307 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.427324 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:17Z","lastTransitionTime":"2025-09-29T09:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.427851 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:17Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.442289 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:17Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.455743 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:17Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.530614 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.530680 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.530690 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.530711 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
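Every one of the status-patch failures above shares a single root cause: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-09-29. A minimal sketch for confirming the validity window from the node, assuming Python 3 with the third-party cryptography package is available (only the host and port are taken from the log):

```python
# Sketch: print the validity window of the certificate served by the
# network-node-identity webhook endpoint referenced in the log above.
# Assumptions (not prescribed by the log): Python 3 on the node, plus the
# third-party `cryptography` package (>= 42 for the *_utc properties).
# Host and port come from the webhook URL https://127.0.0.1:9743/pod.
import datetime
import socket
import ssl

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # inspect the certificate without trusting it

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)  # DER bytes even with CERT_NONE

cert = x509.load_der_x509_certificate(der)
now = datetime.datetime.now(datetime.timezone.utc)
print("notBefore:", cert.not_valid_before_utc)
print("notAfter: ", cert.not_valid_after_utc)
print("expired:  ", now > cert.not_valid_after_utc)
```

Against the endpoint above this would report notAfter 2025-08-24 17:21:41+00:00 and expired True, matching the x509 error text in every rejected patch.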
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.530722 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:17Z","lastTransitionTime":"2025-09-29T09:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.634147 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.634205 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.634216 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.634240 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.634258 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:17Z","lastTransitionTime":"2025-09-29T09:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.738113 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.738162 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.738179 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.738384 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.738396 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:17Z","lastTransitionTime":"2025-09-29T09:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.843036 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.843129 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.843155 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.843190 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.843396 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:17Z","lastTransitionTime":"2025-09-29T09:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.947379 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.947462 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.947485 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.947513 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:17 crc kubenswrapper[4991]: I0929 09:38:17.947539 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:17Z","lastTransitionTime":"2025-09-29T09:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.051089 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.051176 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.051198 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.051227 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.051245 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:18Z","lastTransitionTime":"2025-09-29T09:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.153395 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.153437 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.153454 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.153475 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.153489 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:18Z","lastTransitionTime":"2025-09-29T09:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
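The kubelet republishes the same NotReady condition roughly every 100 ms because /etc/kubernetes/cni/net.d/ still contains no CNI configuration. The condition={...} payload in each setters.go line is plain JSON, so the flapping can be summarized from a saved copy of this journal; a minimal sketch, assuming a hypothetical file path and a regex that simply mirrors the line format above:

```python
# Sketch: extract the "Node became not ready" conditions logged by setters.go
# above. The journal path is hypothetical; the regex mirrors the log format.
import json
import re

PATTERN = re.compile(r'"Node became not ready" node="([^"]+)" condition=(\{.*\})')

with open("kubelet-journal.log", encoding="utf-8") as fh:
    for line in fh:
        m = PATTERN.search(line)
        if not m:
            continue
        node, cond = m.group(1), json.loads(m.group(2))
        print(node, cond["lastHeartbeatTime"], cond["reason"], cond["status"])
```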
Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.225163 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovnkube-controller/1.log"
Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.225674 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovnkube-controller/0.log"
Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.227680 4991 generic.go:334] "Generic (PLEG): container finished" podID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerID="85f5e3ef3cf46e68749f9b4313d53589ff897753a33335fce649ad98bcd9618a" exitCode=1
Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.227727 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerDied","Data":"85f5e3ef3cf46e68749f9b4313d53589ff897753a33335fce649ad98bcd9618a"}
Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.227771 4991 scope.go:117] "RemoveContainer" containerID="c7451ea38eb21412bd604180bf4f9956f24ebc96f76f38556f2c7ec92b9807b9"
Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.228630 4991 scope.go:117] "RemoveContainer" containerID="85f5e3ef3cf46e68749f9b4313d53589ff897753a33335fce649ad98bcd9618a"
Sep 29 09:38:18 crc kubenswrapper[4991]: E0929 09:38:18.228826 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5"
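Three failure signatures recur through this window: status patches rejected because the webhook certificate has expired, the node held NotReady for lack of a CNI configuration, and ovnkube-controller (like kube-apiserver-check-endpoints earlier) in CrashLoopBackOff. A quick tally of how often each appears in a saved journal, again with a hypothetical path, and patterns quoted from the log text itself:

```python
# Sketch: count the recurring failure signatures in a saved copy of this
# journal. The path is hypothetical; the patterns quote the log text above.
import collections
import re

SIGNATURES = {
    "webhook-cert-expired": re.compile(r"x509: certificate has expired"),
    "cni-config-missing": re.compile(r"no CNI configuration file"),
    "crash-loop-backoff": re.compile(r"CrashLoopBackOff"),
}

counts = collections.Counter()
with open("kubelet-journal.log", encoding="utf-8") as fh:
    for line in fh:
        for name, pattern in SIGNATURES.items():
            if pattern.search(line):
                counts[name] += 1

for name, n in counts.most_common():
    print(f"{name}: {n}")
```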
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.256491 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.256525 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.256534 4991 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.256554 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.256564 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:18Z","lastTransitionTime":"2025-09-29T09:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.270166 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.285275 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.301562 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.315857 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.330488 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.340755 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr"] Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.341571 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.343620 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.343631 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.353301 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f5e3ef3cf46e68749f9b4313d53589ff897753
a33335fce649ad98bcd9618a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7451ea38eb21412bd604180bf4f9956f24ebc96f76f38556f2c7ec92b9807b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:15Z\\\",\\\"message\\\":\\\" 09:38:15.165408 6258 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0929 09:38:15.165415 6258 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0929 09:38:15.165452 6258 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:15.165480 6258 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0929 09:38:15.165497 6258 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0929 09:38:15.165503 6258 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0929 09:38:15.165520 6258 factory.go:656] Stopping watch factory\\\\nI0929 09:38:15.165541 6258 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0929 09:38:15.165590 6258 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:15.165593 6258 handler.go:208] Removed *v1.Node event handler 2\\\\nI0929 09:38:15.165601 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:15.165608 6258 handler.go:208] Removed *v1.Node event handler 7\\\\nI0929 09:38:15.165611 6258 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:15.165619 6258 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0929 09:38:15.165621 6258 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:15.165626 6258 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f5e3ef3cf46e68749f9b4313d53589ff897753a33335fce649ad98bcd9618a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:17Z\\\",\\\"message\\\":\\\"ler 8 for removal\\\\nI0929 09:38:17.313879 6400 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:17.313891 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:17.313927 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:17.313931 6400 factory.go:656] Stopping watch factory\\\\nI0929 09:38:17.313976 6400 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:17.314004 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI0929 09:38:17.314020 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:17.314029 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI0929 09:38:17.314057 6400 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0929 09:38:17.314192 6400 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0929 09:38:17.314283 6400 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0929 09:38:17.314301 6400 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\
":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.359877 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.359922 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.359933 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.359971 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.359987 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:18Z","lastTransitionTime":"2025-09-29T09:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.371923 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.389549 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.412261 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.423288 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6lv9\" (UniqueName: \"kubernetes.io/projected/67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731-kube-api-access-v6lv9\") pod \"ovnkube-control-plane-749d76644c-fffvr\" (UID: \"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.423411 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-fffvr\" (UID: \"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.423474 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731-env-overrides\") pod \"ovnkube-control-plane-749d76644c-fffvr\" (UID: \"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.423509 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-fffvr\" (UID: \"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.425637 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.442397 4991 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.459029 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.463307 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.463397 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.463412 4991 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.463435 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.463449 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:18Z","lastTransitionTime":"2025-09-29T09:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.479609 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.496684 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.513080 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-fffvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.524681 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-fffvr\" (UID: \"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.524788 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6lv9\" (UniqueName: \"kubernetes.io/projected/67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731-kube-api-access-v6lv9\") pod \"ovnkube-control-plane-749d76644c-fffvr\" (UID: \"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.524853 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-fffvr\" (UID: \"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.524883 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731-env-overrides\") pod \"ovnkube-control-plane-749d76644c-fffvr\" (UID: \"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.525690 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731-env-overrides\") pod \"ovnkube-control-plane-749d76644c-fffvr\" (UID: \"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.526405 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-fffvr\" (UID: \"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.532392 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-fffvr\" (UID: \"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.534387 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.546628 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6lv9\" (UniqueName: \"kubernetes.io/projected/67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731-kube-api-access-v6lv9\") pod \"ovnkube-control-plane-749d76644c-fffvr\" (UID: \"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.551188 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.568037 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.568105 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.568119 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.568140 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.568155 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:18Z","lastTransitionTime":"2025-09-29T09:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.568859 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.586626 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.601487 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.616161 4991 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.631586 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.645903 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.654756 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.663348 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37d
bcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.671321 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.671357 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.671371 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.671391 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.671403 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:18Z","lastTransitionTime":"2025-09-29T09:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.678626 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.694350 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.709477 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.731245 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f5e3ef3cf46e68749f9b4313d53589ff897753
a33335fce649ad98bcd9618a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7451ea38eb21412bd604180bf4f9956f24ebc96f76f38556f2c7ec92b9807b9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:15Z\\\",\\\"message\\\":\\\" 09:38:15.165408 6258 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0929 09:38:15.165415 6258 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0929 09:38:15.165452 6258 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:15.165480 6258 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0929 09:38:15.165497 6258 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0929 09:38:15.165503 6258 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0929 09:38:15.165520 6258 factory.go:656] Stopping watch factory\\\\nI0929 09:38:15.165541 6258 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0929 09:38:15.165590 6258 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:15.165593 6258 handler.go:208] Removed *v1.Node event handler 2\\\\nI0929 09:38:15.165601 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:15.165608 6258 handler.go:208] Removed *v1.Node event handler 7\\\\nI0929 09:38:15.165611 6258 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:15.165619 6258 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0929 09:38:15.165621 6258 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:15.165626 6258 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f5e3ef3cf46e68749f9b4313d53589ff897753a33335fce649ad98bcd9618a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:17Z\\\",\\\"message\\\":\\\"ler 8 for removal\\\\nI0929 09:38:17.313879 6400 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:17.313891 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:17.313927 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:17.313931 6400 factory.go:656] Stopping watch factory\\\\nI0929 09:38:17.313976 6400 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:17.314004 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI0929 09:38:17.314020 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:17.314029 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI0929 09:38:17.314057 6400 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0929 09:38:17.314192 6400 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0929 09:38:17.314283 6400 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0929 09:38:17.314301 6400 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\
":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:18Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.775403 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.775446 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.775457 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.775474 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.775499 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:18Z","lastTransitionTime":"2025-09-29T09:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.878376 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.878421 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.878432 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.878449 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.878461 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:18Z","lastTransitionTime":"2025-09-29T09:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.926448 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.926515 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:18 crc kubenswrapper[4991]: E0929 09:38:18.926668 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:38:18 crc kubenswrapper[4991]: E0929 09:38:18.926943 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.927162 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:18 crc kubenswrapper[4991]: E0929 09:38:18.927421 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.980971 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.981037 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.981059 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.981085 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:18 crc kubenswrapper[4991]: I0929 09:38:18.981099 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:18Z","lastTransitionTime":"2025-09-29T09:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.084767 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.084816 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.084828 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.084857 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.084872 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:19Z","lastTransitionTime":"2025-09-29T09:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.188151 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.188198 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.188210 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.188229 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.188241 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:19Z","lastTransitionTime":"2025-09-29T09:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.233423 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovnkube-controller/1.log" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.243294 4991 scope.go:117] "RemoveContainer" containerID="85f5e3ef3cf46e68749f9b4313d53589ff897753a33335fce649ad98bcd9618a" Sep 29 09:38:19 crc kubenswrapper[4991]: E0929 09:38:19.244558 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.246787 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" event={"ID":"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731","Type":"ContainerStarted","Data":"47ef368af019eb63f3b78fa8ac0205e0051494a013e5bd67e67f1c726d009d7f"} Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.246845 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" event={"ID":"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731","Type":"ContainerStarted","Data":"c0895c404a31d54d55723f0c4bd214d8135a29dfd98a9674b2aea4b429de882d"} Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.263089 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:19Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.278313 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:19Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.291553 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.291599 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.291611 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.291706 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.291723 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:19Z","lastTransitionTime":"2025-09-29T09:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.292848 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:19Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.307486 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-fffvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:19Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.325789 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:19Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.339650 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:19Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.353182 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec
2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:19Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.366342 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:19Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.378647 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:19Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.392327 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:19Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.394271 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.394328 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.394343 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.394365 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.394377 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:19Z","lastTransitionTime":"2025-09-29T09:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.410569 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:19Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.428860 4991 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:19Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.447039 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:19Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.472278 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:19Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.494198 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f5e3ef3cf46e68749f9b4313d53589ff897753
a33335fce649ad98bcd9618a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f5e3ef3cf46e68749f9b4313d53589ff897753a33335fce649ad98bcd9618a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:17Z\\\",\\\"message\\\":\\\"ler 8 for removal\\\\nI0929 09:38:17.313879 6400 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:17.313891 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:17.313927 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:17.313931 6400 factory.go:656] Stopping watch factory\\\\nI0929 09:38:17.313976 6400 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:17.314004 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI0929 09:38:17.314020 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:17.314029 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI0929 09:38:17.314057 6400 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0929 09:38:17.314192 6400 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0929 09:38:17.314283 6400 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0929 09:38:17.314301 6400 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:19Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.497325 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.497366 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.497381 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.497405 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.497420 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:19Z","lastTransitionTime":"2025-09-29T09:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.600739 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.600790 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.600803 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.600821 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.600832 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:19Z","lastTransitionTime":"2025-09-29T09:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.704185 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.704253 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.704267 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.704289 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.704304 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:19Z","lastTransitionTime":"2025-09-29T09:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.808633 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.808698 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.808717 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.808743 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.808763 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:19Z","lastTransitionTime":"2025-09-29T09:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.891474 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-7m5sp"] Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.892330 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:19 crc kubenswrapper[4991]: E0929 09:38:19.892447 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.909898 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:19Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.912326 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.912410 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.912431 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.912460 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.912480 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:19Z","lastTransitionTime":"2025-09-29T09:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.928330 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:19Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.936734 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzvhr\" (UniqueName: \"kubernetes.io/projected/48c35818-43cb-4bbf-bf05-37fd375d0d70-kube-api-access-hzvhr\") pod \"network-metrics-daemon-7m5sp\" (UID: \"48c35818-43cb-4bbf-bf05-37fd375d0d70\") " pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.936839 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs\") pod \"network-metrics-daemon-7m5sp\" (UID: \"48c35818-43cb-4bbf-bf05-37fd375d0d70\") " pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.942737 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192
.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:19Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.958597 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-fffvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:19Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.977086 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:19Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:19 crc kubenswrapper[4991]: I0929 09:38:19.993903 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:19Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.009610 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.015770 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.015834 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.015847 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.015869 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.015885 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:20Z","lastTransitionTime":"2025-09-29T09:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.026875 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.038120 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzvhr\" (UniqueName: \"kubernetes.io/projected/48c35818-43cb-4bbf-bf05-37fd375d0d70-kube-api-access-hzvhr\") pod \"network-metrics-daemon-7m5sp\" (UID: \"48c35818-43cb-4bbf-bf05-37fd375d0d70\") " pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.038184 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs\") pod \"network-metrics-daemon-7m5sp\" (UID: \"48c35818-43cb-4bbf-bf05-37fd375d0d70\") " pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.038354 4991 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.038438 4991 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs podName:48c35818-43cb-4bbf-bf05-37fd375d0d70 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:20.538412652 +0000 UTC m=+36.394340680 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs") pod "network-metrics-daemon-7m5sp" (UID: "48c35818-43cb-4bbf-bf05-37fd375d0d70") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.043656 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.058759 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzvhr\" (UniqueName: \"kubernetes.io/projected/48c35818-43cb-4bbf-bf05-37fd375d0d70-kube-api-access-hzvhr\") pod \"network-metrics-daemon-7m5sp\" (UID: \"48c35818-43cb-4bbf-bf05-37fd375d0d70\") " pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.062733 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\
\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2e
eade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.087387 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f5e3ef3cf46e68749f9b4313d53589ff897753
a33335fce649ad98bcd9618a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f5e3ef3cf46e68749f9b4313d53589ff897753a33335fce649ad98bcd9618a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:17Z\\\",\\\"message\\\":\\\"ler 8 for removal\\\\nI0929 09:38:17.313879 6400 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:17.313891 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:17.313927 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:17.313931 6400 factory.go:656] Stopping watch factory\\\\nI0929 09:38:17.313976 6400 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:17.314004 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI0929 09:38:17.314020 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:17.314029 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI0929 09:38:17.314057 6400 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0929 09:38:17.314192 6400 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0929 09:38:17.314283 6400 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0929 09:38:17.314301 6400 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.100570 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48c35818-43cb-4bbf-bf05-37fd375d0d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7m5sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.113267 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.118595 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.118676 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.118694 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.118718 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.118735 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:20Z","lastTransitionTime":"2025-09-29T09:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.128311 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.144260 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.160835 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.222624 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.222672 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.222683 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.222702 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.222716 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:20Z","lastTransitionTime":"2025-09-29T09:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.253675 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" event={"ID":"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731","Type":"ContainerStarted","Data":"7a1fc13ff2c90eccfa2771cae750083e1785c63404c15e6cfef790e068315064"} Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.274524 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.290467 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.307436 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.325421 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.326433 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.326494 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.326509 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.326539 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.326554 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:20Z","lastTransitionTime":"2025-09-29T09:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.341917 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48c35818-43cb-4bbf-bf05-37fd375d0d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7m5sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.356504 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.376724 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.390651 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.414003 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f5e3ef3cf46e68749f9b4313d53589ff897753a33335fce649ad98bcd9618a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f5e3ef3cf46e68749f9b4313d53589ff897753a33335fce649ad98bcd9618a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:17Z\\\",\\\"message\\\":\\\"ler 8 for removal\\\\nI0929 09:38:17.313879 6400 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:17.313891 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:17.313927 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:17.313931 6400 factory.go:656] Stopping watch factory\\\\nI0929 09:38:17.313976 6400 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:17.314004 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI0929 09:38:17.314020 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:17.314029 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI0929 09:38:17.314057 6400 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0929 09:38:17.314192 6400 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0929 09:38:17.314283 6400 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0929 09:38:17.314301 6400 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.426063 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.429468 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.429503 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.429513 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.429530 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.429541 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:20Z","lastTransitionTime":"2025-09-29T09:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.439426 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.449966 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.460329 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ef368af019eb63f3b78fa8ac0205e0051494a013e5bd67e67f1c726d009d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a1fc13ff2c90eccfa2771cae750083e1785c63404c15e6cfef790e068315064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-fffvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 
09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.473867 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.488183 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.499993 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec
2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.532269 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.532318 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.532330 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.532350 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.532363 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:20Z","lastTransitionTime":"2025-09-29T09:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.541851 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs\") pod \"network-metrics-daemon-7m5sp\" (UID: \"48c35818-43cb-4bbf-bf05-37fd375d0d70\") " pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.542051 4991 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.542129 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs podName:48c35818-43cb-4bbf-bf05-37fd375d0d70 nodeName:}" failed. 
No retries permitted until 2025-09-29 09:38:21.542108563 +0000 UTC m=+37.398036591 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs") pod "network-metrics-daemon-7m5sp" (UID: "48c35818-43cb-4bbf-bf05-37fd375d0d70") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.635939 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.636044 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.636062 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.636091 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.636110 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:20Z","lastTransitionTime":"2025-09-29T09:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.642350 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.642575 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:38:36.642519472 +0000 UTC m=+52.498447500 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.739969 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.740038 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.740054 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.740078 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.740094 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:20Z","lastTransitionTime":"2025-09-29T09:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.743713 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.743798 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.743842 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.743874 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.744060 4991 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.744084 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.744114 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.744110 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.744131 4991 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.744155 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.744184 4991 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.744218 4991 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.744186 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:36.744150319 +0000 UTC m=+52.600078357 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.744423 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:36.744387704 +0000 UTC m=+52.600315772 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.744449 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:36.744436196 +0000 UTC m=+52.600364254 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.744492 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:36.744470056 +0000 UTC m=+52.600398114 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.779516 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.779559 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.779568 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.779584 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.779597 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:20Z","lastTransitionTime":"2025-09-29T09:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.792639 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.797148 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.797202 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.797213 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.797236 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.797248 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:20Z","lastTransitionTime":"2025-09-29T09:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.810869 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.814885 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.814958 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
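Every one of these status patches is rejected by the node.network-node-identity.openshift.io webhook because its serving certificate expired on 2025-08-24, more than a month before the timestamps above. One way to confirm the validity window from the node itself (a diagnostic sketch, not from this log, assuming shell access on the crc host; the webhook listens on 127.0.0.1:9743 per the entries above):

    echo | openssl s_client -connect 127.0.0.1:9743 2>/dev/null | openssl x509 -noout -dates

Until that certificate is rotated, the kubelet_node_status retries will keep failing with this same x509 error.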
event="NodeHasNoDiskPressure" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.814970 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.814990 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.815000 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:20Z","lastTransitionTime":"2025-09-29T09:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.833699 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.838499 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.838575 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
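The NotReady condition itself comes from the CNI check: NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. Two quick checks (a sketch, assuming the cluster runs OVN-Kubernetes, which the network-node-identity webhook above suggests):

    ls /etc/kubernetes/cni/net.d/
    oc -n openshift-ovn-kubernetes get pods -o wide

The CNI config is normally written once the network provider's node pods come up, so an empty directory here usually just means the network plugin has not started yet.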
event="NodeHasNoDiskPressure" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.838596 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.838626 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.838646 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:20Z","lastTransitionTime":"2025-09-29T09:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.854689 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:20Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.858975 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.859015 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
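Because the webhook rejects every patch, the Ready/MemoryPressure/DiskPressure/PIDPressure conditions recorded locally never reach the API server. To see what the API server currently believes about the node (a sketch, assuming oc access; the output may be stale for exactly this reason):

    oc get node crc -o jsonpath='{range .status.conditions[*]}{.type}={.status} ({.reason}){"\n"}{end}'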
event="NodeHasNoDiskPressure" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.859029 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.859049 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.859061 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:20Z","lastTransitionTime":"2025-09-29T09:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.871754 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},
[... remainder of the retried status patch (image list, nodeInfo, and the "node.network-node-identity.openshift.io" webhook certificate-expiry error) is byte-for-byte identical to the first attempt above ...]
Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.872017 4991 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.874023 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.874073 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.874083 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.874100 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.874110 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:20Z","lastTransitionTime":"2025-09-29T09:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... this five-entry NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady / "Node became not ready" sequence repeats, timestamps advancing, at roughly 100 ms intervals through Sep 29 09:38:24.871; only the non-repeating entries interleaved with those cycles are kept below ...]
Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.925918 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.925938 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.926062 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:38:20 crc kubenswrapper[4991]: I0929 09:38:20.926097 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.926192 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:38:20 crc kubenswrapper[4991]: E0929 09:38:20.926435 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:38:21 crc kubenswrapper[4991]: I0929 09:38:21.551562 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs\") pod \"network-metrics-daemon-7m5sp\" (UID: \"48c35818-43cb-4bbf-bf05-37fd375d0d70\") " pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:38:21 crc kubenswrapper[4991]: E0929 09:38:21.551768 4991 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Sep 29 09:38:21 crc kubenswrapper[4991]: E0929 09:38:21.551850 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs podName:48c35818-43cb-4bbf-bf05-37fd375d0d70 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:23.551828201 +0000 UTC m=+39.407756269 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs") pod "network-metrics-daemon-7m5sp" (UID: "48c35818-43cb-4bbf-bf05-37fd375d0d70") : object "openshift-multus"/"metrics-daemon-secret" not registered
Sep 29 09:38:21 crc kubenswrapper[4991]: I0929 09:38:21.926164 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:38:21 crc kubenswrapper[4991]: E0929 09:38:21.926362 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
Sep 29 09:38:22 crc kubenswrapper[4991]: I0929 09:38:22.927265 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:38:22 crc kubenswrapper[4991]: I0929 09:38:22.927293 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:38:22 crc kubenswrapper[4991]: E0929 09:38:22.927435 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:38:22 crc kubenswrapper[4991]: I0929 09:38:22.927540 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:38:22 crc kubenswrapper[4991]: E0929 09:38:22.927550 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:38:22 crc kubenswrapper[4991]: E0929 09:38:22.927753 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:38:23 crc kubenswrapper[4991]: I0929 09:38:23.602285 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs\") pod \"network-metrics-daemon-7m5sp\" (UID: \"48c35818-43cb-4bbf-bf05-37fd375d0d70\") " pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:38:23 crc kubenswrapper[4991]: E0929 09:38:23.602599 4991 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Sep 29 09:38:23 crc kubenswrapper[4991]: E0929 09:38:23.603101 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs podName:48c35818-43cb-4bbf-bf05-37fd375d0d70 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:27.603063709 +0000 UTC m=+43.458991777 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs") pod "network-metrics-daemon-7m5sp" (UID: "48c35818-43cb-4bbf-bf05-37fd375d0d70") : object "openshift-multus"/"metrics-daemon-secret" not registered
Sep 29 09:38:23 crc kubenswrapper[4991]: I0929 09:38:23.925318 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:38:23 crc kubenswrapper[4991]: E0929 09:38:23.925561 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
Has your network provider started?"} Sep 29 09:38:24 crc kubenswrapper[4991]: I0929 09:38:24.925723 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:24 crc kubenswrapper[4991]: I0929 09:38:24.925723 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:24 crc kubenswrapper[4991]: E0929 09:38:24.926065 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:38:24 crc kubenswrapper[4991]: I0929 09:38:24.926174 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:24 crc kubenswrapper[4991]: E0929 09:38:24.926398 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:38:24 crc kubenswrapper[4991]: E0929 09:38:24.926666 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:38:24 crc kubenswrapper[4991]: I0929 09:38:24.949660 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:24Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:24 crc kubenswrapper[4991]: I0929 09:38:24.968907 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:24Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:24 crc kubenswrapper[4991]: I0929 09:38:24.974397 4991 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:24 crc kubenswrapper[4991]: I0929 09:38:24.974456 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:24 crc kubenswrapper[4991]: I0929 09:38:24.974483 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:24 crc kubenswrapper[4991]: I0929 09:38:24.974511 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:24 crc kubenswrapper[4991]: I0929 09:38:24.974528 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:24Z","lastTransitionTime":"2025-09-29T09:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:24 crc kubenswrapper[4991]: I0929 09:38:24.991232 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:24Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.005824 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:25Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.020628 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ef368af019eb63f3b78fa8ac0205e0051494a013e5bd67e67f1c726d009d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a1fc13ff2c90eccfa2771cae750083e1785c63404c15e6cfef790e068315064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\
\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-fffvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:25Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.042270 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:25Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.062209 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:25Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.077636 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.077689 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.077702 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.077720 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.077736 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:25Z","lastTransitionTime":"2025-09-29T09:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.086795 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:25Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.110412 4991 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:25Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.133350 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:25Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.149347 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:25Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.163708 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:25Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.179557 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.179599 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.179610 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.179628 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.179639 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:25Z","lastTransitionTime":"2025-09-29T09:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.193839 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f5e3ef3cf46e68749f9b4313d53589ff897753
a33335fce649ad98bcd9618a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f5e3ef3cf46e68749f9b4313d53589ff897753a33335fce649ad98bcd9618a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:17Z\\\",\\\"message\\\":\\\"ler 8 for removal\\\\nI0929 09:38:17.313879 6400 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:17.313891 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:17.313927 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:17.313931 6400 factory.go:656] Stopping watch factory\\\\nI0929 09:38:17.313976 6400 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:17.314004 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI0929 09:38:17.314020 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:17.314029 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI0929 09:38:17.314057 6400 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0929 09:38:17.314192 6400 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0929 09:38:17.314283 6400 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0929 09:38:17.314301 6400 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:25Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.209261 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48c35818-43cb-4bbf-bf05-37fd375d0d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7m5sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:25Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.223675 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:25Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.236966 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:25Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.282156 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.282209 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.282220 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.282240 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.282254 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:25Z","lastTransitionTime":"2025-09-29T09:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.386581 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.386629 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.386641 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.386666 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.386680 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:25Z","lastTransitionTime":"2025-09-29T09:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.490103 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.490144 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.490154 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.490170 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.490180 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:25Z","lastTransitionTime":"2025-09-29T09:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.594986 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.595027 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.595038 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.595056 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.595092 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:25Z","lastTransitionTime":"2025-09-29T09:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.698651 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.698698 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.698709 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.698728 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.698741 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:25Z","lastTransitionTime":"2025-09-29T09:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.803011 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.803079 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.803096 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.803122 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.803141 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:25Z","lastTransitionTime":"2025-09-29T09:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.907220 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.907301 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.907321 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.907354 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.907375 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:25Z","lastTransitionTime":"2025-09-29T09:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.925244 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:25 crc kubenswrapper[4991]: E0929 09:38:25.925469 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:38:25 crc kubenswrapper[4991]: I0929 09:38:25.926381 4991 scope.go:117] "RemoveContainer" containerID="f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.012702 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.012787 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.012804 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.012829 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.012867 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:26Z","lastTransitionTime":"2025-09-29T09:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.115992 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.116045 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.116057 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.116077 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.116093 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:26Z","lastTransitionTime":"2025-09-29T09:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.219328 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.219400 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.219416 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.219440 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.219463 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:26Z","lastTransitionTime":"2025-09-29T09:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.279631 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.282668 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e7dc287a2d0423b7ecc7e79a74931504604984d0fe25bef40c3e618c892ead53"} Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.283026 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.297777 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:26Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.317279 4991 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:26Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.324111 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.324236 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.324308 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.324396 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.324499 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:26Z","lastTransitionTime":"2025-09-29T09:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.335086 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:26Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.351034 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ef368af019eb63f3b78fa8ac0205e0051494a013e5bd67e67f1c726d009d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a1fc13ff2c90eccfa2771cae750083e1785c63404c15e6cfef790e068315064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ov
nkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-fffvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:26Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.368989 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7dc287a2d0423b7ecc7e79a74931504604984d0fe25bef40c3e618c892ead53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:26Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.387194 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:26Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.405036 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635
378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:26Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.427534 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:26Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.428070 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.428306 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.428331 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.428358 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.428377 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:26Z","lastTransitionTime":"2025-09-29T09:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.444176 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:26Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.460378 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:26Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.475395 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:26Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.509429 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f5e3ef3cf46e68749f9b4313d53589ff897753a33335fce649ad98bcd9618a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f5e3ef3cf46e68749f9b4313d53589ff897753a33335fce649ad98bcd9618a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:17Z\\\",\\\"message\\\":\\\"ler 8 for removal\\\\nI0929 09:38:17.313879 6400 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:17.313891 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:17.313927 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:17.313931 6400 factory.go:656] Stopping watch factory\\\\nI0929 09:38:17.313976 6400 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:17.314004 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI0929 09:38:17.314020 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:17.314029 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI0929 09:38:17.314057 6400 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0929 09:38:17.314192 6400 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0929 09:38:17.314283 6400 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0929 09:38:17.314301 6400 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:26Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.523922 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48c35818-43cb-4bbf-bf05-37fd375d0d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7m5sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:26Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.531588 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.531689 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.531719 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.531759 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.531794 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:26Z","lastTransitionTime":"2025-09-29T09:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.540861 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:26Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.556469 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:26Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.576031 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:26Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.634138 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.634202 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.634219 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.634243 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.634257 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:26Z","lastTransitionTime":"2025-09-29T09:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.743812 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.743892 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.743910 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.743940 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.743987 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:26Z","lastTransitionTime":"2025-09-29T09:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.847143 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.847190 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.847204 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.847224 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.847239 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:26Z","lastTransitionTime":"2025-09-29T09:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.926438 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:38:26 crc kubenswrapper[4991]: E0929 09:38:26.926708 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.927510 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:38:26 crc kubenswrapper[4991]: E0929 09:38:26.927658 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.927774 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:38:26 crc kubenswrapper[4991]: E0929 09:38:26.927893 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.949710 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.949750 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.949763 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.949783 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:26 crc kubenswrapper[4991]: I0929 09:38:26.949798 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:26Z","lastTransitionTime":"2025-09-29T09:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.052798 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.052841 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.052858 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.052880 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.052896 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:27Z","lastTransitionTime":"2025-09-29T09:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.156225 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.156297 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.156315 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.156347 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.156374 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:27Z","lastTransitionTime":"2025-09-29T09:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.260396 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.260478 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.260509 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.260542 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.260566 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:27Z","lastTransitionTime":"2025-09-29T09:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.363585 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.363648 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.363668 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.363696 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.363719 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:27Z","lastTransitionTime":"2025-09-29T09:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.467732 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.467800 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.467820 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.467850 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.467870 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:27Z","lastTransitionTime":"2025-09-29T09:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.571918 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.572046 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.572072 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.572107 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.572131 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:27Z","lastTransitionTime":"2025-09-29T09:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.647816 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs\") pod \"network-metrics-daemon-7m5sp\" (UID: \"48c35818-43cb-4bbf-bf05-37fd375d0d70\") " pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:38:27 crc kubenswrapper[4991]: E0929 09:38:27.648190 4991 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Sep 29 09:38:27 crc kubenswrapper[4991]: E0929 09:38:27.648394 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs podName:48c35818-43cb-4bbf-bf05-37fd375d0d70 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:35.648339974 +0000 UTC m=+51.504268202 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs") pod "network-metrics-daemon-7m5sp" (UID: "48c35818-43cb-4bbf-bf05-37fd375d0d70") : object "openshift-multus"/"metrics-daemon-secret" not registered
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.675285 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.675355 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.675377 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.675403 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.675420 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:27Z","lastTransitionTime":"2025-09-29T09:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.779436 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.779821 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.779886 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.779996 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.780056 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:27Z","lastTransitionTime":"2025-09-29T09:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
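[Editor's note] The nestedpendingoperations entry above refuses further mount attempts for 8s (durationBeforeRetry 8s), with the exact wall-clock deadline spelled out. The kubelet grows this delay while the same volume operation keeps failing; the sketch below illustrates an exponential-backoff policy of that shape. The 500ms base, doubling factor, and 2-minute cap are stated assumptions for illustration; only the 8s figure appears in this log.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Illustrative exponential backoff: each consecutive failure doubles the
	// wait before the next MountVolume attempt, up to a cap. The constants
	// are assumptions; only the 8s delay is taken from the log above.
	const (
		initialDelay = 500 * time.Millisecond
		maxDelay     = 2 * time.Minute
	)
	delay := initialDelay
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %2d: next retry in %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}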
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.883580 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.883667 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.883694 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.883737 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.883777 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:27Z","lastTransitionTime":"2025-09-29T09:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.925272 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:38:27 crc kubenswrapper[4991]: E0929 09:38:27.925610 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.988061 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.988146 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.988172 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.988205 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:27 crc kubenswrapper[4991]: I0929 09:38:27.988275 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:27Z","lastTransitionTime":"2025-09-29T09:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.091372 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.091442 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.091462 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.091488 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.091508 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:28Z","lastTransitionTime":"2025-09-29T09:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.195100 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.195159 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.195171 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.195203 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.195216 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:28Z","lastTransitionTime":"2025-09-29T09:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.297849 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.297894 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.297906 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.297924 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.297935 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:28Z","lastTransitionTime":"2025-09-29T09:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.402443 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.402498 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.402507 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.402528 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.402541 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:28Z","lastTransitionTime":"2025-09-29T09:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.505874 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.505930 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.505946 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.505987 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.506001 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:28Z","lastTransitionTime":"2025-09-29T09:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.610036 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.610911 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.610984 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.611024 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.611074 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:28Z","lastTransitionTime":"2025-09-29T09:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.715480 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.715554 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.715592 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.715634 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.715657 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:28Z","lastTransitionTime":"2025-09-29T09:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.818992 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.819124 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.819155 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.819198 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.819225 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:28Z","lastTransitionTime":"2025-09-29T09:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.923233 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.923312 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.923332 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.923361 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.923384 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:28Z","lastTransitionTime":"2025-09-29T09:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.925553 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.925593 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:38:28 crc kubenswrapper[4991]: E0929 09:38:28.925766 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:38:28 crc kubenswrapper[4991]: I0929 09:38:28.925863 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:38:28 crc kubenswrapper[4991]: E0929 09:38:28.926074 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:38:28 crc kubenswrapper[4991]: E0929 09:38:28.926344 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.026781 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.026856 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.026880 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.026911 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.026930 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:29Z","lastTransitionTime":"2025-09-29T09:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.131268 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.131373 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.131405 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.131442 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.131482 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:29Z","lastTransitionTime":"2025-09-29T09:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.235180 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.235607 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.235801 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.236018 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.236209 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:29Z","lastTransitionTime":"2025-09-29T09:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.340268 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.340830 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.341153 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.341396 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.341670 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:29Z","lastTransitionTime":"2025-09-29T09:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.445270 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.445345 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.445363 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.445392 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.445415 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:29Z","lastTransitionTime":"2025-09-29T09:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.549362 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.549421 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.549435 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.549457 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.549490 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:29Z","lastTransitionTime":"2025-09-29T09:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.653166 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.653283 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.653304 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.653338 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.653359 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:29Z","lastTransitionTime":"2025-09-29T09:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.757416 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.757497 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.757526 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.757561 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.757586 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:29Z","lastTransitionTime":"2025-09-29T09:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.861014 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.861099 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.861117 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.861147 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.861171 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:29Z","lastTransitionTime":"2025-09-29T09:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.926319 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:38:29 crc kubenswrapper[4991]: E0929 09:38:29.926545 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.965396 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.965445 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.965456 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.965477 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:29 crc kubenswrapper[4991]: I0929 09:38:29.965489 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:29Z","lastTransitionTime":"2025-09-29T09:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.058627 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.060779 4991 scope.go:117] "RemoveContainer" containerID="85f5e3ef3cf46e68749f9b4313d53589ff897753a33335fce649ad98bcd9618a"
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.067755 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.067804 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.067820 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.067844 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.067862 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:30Z","lastTransitionTime":"2025-09-29T09:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.170518 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.170606 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.170645 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.170683 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.170708 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:30Z","lastTransitionTime":"2025-09-29T09:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.274514 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.274566 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.274584 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.274607 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.274623 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:30Z","lastTransitionTime":"2025-09-29T09:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
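[Editor's note] Every NodeNotReady / "Node became not ready" repetition in this window reduces to one condition: the container runtime finds no CNI network configuration, so the kubelet keeps NetworkReady=false. A sketch of the equivalent readiness check follows: it scans /etc/kubernetes/cni/net.d/ (path taken from the log) for config files. The .conf/.conflist/.json extension filter mirrors what CNI config loaders conventionally accept and is an assumption here, as is the program itself.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet messages
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := 0
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions CNI loaders typically accept
			found++
		}
	}
	if found == 0 {
		// Matches the repeated complaint: no CNI configuration file present.
		fmt.Println("NetworkReady=false: no CNI configuration file in", dir)
		return
	}
	fmt.Printf("found %d CNI configuration file(s) in %s\n", found, dir)
}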
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.301061 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovnkube-controller/1.log"
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.304203 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerStarted","Data":"55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f"}
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.304922 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.327197 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:30Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.345280 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:30Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.366419 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:30Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.377483 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.377535 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.377546 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.377565 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.377578 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:30Z","lastTransitionTime":"2025-09-29T09:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.381655 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:30Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.395671 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ef368af019eb63f3b78fa8ac0205e0051494a013e5bd67e67f1c726d009d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a1fc13ff2c90eccfa2771cae750083e1785c63404c15e6cfef790e068315064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ov
nkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-fffvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:30Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.411706 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7dc287a2d0423b7ecc7e79a74931504604984d0fe25bef40c3e618c892ead53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:30Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.429636 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:30Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.443654 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:30Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.461160 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:30Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.476730 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:30Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.481180 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.481245 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.481256 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.481284 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.481299 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:30Z","lastTransitionTime":"2025-09-29T09:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
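Every "Failed to update status for pod" entry in this window dies on the same x509 error from the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743: the serving certificate expired at 2025-08-24T17:21:41Z, over a month before the node clock's 2025-09-29T09:38:30Z. A minimal sketch, in Go, of reading the served certificate's validity window from the node to confirm that (InsecureSkipVerify is deliberate: the point is to inspect NotBefore/NotAfter on a certificate already known to fail verification):

package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Endpoint copied from the webhook error:
	// Post "https://127.0.0.1:9743/pod?timeout=10s"
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Deliberate: read the expired certificate instead of
		// reproducing the verification failure.
		InsecureSkipVerify: true,
	})
	if err != nil {
		log.Fatalf("dial webhook: %v", err)
	}
	defer conn.Close()

	for _, cert := range conn.ConnectionState().PeerCertificates {
		// For the failures above, the leaf's NotAfter should come back
		// as 2025-08-24T17:21:41Z, behind the node's current time.
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject,
			cert.NotBefore.UTC().Format("2006-01-02T15:04:05Z"),
			cert.NotAfter.UTC().Format("2006-01-02T15:04:05Z"))
	}
}

Until that certificate is rotated, the kubelet's status_manager can be expected to keep retrying and logging the same patch failures seen throughout this section.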
Has your network provider started?"} Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.496667 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:30Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.520875 4991 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f5e3ef3cf46e68749f9b4313d53589ff897753a33335fce649ad98bcd9618a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:17Z\\\",\\\"message\\\":\\\"ler 8 for removal\\\\nI0929 09:38:17.313879 6400 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:17.313891 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:17.313927 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:17.313931 6400 factory.go:656] Stopping watch factory\\\\nI0929 09:38:17.313976 6400 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:17.314004 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI0929 09:38:17.314020 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:17.314029 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI0929 09:38:17.314057 6400 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0929 09:38:17.314192 6400 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0929 09:38:17.314283 6400 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0929 09:38:17.314301 6400 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:30Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.533841 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48c35818-43cb-4bbf-bf05-37fd375d0d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7m5sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:30Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.551404 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:30Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.567304 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:30Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.581789 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:30Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.583837 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.583889 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.583903 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.583924 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.583938 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:30Z","lastTransitionTime":"2025-09-29T09:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.687376 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.687433 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.687444 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.687462 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.687474 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:30Z","lastTransitionTime":"2025-09-29T09:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.789881 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.789911 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.789920 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.789934 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.789945 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:30Z","lastTransitionTime":"2025-09-29T09:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.893077 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.893133 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.893145 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.893165 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.893182 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:30Z","lastTransitionTime":"2025-09-29T09:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.926032 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.926103 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.926032 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:30 crc kubenswrapper[4991]: E0929 09:38:30.926231 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:38:30 crc kubenswrapper[4991]: E0929 09:38:30.926340 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:38:30 crc kubenswrapper[4991]: E0929 09:38:30.926461 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.996317 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.996414 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.996438 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.996479 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:30 crc kubenswrapper[4991]: I0929 09:38:30.996512 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:30Z","lastTransitionTime":"2025-09-29T09:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.075718 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.075761 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.075770 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.075787 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.075797 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:31Z","lastTransitionTime":"2025-09-29T09:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:31 crc kubenswrapper[4991]: E0929 09:38:31.096000 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:31Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.100990 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.101018 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.101028 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.101045 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.101055 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:31Z","lastTransitionTime":"2025-09-29T09:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:31 crc kubenswrapper[4991]: E0929 09:38:31.134464 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:31Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.139812 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.139855 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.139870 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.139891 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.139906 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:31Z","lastTransitionTime":"2025-09-29T09:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:31 crc kubenswrapper[4991]: E0929 09:38:31.159437 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:31Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.169070 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.169122 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.169135 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.169159 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.169174 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:31Z","lastTransitionTime":"2025-09-29T09:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:31 crc kubenswrapper[4991]: E0929 09:38:31.189254 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:31Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.193468 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.193507 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.193521 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.193544 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.193559 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:31Z","lastTransitionTime":"2025-09-29T09:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:31 crc kubenswrapper[4991]: E0929 09:38:31.209827 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:31Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:31 crc kubenswrapper[4991]: E0929 09:38:31.209971 4991 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.211876 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.211911 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.211924 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.211963 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.211978 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:31Z","lastTransitionTime":"2025-09-29T09:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.309687 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovnkube-controller/2.log" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.310558 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovnkube-controller/1.log" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.314563 4991 generic.go:334] "Generic (PLEG): container finished" podID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerID="55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f" exitCode=1 Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.314608 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerDied","Data":"55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f"} Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.314650 4991 scope.go:117] "RemoveContainer" containerID="85f5e3ef3cf46e68749f9b4313d53589ff897753a33335fce649ad98bcd9618a" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.314782 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.314845 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.314868 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.314903 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.314927 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:31Z","lastTransitionTime":"2025-09-29T09:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.315831 4991 scope.go:117] "RemoveContainer" containerID="55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f" Sep 29 09:38:31 crc kubenswrapper[4991]: E0929 09:38:31.316225 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.334496 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:31Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.351444 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:31Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.369340 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:31Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.385113 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:31Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.409152 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\
"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f5e3ef3cf46e68749f9b4313d53589ff897753a33335fce649ad98bcd9618a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:17Z\\\",\\\"message\\\":\\\"ler 8 for removal\\\\nI0929 09:38:17.313879 6400 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:17.313891 6400 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:17.313927 6400 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:17.313931 6400 factory.go:656] Stopping watch factory\\\\nI0929 09:38:17.313976 6400 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:17.314004 6400 handler.go:208] Removed *v1.Node event handler 7\\\\nI0929 09:38:17.314020 6400 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:17.314029 6400 handler.go:208] Removed *v1.Node event handler 2\\\\nI0929 09:38:17.314057 6400 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0929 09:38:17.314192 6400 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0929 09:38:17.314283 6400 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0929 09:38:17.314301 6400 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"ormers/externalversions/factory.go:140\\\\nI0929 09:38:31.068141 6638 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0929 09:38:31.068617 6638 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0929 09:38:31.068641 6638 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0929 09:38:31.068667 6638 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:31.068674 6638 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:31.068690 6638 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:31.068700 6638 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0929 09:38:31.068729 6638 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:31.068739 6638 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:31.068988 6638 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0929 09:38:31.069008 6638 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0929 09:38:31.069047 6638 factory.go:656] Stopping watch factory\\\\nI0929 09:38:31.069061 6638 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0929 09:38:31.069075 6638 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:31Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.417481 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.417523 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.417537 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.417559 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.417572 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:31Z","lastTransitionTime":"2025-09-29T09:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.421768 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48c35818-43cb-4bbf-bf05-37fd375d0d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7m5sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:31Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.432303 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:31Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.443414 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:31Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.455712 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:31Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.471046 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:31Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.483693 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:31Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.499803 4991 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:31Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.516652 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:31Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.521415 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.521501 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.521516 4991 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.521536 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.521548 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:31Z","lastTransitionTime":"2025-09-29T09:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.531739 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ef368af019eb63f3b78fa8ac0205e0051494a013e5bd67e67f1c726d009d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a1fc13ff2c90eccfa2771cae750083e1785c63404c15e6cfef790e068315064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ov
nkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-fffvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:31Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.546410 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7dc287a2d0423b7ecc7e79a74931504604984d0fe25bef40c3e618c892ead53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:31Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.562220 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:31Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.624485 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.624562 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.624582 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.624605 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.624622 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:31Z","lastTransitionTime":"2025-09-29T09:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.727905 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.728036 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.728069 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.728104 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.728150 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:31Z","lastTransitionTime":"2025-09-29T09:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.832506 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.832567 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.832577 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.832600 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.832611 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:31Z","lastTransitionTime":"2025-09-29T09:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.925805 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:31 crc kubenswrapper[4991]: E0929 09:38:31.926058 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.935667 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.935753 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.935775 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.935810 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:31 crc kubenswrapper[4991]: I0929 09:38:31.935830 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:31Z","lastTransitionTime":"2025-09-29T09:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.039837 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.039911 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.039929 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.039993 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.040015 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:32Z","lastTransitionTime":"2025-09-29T09:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.143824 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.143919 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.143932 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.143981 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.144001 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:32Z","lastTransitionTime":"2025-09-29T09:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.247223 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.247305 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.247328 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.247359 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.247377 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:32Z","lastTransitionTime":"2025-09-29T09:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.320683 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovnkube-controller/2.log" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.341921 4991 scope.go:117] "RemoveContainer" containerID="55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f" Sep 29 09:38:32 crc kubenswrapper[4991]: E0929 09:38:32.342227 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.350113 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.350171 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.350190 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.350217 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.350237 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:32Z","lastTransitionTime":"2025-09-29T09:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.364902 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:32Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.379383 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:32Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.394320 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:32Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.412941 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:32Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.429990 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48c35818-43cb-4bbf-bf05-37fd375d0d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7m5sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:32Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.449250 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:32Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.454106 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.454169 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.454181 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.454219 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.454238 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:32Z","lastTransitionTime":"2025-09-29T09:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.466080 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:32Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.480980 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:32Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.508751 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"ormers/externalversions/factory.go:140\\\\nI0929 09:38:31.068141 6638 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0929 09:38:31.068617 6638 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0929 09:38:31.068641 6638 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0929 09:38:31.068667 6638 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:31.068674 6638 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:31.068690 6638 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:31.068700 6638 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0929 09:38:31.068729 6638 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:31.068739 6638 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:31.068988 6638 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0929 09:38:31.069008 6638 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0929 09:38:31.069047 6638 factory.go:656] Stopping watch factory\\\\nI0929 09:38:31.069061 6638 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0929 09:38:31.069075 6638 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:32Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.525374 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:32Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.541040 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:32Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.554263 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:32Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.556688 4991 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.556738 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.556750 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.556770 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.556783 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:32Z","lastTransitionTime":"2025-09-29T09:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.567941 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ef368af019eb63f3b78fa8ac0205e0051494a013e5bd67e67f1c726d009d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a1fc13ff2c90eccfa2771cae750083e1785c63404c15e6cfef790e068315064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-fffvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:32Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.584757 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7dc287a2d0423b7ecc7e79a74931504604984d0fe25bef40c3e618c892ead53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:32Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.601644 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:32Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.615983 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec
2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:32Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.659630 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.659676 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.659687 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.659702 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.659716 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:32Z","lastTransitionTime":"2025-09-29T09:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.765030 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.765111 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.765132 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.765162 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.765183 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:32Z","lastTransitionTime":"2025-09-29T09:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.868995 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.869069 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.869095 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.869126 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.869150 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:32Z","lastTransitionTime":"2025-09-29T09:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.925842 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.925840 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:32 crc kubenswrapper[4991]: E0929 09:38:32.926073 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.925840 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:32 crc kubenswrapper[4991]: E0929 09:38:32.926305 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:38:32 crc kubenswrapper[4991]: E0929 09:38:32.926378 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.972314 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.972367 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.972388 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.972414 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:32 crc kubenswrapper[4991]: I0929 09:38:32.972429 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:32Z","lastTransitionTime":"2025-09-29T09:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.092716 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.092774 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.092788 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.092813 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.092832 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:33Z","lastTransitionTime":"2025-09-29T09:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.196680 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.196744 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.196761 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.196785 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.196807 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:33Z","lastTransitionTime":"2025-09-29T09:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.299669 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.299744 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.299761 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.299786 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.299838 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:33Z","lastTransitionTime":"2025-09-29T09:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.402622 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.402684 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.402700 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.402725 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.402748 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:33Z","lastTransitionTime":"2025-09-29T09:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.506453 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.506581 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.506610 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.506647 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.506672 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:33Z","lastTransitionTime":"2025-09-29T09:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.610157 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.610241 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.610260 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.610290 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.610308 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:33Z","lastTransitionTime":"2025-09-29T09:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.714246 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.714300 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.714310 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.714328 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.714338 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:33Z","lastTransitionTime":"2025-09-29T09:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.823245 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.823339 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.823363 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.823399 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.823418 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:33Z","lastTransitionTime":"2025-09-29T09:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.925380 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:33 crc kubenswrapper[4991]: E0929 09:38:33.925629 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.930734 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.930774 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.930786 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.930806 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:33 crc kubenswrapper[4991]: I0929 09:38:33.930822 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:33Z","lastTransitionTime":"2025-09-29T09:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.033981 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.034092 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.034115 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.034142 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.034162 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:34Z","lastTransitionTime":"2025-09-29T09:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.136905 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.137025 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.137055 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.137086 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.137108 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:34Z","lastTransitionTime":"2025-09-29T09:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.240508 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.240597 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.240619 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.240656 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.240679 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:34Z","lastTransitionTime":"2025-09-29T09:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.344687 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.344790 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.344803 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.344853 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.344874 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:34Z","lastTransitionTime":"2025-09-29T09:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.448137 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.448218 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.448252 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.448286 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.448308 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:34Z","lastTransitionTime":"2025-09-29T09:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.552366 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.552435 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.552455 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.552483 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.552501 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:34Z","lastTransitionTime":"2025-09-29T09:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.656179 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.656252 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.656291 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.656325 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.656350 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:34Z","lastTransitionTime":"2025-09-29T09:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.759062 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.759105 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.759113 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.759128 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.759137 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:34Z","lastTransitionTime":"2025-09-29T09:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.862608 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.862670 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.862679 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.862699 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.862710 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:34Z","lastTransitionTime":"2025-09-29T09:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.925900 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.926081 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:38:34 crc kubenswrapper[4991]: E0929 09:38:34.926131 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.926222 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:38:34 crc kubenswrapper[4991]: E0929 09:38:34.926311 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:38:34 crc kubenswrapper[4991]: E0929 09:38:34.926412 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.950809 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:34Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.967632 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:34Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.976872 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.976922 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.977028 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.977050 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.977063 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:34Z","lastTransitionTime":"2025-09-29T09:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:34 crc kubenswrapper[4991]: I0929 09:38:34.991525 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:34Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.007773 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:35Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.019059 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48c35818-43cb-4bbf-bf05-37fd375d0d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7m5sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:35Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.032533 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:35Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.047041 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:35Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.061222 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:35Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.081946 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.082025 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.082046 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.082078 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.082098 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:35Z","lastTransitionTime":"2025-09-29T09:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.083584 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"ormers/externalversions/factory.go:140\\\\nI0929 09:38:31.068141 6638 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0929 09:38:31.068617 6638 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0929 09:38:31.068641 6638 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0929 09:38:31.068667 6638 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:31.068674 6638 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:31.068690 6638 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:31.068700 6638 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0929 09:38:31.068729 6638 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:31.068739 6638 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:31.068988 6638 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0929 09:38:31.069008 6638 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0929 09:38:31.069047 6638 factory.go:656] Stopping watch factory\\\\nI0929 09:38:31.069061 6638 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0929 09:38:31.069075 6638 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:35Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.102261 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:35Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.115456 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:35Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.125740 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:35Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.138721 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ef368af019eb63f3b78fa8ac0205e0051494a013e5bd67e67f1c726d009d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a1fc13ff2c90eccfa2771cae750083e1785c63404c15e6cfef790e068315064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-fffvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-09-29T09:38:35Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.153091 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7dc287a2d0423b7ecc7e79a74931504604984d0fe25bef40c3e618c892ead53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:35Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.166297 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:35Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.179239 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec
2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:35Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.184908 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.184992 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.185008 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.185035 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.185044 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:35Z","lastTransitionTime":"2025-09-29T09:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.288131 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.288187 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.288199 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.288220 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.288232 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:35Z","lastTransitionTime":"2025-09-29T09:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.392599 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.392726 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.392745 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.392770 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.392788 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:35Z","lastTransitionTime":"2025-09-29T09:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.496508 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.496604 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.496647 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.496681 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.496705 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:35Z","lastTransitionTime":"2025-09-29T09:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.600293 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.600367 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.600388 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.600416 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.600438 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:35Z","lastTransitionTime":"2025-09-29T09:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.649362 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs\") pod \"network-metrics-daemon-7m5sp\" (UID: \"48c35818-43cb-4bbf-bf05-37fd375d0d70\") " pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:35 crc kubenswrapper[4991]: E0929 09:38:35.649631 4991 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 29 09:38:35 crc kubenswrapper[4991]: E0929 09:38:35.649742 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs podName:48c35818-43cb-4bbf-bf05-37fd375d0d70 nodeName:}" failed. No retries permitted until 2025-09-29 09:38:51.649712825 +0000 UTC m=+67.505640883 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs") pod "network-metrics-daemon-7m5sp" (UID: "48c35818-43cb-4bbf-bf05-37fd375d0d70") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.704336 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.704419 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.704441 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.704471 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.704496 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:35Z","lastTransitionTime":"2025-09-29T09:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.807897 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.808006 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.808047 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.808066 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.808079 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:35Z","lastTransitionTime":"2025-09-29T09:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.912206 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.912329 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.912385 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.912408 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.912426 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:35Z","lastTransitionTime":"2025-09-29T09:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:35 crc kubenswrapper[4991]: I0929 09:38:35.926076 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:35 crc kubenswrapper[4991]: E0929 09:38:35.926179 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.015708 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.015761 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.015774 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.015791 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.015801 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:36Z","lastTransitionTime":"2025-09-29T09:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.118194 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.118244 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.118255 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.118270 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.118279 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:36Z","lastTransitionTime":"2025-09-29T09:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.221163 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.221213 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.221229 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.221250 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.221263 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:36Z","lastTransitionTime":"2025-09-29T09:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.323778 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.323825 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.323839 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.323860 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.323876 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:36Z","lastTransitionTime":"2025-09-29T09:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.426275 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.426335 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.426347 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.426365 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.426379 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:36Z","lastTransitionTime":"2025-09-29T09:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.529812 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.529890 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.529908 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.529939 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.529989 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:36Z","lastTransitionTime":"2025-09-29T09:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.632763 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.632806 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.632815 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.632832 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.632844 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:36Z","lastTransitionTime":"2025-09-29T09:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.660483 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:38:36 crc kubenswrapper[4991]: E0929 09:38:36.660644 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:39:08.660621289 +0000 UTC m=+84.516549327 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.735565 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.735653 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.735684 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.735712 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.735740 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:36Z","lastTransitionTime":"2025-09-29T09:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.761315 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.761376 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.761413 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.761440 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:36 crc kubenswrapper[4991]: E0929 09:38:36.761566 4991 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 29 09:38:36 crc kubenswrapper[4991]: E0929 09:38:36.761670 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-29 09:39:08.761637502 +0000 UTC m=+84.617565580 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 29 09:38:36 crc kubenswrapper[4991]: E0929 09:38:36.761753 4991 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 29 09:38:36 crc kubenswrapper[4991]: E0929 09:38:36.761884 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-29 09:39:08.761860427 +0000 UTC m=+84.617788475 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 29 09:38:36 crc kubenswrapper[4991]: E0929 09:38:36.762173 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 29 09:38:36 crc kubenswrapper[4991]: E0929 09:38:36.762250 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 29 09:38:36 crc kubenswrapper[4991]: E0929 09:38:36.762269 4991 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:36 crc kubenswrapper[4991]: E0929 09:38:36.762347 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-29 09:39:08.762324738 +0000 UTC m=+84.618252786 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:36 crc kubenswrapper[4991]: E0929 09:38:36.762186 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 29 09:38:36 crc kubenswrapper[4991]: E0929 09:38:36.762383 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 29 09:38:36 crc kubenswrapper[4991]: E0929 09:38:36.762431 4991 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:36 crc kubenswrapper[4991]: E0929 09:38:36.762478 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-29 09:39:08.762465291 +0000 UTC m=+84.618393329 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.839378 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.839469 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.839492 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.839524 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.839548 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:36Z","lastTransitionTime":"2025-09-29T09:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.926011 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:36 crc kubenswrapper[4991]: E0929 09:38:36.926208 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.926242 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.926377 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:36 crc kubenswrapper[4991]: E0929 09:38:36.926438 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:38:36 crc kubenswrapper[4991]: E0929 09:38:36.926568 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.941768 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.941826 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.941841 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.941862 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:36 crc kubenswrapper[4991]: I0929 09:38:36.941875 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:36Z","lastTransitionTime":"2025-09-29T09:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.044187 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.044256 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.044272 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.044298 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.044311 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:37Z","lastTransitionTime":"2025-09-29T09:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.147816 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.147867 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.147879 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.147897 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.147910 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:37Z","lastTransitionTime":"2025-09-29T09:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.251541 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.251597 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.251606 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.251623 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.251636 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:37Z","lastTransitionTime":"2025-09-29T09:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.354077 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.354122 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.354131 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.354154 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.354167 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:37Z","lastTransitionTime":"2025-09-29T09:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.457547 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.457612 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.457630 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.457656 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.457674 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:37Z","lastTransitionTime":"2025-09-29T09:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.560700 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.560764 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.560783 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.560809 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.560827 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:37Z","lastTransitionTime":"2025-09-29T09:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.664807 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.664867 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.664887 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.664912 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.664931 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:37Z","lastTransitionTime":"2025-09-29T09:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.767791 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.767880 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.767913 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.767940 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.767995 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:37Z","lastTransitionTime":"2025-09-29T09:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.870705 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.870757 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.870768 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.870788 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.870802 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:37Z","lastTransitionTime":"2025-09-29T09:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.926055 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:37 crc kubenswrapper[4991]: E0929 09:38:37.926265 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.973584 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.973644 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.973657 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.973677 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:37 crc kubenswrapper[4991]: I0929 09:38:37.973688 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:37Z","lastTransitionTime":"2025-09-29T09:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.075944 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.075998 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.076009 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.076023 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.076033 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:38Z","lastTransitionTime":"2025-09-29T09:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.173725 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.179257 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.179301 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.179316 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.179334 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.179347 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:38Z","lastTransitionTime":"2025-09-29T09:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.188349 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.199876 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:38Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.217798 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:38Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.252471 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"ormers/externalversions/factory.go:140\\\\nI0929 09:38:31.068141 6638 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0929 09:38:31.068617 6638 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0929 09:38:31.068641 6638 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0929 09:38:31.068667 6638 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:31.068674 6638 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:31.068690 6638 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:31.068700 6638 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0929 09:38:31.068729 6638 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:31.068739 6638 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:31.068988 6638 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0929 09:38:31.069008 6638 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0929 09:38:31.069047 6638 factory.go:656] Stopping watch factory\\\\nI0929 09:38:31.069061 6638 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0929 09:38:31.069075 6638 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:38Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.272055 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48c35818-43cb-4bbf-bf05-37fd375d0d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7m5sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:38Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.282252 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.282300 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.282331 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.282350 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.282363 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:38Z","lastTransitionTime":"2025-09-29T09:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.293726 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:38Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.310906 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:38Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.325828 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:38Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.339015 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec
2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:38Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.355060 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni
-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:38Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.366992 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:38Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.380718 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ef368af019eb63f3b78fa8ac0205e0051494a013e5bd67e67f1c726d009d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a1fc13ff2c90eccfa2771cae750083e1785c63404c15e6cfef790e068315064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-fffvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:38Z is after 2025-08-24T17:21:41Z" Sep 29 
09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.384655 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.384712 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.384725 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.384747 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.384760 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:38Z","lastTransitionTime":"2025-09-29T09:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.395237 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7dc287a2d0423b7ecc7e79a74931504604984d0fe25bef40c3e618c892ead53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:38Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.411036 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:38Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.424598 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:38Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.436315 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:38Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.451614 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:38Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.487320 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.487360 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.487371 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.487388 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.487400 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:38Z","lastTransitionTime":"2025-09-29T09:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.590495 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.590548 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.590564 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.590583 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.590593 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:38Z","lastTransitionTime":"2025-09-29T09:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.693304 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.693357 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.693369 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.693386 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.693398 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:38Z","lastTransitionTime":"2025-09-29T09:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.796420 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.796473 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.796484 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.796504 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.796514 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:38Z","lastTransitionTime":"2025-09-29T09:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.898436 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.898496 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.898514 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.898538 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.898554 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:38Z","lastTransitionTime":"2025-09-29T09:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.925939 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.925984 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:38 crc kubenswrapper[4991]: E0929 09:38:38.926124 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:38:38 crc kubenswrapper[4991]: I0929 09:38:38.926146 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:38 crc kubenswrapper[4991]: E0929 09:38:38.926359 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:38:38 crc kubenswrapper[4991]: E0929 09:38:38.926480 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.001033 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.001082 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.001094 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.001112 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.001124 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:39Z","lastTransitionTime":"2025-09-29T09:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.103605 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.103673 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.103695 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.103725 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.103747 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:39Z","lastTransitionTime":"2025-09-29T09:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.183202 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.204277 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:39Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.209429 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.209501 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.209518 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.209545 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.209562 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:39Z","lastTransitionTime":"2025-09-29T09:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.222839 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:39Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.242111 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:39Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.271450 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"ormers/externalversions/factory.go:140\\\\nI0929 09:38:31.068141 6638 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0929 09:38:31.068617 6638 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0929 09:38:31.068641 6638 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0929 09:38:31.068667 6638 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:31.068674 6638 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:31.068690 6638 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:31.068700 6638 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0929 09:38:31.068729 6638 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:31.068739 6638 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:31.068988 6638 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0929 09:38:31.069008 6638 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0929 09:38:31.069047 6638 factory.go:656] Stopping watch factory\\\\nI0929 09:38:31.069061 6638 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0929 09:38:31.069075 6638 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:39Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.289159 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48c35818-43cb-4bbf-bf05-37fd375d0d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7m5sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:39Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.312267 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.312324 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.312338 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.312366 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.312378 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:39Z","lastTransitionTime":"2025-09-29T09:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.313975 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:39Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.340181 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7dc287a2d0423b7ecc7e79a74931504604984d0fe25bef40c3e618c892ead53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:39Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.356884 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:39Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.371641 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ab495a-4226-431c-b5f6-5107ba0ae2cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c904d6235314289c0468f23e9e99098bf3318797b1a1ff934e481019343a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7356b77ec51f4abe19914e817796ea42fc29176d4b9ba9abd68aa7287f53578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df4f33084636102a5acdcfbc239fedbbb1ac96d58b5b4f3bd4dd74ebad6e3414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:39Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.386329 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:39Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.399401 4991 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:39Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.410196 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:39Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.414466 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.414493 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.414502 4991 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.414516 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.414528 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:39Z","lastTransitionTime":"2025-09-29T09:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.421718 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ef368af019eb63f3b78fa8ac0205e0051494a013e5bd67e67f1c726d009d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a1fc13ff2c90eccfa2771cae750083e1785c63404c15e6cfef790e068315064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ov
nkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-fffvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:39Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.435000 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:39Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.447553 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:39Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.462387 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:39Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.480423 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:39Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.516844 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.516906 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.516924 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.516944 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.516986 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:39Z","lastTransitionTime":"2025-09-29T09:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.619580 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.619998 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.620080 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.620155 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.620220 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:39Z","lastTransitionTime":"2025-09-29T09:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.723216 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.723266 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.723276 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.723296 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.723306 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:39Z","lastTransitionTime":"2025-09-29T09:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.826576 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.826636 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.826651 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.826672 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.826684 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:39Z","lastTransitionTime":"2025-09-29T09:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.925943 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:39 crc kubenswrapper[4991]: E0929 09:38:39.926152 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.929318 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.929412 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.929428 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.929451 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:39 crc kubenswrapper[4991]: I0929 09:38:39.929465 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:39Z","lastTransitionTime":"2025-09-29T09:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.032992 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.033049 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.033059 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.033080 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.033097 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:40Z","lastTransitionTime":"2025-09-29T09:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.136222 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.136293 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.136312 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.136334 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.136351 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:40Z","lastTransitionTime":"2025-09-29T09:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.239300 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.239352 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.239364 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.239385 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.239397 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:40Z","lastTransitionTime":"2025-09-29T09:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.341660 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.341696 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.341706 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.341722 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.341734 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:40Z","lastTransitionTime":"2025-09-29T09:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.444070 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.444430 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.444525 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.444669 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.444772 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:40Z","lastTransitionTime":"2025-09-29T09:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.547129 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.547468 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.547568 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.547657 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.547743 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:40Z","lastTransitionTime":"2025-09-29T09:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.650419 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.650498 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.650509 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.650533 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.650550 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:40Z","lastTransitionTime":"2025-09-29T09:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.754277 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.754334 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.754348 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.754365 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.754381 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:40Z","lastTransitionTime":"2025-09-29T09:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.857572 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.857610 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.857620 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.857634 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.857644 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:40Z","lastTransitionTime":"2025-09-29T09:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.925922 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.926000 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.926027 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:40 crc kubenswrapper[4991]: E0929 09:38:40.926108 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:38:40 crc kubenswrapper[4991]: E0929 09:38:40.926203 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:38:40 crc kubenswrapper[4991]: E0929 09:38:40.926351 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.960575 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.960635 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.960653 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.960677 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:40 crc kubenswrapper[4991]: I0929 09:38:40.960692 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:40Z","lastTransitionTime":"2025-09-29T09:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.063996 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.064055 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.064065 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.064082 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.064094 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:41Z","lastTransitionTime":"2025-09-29T09:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.166916 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.166971 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.166979 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.166994 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.167004 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:41Z","lastTransitionTime":"2025-09-29T09:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.270156 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.270210 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.270221 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.270236 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.270246 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:41Z","lastTransitionTime":"2025-09-29T09:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.373491 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.373576 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.373597 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.373624 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.373642 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:41Z","lastTransitionTime":"2025-09-29T09:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.476148 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.476212 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.476222 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.476240 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.476251 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:41Z","lastTransitionTime":"2025-09-29T09:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.491990 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.492037 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.492048 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.492065 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.492079 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:41Z","lastTransitionTime":"2025-09-29T09:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:41 crc kubenswrapper[4991]: E0929 09:38:41.511973 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:41Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.517224 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.517287 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.517319 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.517341 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.517355 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:41Z","lastTransitionTime":"2025-09-29T09:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:41 crc kubenswrapper[4991]: E0929 09:38:41.530492 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:41Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.535380 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.535417 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.535429 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.535445 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.535457 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:41Z","lastTransitionTime":"2025-09-29T09:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:41 crc kubenswrapper[4991]: E0929 09:38:41.552506 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:41Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.557979 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.558061 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.558082 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.558104 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.558141 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:41Z","lastTransitionTime":"2025-09-29T09:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:41 crc kubenswrapper[4991]: E0929 09:38:41.573922 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:41Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.580691 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.580799 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.580862 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.580891 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.580911 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:41Z","lastTransitionTime":"2025-09-29T09:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:41 crc kubenswrapper[4991]: E0929 09:38:41.600593 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:41Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:41 crc kubenswrapper[4991]: E0929 09:38:41.600779 4991 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.603030 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.603087 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.603098 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.603114 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.603142 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:41Z","lastTransitionTime":"2025-09-29T09:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.706585 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.706641 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.706651 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.706672 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.706683 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:41Z","lastTransitionTime":"2025-09-29T09:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.810324 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.810372 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.810388 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.810407 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.810420 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:41Z","lastTransitionTime":"2025-09-29T09:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.914247 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.914309 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.914327 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.914354 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.914373 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:41Z","lastTransitionTime":"2025-09-29T09:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:41 crc kubenswrapper[4991]: I0929 09:38:41.925909 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:41 crc kubenswrapper[4991]: E0929 09:38:41.926172 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.016680 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.016720 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.016733 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.016752 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.016765 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:42Z","lastTransitionTime":"2025-09-29T09:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.120062 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.120118 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.120133 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.120157 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.120172 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:42Z","lastTransitionTime":"2025-09-29T09:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.225490 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.225856 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.225921 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.226005 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.226076 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:42Z","lastTransitionTime":"2025-09-29T09:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.329475 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.329520 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.329530 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.329545 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.329555 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:42Z","lastTransitionTime":"2025-09-29T09:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.432569 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.432618 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.432635 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.432659 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.432677 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:42Z","lastTransitionTime":"2025-09-29T09:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.535870 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.536316 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.536600 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.537053 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.537400 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:42Z","lastTransitionTime":"2025-09-29T09:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.640688 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.640982 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.641053 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.641175 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.641240 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:42Z","lastTransitionTime":"2025-09-29T09:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.744509 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.744545 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.744556 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.744571 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.744580 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:42Z","lastTransitionTime":"2025-09-29T09:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.847314 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.847399 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.847414 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.847441 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.847456 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:42Z","lastTransitionTime":"2025-09-29T09:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.925343 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:42 crc kubenswrapper[4991]: E0929 09:38:42.925785 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.925424 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:42 crc kubenswrapper[4991]: E0929 09:38:42.926095 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.925334 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:42 crc kubenswrapper[4991]: E0929 09:38:42.926336 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.949390 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.949444 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.949457 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.949475 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:42 crc kubenswrapper[4991]: I0929 09:38:42.949485 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:42Z","lastTransitionTime":"2025-09-29T09:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.052236 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.052304 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.052323 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.052352 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.052374 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:43Z","lastTransitionTime":"2025-09-29T09:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.155274 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.155330 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.155343 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.155361 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.155374 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:43Z","lastTransitionTime":"2025-09-29T09:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.258598 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.258663 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.258671 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.258694 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.258710 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:43Z","lastTransitionTime":"2025-09-29T09:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
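
The 09:38:42.925 records above pair "No sandbox for pod can be found. Need to start a new one" with "Error syncing pod, skipping": while the runtime reports NetworkReady=false, the kubelet will not create a new sandbox for any pod that needs pod networking, so the sync fails and is retried on the next pass. A simplified sketch of that gate, with illustrative names rather than the kubelet's actual internals:

    package main

    import (
        "errors"
        "fmt"
    )

    // Pod is a stand-in for the kubelet's pod object; only the fields the
    // gate needs are modelled here.
    type Pod struct {
        Name        string
        HostNetwork bool
    }

    var errNetworkNotReady = errors.New(
        "network is not ready: container runtime network not ready: NetworkReady=false")

    // canStartSandbox mimics the check behind the "Error syncing pod,
    // skipping" records: pods that need pod networking are held back until
    // the runtime reports NetworkReady=true; host-network pods may proceed.
    func canStartSandbox(pod Pod, networkReady bool) error {
        if !networkReady && !pod.HostNetwork {
            return errNetworkNotReady
        }
        return nil
    }

    func main() {
        pods := []Pod{
            {Name: "openshift-network-diagnostics/network-check-target-xd92c", HostNetwork: false},
            {Name: "openshift-ovn-kubernetes/ovnkube-node-h4hm4", HostNetwork: true},
        }
        for _, p := range pods {
            if err := canStartSandbox(p, false); err != nil {
                fmt.Printf("Error syncing pod, skipping err=%q pod=%q\n", err, p.Name)
                continue
            }
            fmt.Printf("starting sandbox for pod=%q\n", p.Name)
        }
    }

This also explains which pods in this log keep running: host-network pods such as ovnkube-node (whose pod IP equals the host IP 192.168.126.11 in the status patches below) bypass the gate, while network-check-target, network-check-source, the networking console plugin, and the multus metrics daemon stay stuck.
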
Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.361409 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.361464 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.361475 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.361502 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.361516 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:43Z","lastTransitionTime":"2025-09-29T09:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.465051 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.465111 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.465123 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.465145 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.465158 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:43Z","lastTransitionTime":"2025-09-29T09:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.567709 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.567750 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.567759 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.567776 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.567786 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:43Z","lastTransitionTime":"2025-09-29T09:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.670202 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.670254 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.670264 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.670285 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.670297 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:43Z","lastTransitionTime":"2025-09-29T09:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.773239 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.773288 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.773301 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.773319 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.773330 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:43Z","lastTransitionTime":"2025-09-29T09:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.876827 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.876886 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.876896 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.876914 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.876928 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:43Z","lastTransitionTime":"2025-09-29T09:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.925629 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:43 crc kubenswrapper[4991]: E0929 09:38:43.925819 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.980335 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.980428 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.980445 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.980473 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:43 crc kubenswrapper[4991]: I0929 09:38:43.980491 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:43Z","lastTransitionTime":"2025-09-29T09:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.084089 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.084166 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.084179 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.084200 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.084215 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:44Z","lastTransitionTime":"2025-09-29T09:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
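
The complaint repeated in every one of these records is literal: CNI configuration discovery found no usable file in /etc/kubernetes/cni/net.d/, and on this node the component that would write one, the ovnkube-controller container, is not running (it appears in CrashLoopBackOff in the status patches further down), so the directory stays empty. A rough sketch of such a directory probe, with the accepted extension set assumed from common CNI discovery conventions rather than taken from kubelet or CRI-O source:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // hasCNIConfig reports whether confDir contains any file that CNI-style
    // config discovery would accept, mirroring the check behind "no CNI
    // configuration file in /etc/kubernetes/cni/net.d/".
    func hasCNIConfig(confDir string) (bool, error) {
        entries, err := os.ReadDir(confDir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
        if err != nil {
            fmt.Println("cannot read CNI conf dir:", err)
            return
        }
        if !ok {
            fmt.Println("no CNI configuration file found; network plugin not ready")
        }
    }

Once the network provider writes its config file into the directory, the runtime flips NetworkReady to true and the Ready condition above clears on the next status sync.
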
Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.186825 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.186880 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.186891 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.186909 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.186919 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:44Z","lastTransitionTime":"2025-09-29T09:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.289866 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.289943 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.289997 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.290028 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.290054 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:44Z","lastTransitionTime":"2025-09-29T09:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.393018 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.393085 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.393099 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.393122 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.393136 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:44Z","lastTransitionTime":"2025-09-29T09:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.495890 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.496019 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.496036 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.496057 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.496072 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:44Z","lastTransitionTime":"2025-09-29T09:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.598536 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.598630 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.598650 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.598681 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.598708 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:44Z","lastTransitionTime":"2025-09-29T09:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.703623 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.703673 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.703807 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.703834 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.703845 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:44Z","lastTransitionTime":"2025-09-29T09:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
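
The status_manager.go:875 failures that follow all share one root cause: each status PATCH is routed through the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, and the TLS handshake is rejected because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-09-29. The validity-window check that produces the x509 error can be reproduced offline; a sketch, with a hypothetical PEM path standing in for however the webhook's serving certificate is actually obtained:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path: dump the webhook's serving certificate here
        // first (for example from its secret, or from the listener itself).
        data, err := os.ReadFile("/tmp/webhook-serving-cert.pem")
        if err != nil {
            fmt.Println("read:", err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse:", err)
            return
        }
        now := time.Now()
        // The same window check that fails in the log: current time
        // 2025-09-29T09:38:44Z is after NotAfter 2025-08-24T17:21:41Z.
        if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
            fmt.Printf("x509: certificate has expired or is not yet valid: current time %s is outside %s..%s\n",
                now.UTC().Format(time.RFC3339),
                cert.NotBefore.UTC().Format(time.RFC3339),
                cert.NotAfter.UTC().Format(time.RFC3339))
            return
        }
        fmt.Println("certificate valid until", cert.NotAfter.UTC().Format(time.RFC3339))
    }

Until the certificate is rotated or the clock corrected, every patch attempt below fails the same way, so the kubelet keeps retrying with the stale pod statuses shown in the records that follow.
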
Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.807465 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.807517 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.807535 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.807556 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.807569 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:44Z","lastTransitionTime":"2025-09-29T09:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.911104 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.911777 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.911790 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.911812 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.911825 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:44Z","lastTransitionTime":"2025-09-29T09:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.925565 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.925598 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.925625 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:44 crc kubenswrapper[4991]: E0929 09:38:44.925703 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:38:44 crc kubenswrapper[4991]: E0929 09:38:44.925832 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:38:44 crc kubenswrapper[4991]: E0929 09:38:44.926053 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.943357 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:44Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.958418 4991 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:44Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.968157 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:44Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:44 crc kubenswrapper[4991]: I0929 09:38:44.989019 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"ormers/externalversions/factory.go:140\\\\nI0929 09:38:31.068141 6638 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0929 09:38:31.068617 6638 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0929 09:38:31.068641 6638 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0929 09:38:31.068667 6638 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:31.068674 6638 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:31.068690 6638 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:31.068700 6638 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0929 09:38:31.068729 6638 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:31.068739 6638 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:31.068988 6638 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0929 09:38:31.069008 6638 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0929 09:38:31.069047 6638 factory.go:656] Stopping watch factory\\\\nI0929 09:38:31.069061 6638 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0929 09:38:31.069075 6638 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:44Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.000422 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48c35818-43cb-4bbf-bf05-37fd375d0d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7m5sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:44Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.014279 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.014330 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.014345 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.014370 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.014391 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:45Z","lastTransitionTime":"2025-09-29T09:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.015554 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:45Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.030446 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ef368af019eb63f3b78fa8ac0205e0051494a013e5bd67e67f1c726d009d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a1fc13ff2c90eccfa2771cae750083e1785c63404c15e6cfef790e068315064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-fffvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:45Z is after 2025-08-24T17:21:41Z" Sep 29 
09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.043684 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7dc287a2d0423b7ecc7e79a74931504604984d0fe25bef40c3e618c892ead53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:45Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.058413 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:45Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.072622 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ab495a-4226-431c-b5f6-5107ba0ae2cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c904d6235314289c0468f23e9e99098bf3318797b1a1ff934e481019343a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7356b77ec51f4abe19914e817796ea42fc29176d4b9ba9abd68aa7287f53578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df4f33084636102a5acdcfbc239fedbbb1ac96d58b5b4f3bd4dd74ebad6e3414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:45Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.084832 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:45Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.099669 4991 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:45Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.113194 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:45Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.117928 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.118025 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.118044 4991 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.118073 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.118092 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:45Z","lastTransitionTime":"2025-09-29T09:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.129854 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:45Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.145329 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:45Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.160046 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:45Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.175236 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:45Z is after 2025-08-24T17:21:41Z"
Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.221133 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.221181 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.221191 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.221208 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.221219 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:45Z","lastTransitionTime":"2025-09-29T09:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[the five kubelet_node_status/setters entries above repeat verbatim every ~100 ms; identical blocks from 09:38:45.323 through 09:38:50.298 are omitted here, leaving only the distinct entries interleaved between them]
Sep 29 09:38:45 crc kubenswrapper[4991]: I0929 09:38:45.926184 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:38:45 crc kubenswrapper[4991]: E0929 09:38:45.926484 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
Sep 29 09:38:46 crc kubenswrapper[4991]: I0929 09:38:46.926053 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:38:46 crc kubenswrapper[4991]: I0929 09:38:46.926183 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:38:46 crc kubenswrapper[4991]: I0929 09:38:46.926232 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:38:46 crc kubenswrapper[4991]: E0929 09:38:46.926405 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:38:46 crc kubenswrapper[4991]: E0929 09:38:46.926594 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:38:46 crc kubenswrapper[4991]: E0929 09:38:46.926772 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:38:47 crc kubenswrapper[4991]: I0929 09:38:47.926141 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:38:47 crc kubenswrapper[4991]: E0929 09:38:47.926591 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
Sep 29 09:38:47 crc kubenswrapper[4991]: I0929 09:38:47.927023 4991 scope.go:117] "RemoveContainer" containerID="55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f"
Sep 29 09:38:47 crc kubenswrapper[4991]: E0929 09:38:47.927265 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5"
Sep 29 09:38:48 crc kubenswrapper[4991]: I0929 09:38:48.925920 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:38:48 crc kubenswrapper[4991]: I0929 09:38:48.926069 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:38:48 crc kubenswrapper[4991]: E0929 09:38:48.926172 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:38:48 crc kubenswrapper[4991]: I0929 09:38:48.926236 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:38:48 crc kubenswrapper[4991]: E0929 09:38:48.926352 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:38:48 crc kubenswrapper[4991]: E0929 09:38:48.926411 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:38:49 crc kubenswrapper[4991]: I0929 09:38:49.926197 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:38:49 crc kubenswrapper[4991]: E0929 09:38:49.926420 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
Has your network provider started?"} Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.195434 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.195527 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.195547 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.195574 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.195594 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:50Z","lastTransitionTime":"2025-09-29T09:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.298229 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.298319 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.298338 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.298365 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.298379 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:50Z","lastTransitionTime":"2025-09-29T09:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.401380 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.401449 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.401475 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.401507 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.401529 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:50Z","lastTransitionTime":"2025-09-29T09:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.504391 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.504451 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.504466 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.504485 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.504500 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:50Z","lastTransitionTime":"2025-09-29T09:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.607134 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.607206 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.607225 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.607249 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.607264 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:50Z","lastTransitionTime":"2025-09-29T09:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.709798 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.709849 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.709859 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.709877 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.709889 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:50Z","lastTransitionTime":"2025-09-29T09:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.813196 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.813242 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.813253 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.813269 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.813280 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:50Z","lastTransitionTime":"2025-09-29T09:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.916314 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.916359 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.916372 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.916390 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.916403 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:50Z","lastTransitionTime":"2025-09-29T09:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.925619 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.925619 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:50 crc kubenswrapper[4991]: E0929 09:38:50.925809 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:38:50 crc kubenswrapper[4991]: I0929 09:38:50.925619 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:50 crc kubenswrapper[4991]: E0929 09:38:50.925996 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:38:50 crc kubenswrapper[4991]: E0929 09:38:50.926297 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.025693 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.025807 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.025825 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.025848 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.025863 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:51Z","lastTransitionTime":"2025-09-29T09:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.128632 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.128694 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.128711 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.128733 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.128755 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:51Z","lastTransitionTime":"2025-09-29T09:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.232113 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.232165 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.232175 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.232195 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.232208 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:51Z","lastTransitionTime":"2025-09-29T09:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.334874 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.334956 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.334969 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.334991 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.335005 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:51Z","lastTransitionTime":"2025-09-29T09:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.436875 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.436898 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.436909 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.436921 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.436930 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:51Z","lastTransitionTime":"2025-09-29T09:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.539893 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.539935 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.539966 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.539985 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.539998 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:51Z","lastTransitionTime":"2025-09-29T09:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.642928 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.643362 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.643463 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.643609 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.643698 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:51Z","lastTransitionTime":"2025-09-29T09:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.731797 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.731839 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.731852 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.731870 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.731885 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:51Z","lastTransitionTime":"2025-09-29T09:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.737113 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs\") pod \"network-metrics-daemon-7m5sp\" (UID: \"48c35818-43cb-4bbf-bf05-37fd375d0d70\") " pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:51 crc kubenswrapper[4991]: E0929 09:38:51.737346 4991 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 29 09:38:51 crc kubenswrapper[4991]: E0929 09:38:51.737412 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs podName:48c35818-43cb-4bbf-bf05-37fd375d0d70 nodeName:}" failed. No retries permitted until 2025-09-29 09:39:23.737392703 +0000 UTC m=+99.593320731 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs") pod "network-metrics-daemon-7m5sp" (UID: "48c35818-43cb-4bbf-bf05-37fd375d0d70") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 29 09:38:51 crc kubenswrapper[4991]: E0929 09:38:51.745892 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:51Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.750479 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.750530 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.750542 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.750561 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.750574 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:51Z","lastTransitionTime":"2025-09-29T09:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:51 crc kubenswrapper[4991]: E0929 09:38:51.763271 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:51Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.767823 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.767978 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
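The repeated "Error updating node status, will retry" entries above all fail for the same reason: the kubelet's status patch is intercepted by the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z, more than a month before this boot. Until that certificate is rotated, every retry fails identically. A minimal probe of the endpoint, assuming only Python 3 on the node plus the host and port shown in the log; the script itself is an illustrative sketch, not a CRC or OpenShift tool:

    import socket
    import ssl

    # Host and port are taken from the kubelet error above; everything else
    # in this script is an illustrative assumption.
    HOST, PORT = "127.0.0.1", 9743

    ctx = ssl.create_default_context()
    ctx.check_hostname = False  # the failure in the log is expiry, not the hostname

    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                # Reached only if the chain verifies against the default trust store.
                print("handshake OK, notAfter:", tls.getpeercert().get("notAfter"))
    except ssl.SSLCertVerificationError as e:
        # An expired serving certificate reproduces the kubelet's
        # "x509: certificate has expired or is not yet valid" error; a cert
        # from an untrusted internal CA also lands here, with a different
        # verify_message ("unable to get local issuer certificate").
        print("verification failed:", e.verify_message)
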
event="NodeHasNoDiskPressure" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.768067 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.768157 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.768255 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:51Z","lastTransitionTime":"2025-09-29T09:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:51 crc kubenswrapper[4991]: E0929 09:38:51.780918 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:51Z is after 2025-08-24T17:21:41Z"
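
Everything that follows traces back to the single fault visible above: the node-status patch is rejected because the serving certificate of the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-09-29. A quick way to confirm what certificate that endpoint is actually presenting is to dial it over TLS and print the validity window. The following is a minimal, illustrative Go sketch (not part of the log or of any OpenShift tooling); it sets InsecureSkipVerify so the handshake completes even though verification would fail:

// checkcert.go - print the validity window of the certificate served on
// 127.0.0.1:9743, the webhook endpoint named in the error above. Run on
// the node itself. InsecureSkipVerify is deliberate: it lets us inspect
// an expired certificate instead of failing the handshake.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
}
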
event="NodeHasNoDiskPressure" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.785771 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.785806 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.785830 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:51Z","lastTransitionTime":"2025-09-29T09:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:51 crc kubenswrapper[4991]: E0929 09:38:51.803072 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:51Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.807624 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.807708 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.807722 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.807747 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.807762 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:51Z","lastTransitionTime":"2025-09-29T09:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:51 crc kubenswrapper[4991]: E0929 09:38:51.826437 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:51Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:51 crc kubenswrapper[4991]: E0929 09:38:51.826569 4991 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.828581 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
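
The 09:38:51.826569 record closes the cycle: kubelet attempts the node-status patch a fixed number of times per sync before giving up (upstream kubelet caps this with a small constant, nodeStatusUpdateRetry = 5 in recent releases) and then starts a fresh cycle on the next sync interval, which is why the same "will retry" / "exceeds retry count" pair keeps recurring. A schematic of that bounded-retry shape, with hypothetical names rather than kubelet's actual code:

// Illustrative sketch of the bounded-retry pattern behind the
// "Error updating node status, will retry" / "exceeds retry count"
// pair above. Names and structure are hypothetical.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // kubelet uses a small fixed cap like this

func patchNodeStatus() error {
	// Stand-in for the PATCH that the admission webhook keeps rejecting.
	return errors.New("webhook certificate expired")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
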
event="NodeHasSufficientMemory" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.828614 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.828624 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.828644 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.828657 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:51Z","lastTransitionTime":"2025-09-29T09:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.926042 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:51 crc kubenswrapper[4991]: E0929 09:38:51.926239 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.930753 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.930824 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.930842 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.930872 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:51 crc kubenswrapper[4991]: I0929 09:38:51.930892 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:51Z","lastTransitionTime":"2025-09-29T09:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.034202 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.034245 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.034257 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.034273 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.034285 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:52Z","lastTransitionTime":"2025-09-29T09:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.137602 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.137652 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.137668 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.137687 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.137700 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:52Z","lastTransitionTime":"2025-09-29T09:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.241670 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.241718 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.241731 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.241755 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.241771 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:52Z","lastTransitionTime":"2025-09-29T09:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.344047 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.344126 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.344187 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.344216 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.344231 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:52Z","lastTransitionTime":"2025-09-29T09:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.446843 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.446896 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.446911 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.446934 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.446973 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:52Z","lastTransitionTime":"2025-09-29T09:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.549991 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.550058 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.550070 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.550089 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.550101 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:52Z","lastTransitionTime":"2025-09-29T09:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.658336 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.659014 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.659034 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.659058 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.659072 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:52Z","lastTransitionTime":"2025-09-29T09:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.761807 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.761845 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.761854 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.761869 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.761879 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:52Z","lastTransitionTime":"2025-09-29T09:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.864541 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.864584 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.864596 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.864612 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.864626 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:52Z","lastTransitionTime":"2025-09-29T09:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.926287 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.926557 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.926602 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:52 crc kubenswrapper[4991]: E0929 09:38:52.926728 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:38:52 crc kubenswrapper[4991]: E0929 09:38:52.927032 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:38:52 crc kubenswrapper[4991]: E0929 09:38:52.927129 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.941691 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.967453 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.967503 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.967514 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.967532 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:52 crc kubenswrapper[4991]: I0929 09:38:52.967543 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:52Z","lastTransitionTime":"2025-09-29T09:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.070115 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.070170 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.070184 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.070206 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.070268 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:53Z","lastTransitionTime":"2025-09-29T09:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.172890 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.173198 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.173265 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.173330 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.173401 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:53Z","lastTransitionTime":"2025-09-29T09:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.276160 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.276218 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.276232 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.276277 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.276291 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:53Z","lastTransitionTime":"2025-09-29T09:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.379250 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.379305 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.379317 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.379338 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.379352 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:53Z","lastTransitionTime":"2025-09-29T09:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.420717 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mm67g_f36a89bf-ee7b-4bf7-bc61-9ea099661bd1/kube-multus/0.log" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.421144 4991 generic.go:334] "Generic (PLEG): container finished" podID="f36a89bf-ee7b-4bf7-bc61-9ea099661bd1" containerID="17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c" exitCode=1 Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.421283 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mm67g" event={"ID":"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1","Type":"ContainerDied","Data":"17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c"} Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.422384 4991 scope.go:117] "RemoveContainer" containerID="17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.435613 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:53Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.451761 4991 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:53Z\\\",\\\"message\\\":\\\"2025-09-29T09:38:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_30aa0ad5-1f4f-4c88-8015-e08f481a11c7\\\\n2025-09-29T09:38:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30aa0ad5-1f4f-4c88-8015-e08f481a11c7 to /host/opt/cni/bin/\\\\n2025-09-29T09:38:08Z [verbose] multus-daemon started\\\\n2025-09-29T09:38:08Z [verbose] Readiness Indicator file check\\\\n2025-09-29T09:38:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:53Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.464629 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:53Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.478731 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ef368af019eb63f3b78fa8ac0205e0051494a013e5bd67e67f1c726d009d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a1fc13ff2c90eccfa2771cae750083e1785c63404c15e6cfef790e068315064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-fffvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:53Z is after 2025-08-24T17:21:41Z" Sep 29 
09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.482294 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.482339 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.482353 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.482373 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.482386 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:53Z","lastTransitionTime":"2025-09-29T09:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.491301 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f847f2e6-9666-468d-9b76-b4ca63c50a60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0bcc99ca7211df2cf767f8b7236698683d7ffd3eb5b813a1aefb3051f892e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a610727eb6690419f27ca63ee8e956e15bf4aa03fe80944201e5e5f1071ceb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a610727eb6690419f27ca63ee8e956e15bf4aa03fe80944201e5e5f1071ceb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:53Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.507631 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7dc287a2d0423b7ecc7e79a74931504604984d0fe25bef40c3e618c892ead53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 
09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:53Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.521771 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:53Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.534652 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ab495a-4226-431c-b5f6-5107ba0ae2cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c904d6235314289c0468f23e9e99098bf3318797b1a1ff934e481019343a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7356b77ec51f4abe19914e817796ea42fc29176d4b9ba9abd68aa7287f53578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df4f33084636102a5acdcfbc239fedbbb1ac96d58b5b4f3bd4dd74ebad6e3414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:53Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.548347 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:53Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.585445 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.585514 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.585528 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.585549 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.585564 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:53Z","lastTransitionTime":"2025-09-29T09:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.591995 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:53Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.627992 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:53Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.645015 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:53Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.669217 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"ormers/externalversions/factory.go:140\\\\nI0929 09:38:31.068141 6638 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0929 09:38:31.068617 6638 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0929 09:38:31.068641 6638 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0929 09:38:31.068667 6638 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:31.068674 6638 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:31.068690 6638 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:31.068700 6638 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0929 09:38:31.068729 6638 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:31.068739 6638 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:31.068988 6638 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0929 09:38:31.069008 6638 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0929 09:38:31.069047 6638 factory.go:656] Stopping watch factory\\\\nI0929 09:38:31.069061 6638 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0929 09:38:31.069075 6638 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:53Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.686903 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48c35818-43cb-4bbf-bf05-37fd375d0d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7m5sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:53Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.688588 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.689825 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.689842 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.689860 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.689872 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:53Z","lastTransitionTime":"2025-09-29T09:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.704785 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:53Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.720118 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:53Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.734114 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:53Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.750383 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:53Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.793044 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.793122 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.793136 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:53 crc 
kubenswrapper[4991]: I0929 09:38:53.793157 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.793171 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:53Z","lastTransitionTime":"2025-09-29T09:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.896284 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.896336 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.896349 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.896365 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.896374 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:53Z","lastTransitionTime":"2025-09-29T09:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.926121 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:53 crc kubenswrapper[4991]: E0929 09:38:53.926298 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.999063 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.999110 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.999120 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.999136 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:53 crc kubenswrapper[4991]: I0929 09:38:53.999146 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:53Z","lastTransitionTime":"2025-09-29T09:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.102068 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.102133 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.102154 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.102179 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.102196 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:54Z","lastTransitionTime":"2025-09-29T09:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.204843 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.204977 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.204990 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.205012 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.205030 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:54Z","lastTransitionTime":"2025-09-29T09:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.311691 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.311781 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.311798 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.311821 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.311846 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:54Z","lastTransitionTime":"2025-09-29T09:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.415682 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.415730 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.415742 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.415758 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.415768 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:54Z","lastTransitionTime":"2025-09-29T09:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.427031 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mm67g_f36a89bf-ee7b-4bf7-bc61-9ea099661bd1/kube-multus/0.log" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.427110 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mm67g" event={"ID":"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1","Type":"ContainerStarted","Data":"93b598d670e53dad1dc4fbce312b96e5b3135101a1dd6b1b7440d8e70da068f0"} Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.446140 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.462901 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.477895 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.495694 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.509368 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.518690 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.518747 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.518763 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.518788 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.518800 4991 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:54Z","lastTransitionTime":"2025-09-29T09:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.523741 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.536566 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.558930 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"ormers/externalversions/factory.go:140\\\\nI0929 09:38:31.068141 6638 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0929 09:38:31.068617 6638 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0929 09:38:31.068641 6638 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0929 09:38:31.068667 6638 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:31.068674 6638 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:31.068690 6638 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:31.068700 6638 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0929 09:38:31.068729 6638 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:31.068739 6638 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:31.068988 6638 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0929 09:38:31.069008 6638 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0929 09:38:31.069047 6638 factory.go:656] Stopping watch factory\\\\nI0929 09:38:31.069061 6638 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0929 09:38:31.069075 6638 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.573300 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48c35818-43cb-4bbf-bf05-37fd375d0d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7m5sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.588739 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.600763 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ef368af019eb63f3b78fa8ac0205e0051494a013e5bd67e67f1c726d009d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a1fc13ff2c90eccfa2771cae750083e1785c63404c15e6cfef790e068315064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-fffvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 
09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.613434 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f847f2e6-9666-468d-9b76-b4ca63c50a60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0bcc99ca7211df2cf767f8b7236698683d7ffd3eb5b813a1aefb3051f892e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a610727eb6690419f27ca63ee8e956e15bf4aa03fe80944201e5e5f1071ceb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a610727eb6690419f27ca63ee8e956e15bf4aa03fe80944201e5e5f1071ceb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.621706 4991 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.621747 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.621755 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.621771 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.621787 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:54Z","lastTransitionTime":"2025-09-29T09:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.628792 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7dc287a2d0423b7ecc7e79a74931504604984d0fe25bef40c3e618c892ead53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.641732 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.653619 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ab495a-4226-431c-b5f6-5107ba0ae2cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c904d6235314289c0468f23e9e99098bf3318797b1a1ff934e481019343a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7356b77ec51f4abe19914e817796ea42fc29176d4b9ba9abd68aa7287f53578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df4f33084636102a5acdcfbc239fedbbb1ac96d58b5b4f3bd4dd74ebad6e3414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.665046 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.680085 4991 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b598d670e53dad1dc4fbce312b96e5b3135101a1dd6b1b7440d8e70da068f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:53Z\\\",\\\"message\\\":\\\"2025-09-29T09:38:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_30aa0ad5-1f4f-4c88-8015-e08f481a11c7\\\\n2025-09-29T09:38:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30aa0ad5-1f4f-4c88-8015-e08f481a11c7 to /host/opt/cni/bin/\\\\n2025-09-29T09:38:08Z [verbose] multus-daemon started\\\\n2025-09-29T09:38:08Z [verbose] Readiness Indicator file check\\\\n2025-09-29T09:38:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.693623 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.724981 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.725013 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.725022 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.725055 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.725068 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:54Z","lastTransitionTime":"2025-09-29T09:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.828127 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.828165 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.828174 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.828189 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.828199 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:54Z","lastTransitionTime":"2025-09-29T09:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.925549 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.925565 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.925605 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:54 crc kubenswrapper[4991]: E0929 09:38:54.925836 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:38:54 crc kubenswrapper[4991]: E0929 09:38:54.925988 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:38:54 crc kubenswrapper[4991]: E0929 09:38:54.926077 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.930363 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.930420 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.930432 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.930453 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.930466 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:54Z","lastTransitionTime":"2025-09-29T09:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.942464 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.956517 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.970053 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:54 crc kubenswrapper[4991]: I0929 09:38:54.988824 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"ormers/externalversions/factory.go:140\\\\nI0929 09:38:31.068141 6638 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0929 09:38:31.068617 6638 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0929 09:38:31.068641 6638 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0929 09:38:31.068667 6638 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:31.068674 6638 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:31.068690 6638 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:31.068700 6638 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0929 09:38:31.068729 6638 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:31.068739 6638 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:31.068988 6638 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0929 09:38:31.069008 6638 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0929 09:38:31.069047 6638 factory.go:656] Stopping watch factory\\\\nI0929 09:38:31.069061 6638 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0929 09:38:31.069075 6638 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.001422 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48c35818-43cb-4bbf-bf05-37fd375d0d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7m5sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:54Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.015963 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:55Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.032029 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.032382 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.032107 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ef368af019eb63f3b78fa8ac0205e0051494a013e5bd67e67f1c726d009d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a1fc13ff2c90eccfa2771cae750083e1785c63404c15e6cfef790e068315064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-fffvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:55Z is after 2025-08-24T17:21:41Z" Sep 29 
09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.032450 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.032675 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.032702 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:55Z","lastTransitionTime":"2025-09-29T09:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.045752 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f847f2e6-9666-468d-9b76-b4ca63c50a60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0bcc99ca7211df2cf767f8b7236698683d7ffd3eb5b813a1aefb3051f892e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a610727eb6690419f27ca63ee8e956e15bf4aa03fe80944201e5e5f1071ceb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a610727eb6690419f27ca63ee8e956e15bf4aa03fe80944201e5e5f1071ceb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Com
pleted\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:55Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.061483 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d
\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7dc287a2d0423b7ecc7e79a74931504604984d0fe25bef40c3e618c892ead53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:55Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.077271 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:55Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.092705 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ab495a-4226-431c-b5f6-5107ba0ae2cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c904d6235314289c0468f23e9e99098bf3318797b1a1ff934e481019343a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7356b77ec51f4abe19914e817796ea42fc29176d4b9ba9abd68aa7287f53578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df4f33084636102a5acdcfbc239fedbbb1ac96d58b5b4f3bd4dd74ebad6e3414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:55Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.105258 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
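Every status patch in this stretch fails the same way: the kubelet's PATCH is intercepted by the "pod.network-node-identity.openshift.io" validating webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z, over a month before the current time of 2025-09-29T09:38:55Z, so every TLS handshake is rejected before the patch is even evaluated. Below is a minimal Go sketch of the same x509 validity check the handshake performs; the certificate path is an assumption for illustration (the webhook container mounts its cert under /etc/webhook-cert/, per the volumeMounts recorded further down in this log).

// certcheck.go - sketch: load a PEM certificate and compare its validity
// window against the clock, reproducing the "certificate has expired or
// is not yet valid" failure above. The path is hypothetical.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/etc/webhook-cert/tls.crt") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		// Mirrors the log: "current time <now> is after <NotAfter>".
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid until %s\n", cert.NotBefore.Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}

Once the webhook's serving certificate is rotated, the handshake succeeds and these queued status patches should apply unchanged.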
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:55Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.118620 4991 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b598d670e53dad1dc4fbce312b96e5b3135101a1dd6b1b7440d8e70da068f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:53Z\\\",\\\"message\\\":\\\"2025-09-29T09:38:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_30aa0ad5-1f4f-4c88-8015-e08f481a11c7\\\\n2025-09-29T09:38:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30aa0ad5-1f4f-4c88-8015-e08f481a11c7 to /host/opt/cni/bin/\\\\n2025-09-29T09:38:08Z [verbose] multus-daemon started\\\\n2025-09-29T09:38:08Z [verbose] Readiness Indicator file check\\\\n2025-09-29T09:38:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:55Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.130491 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status 
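The kube-multus restart recorded above (exit code 1, restartCount 1) is a readiness-indicator timeout rather than a crash: the container copied its CNI binaries, then polled for /host/run/multus/cni/net.d/10-ovn-kubernetes.conf from 09:38:08 to 09:38:53 and gave up because ovn-kubernetes had not yet written its config. A dependency-free sketch of that poll-until-file-exists pattern follows; the interval and ~45s window are inferred from the timestamps in the message (the "pollimmediate error" wording suggests the upstream wait.PollImmediate helper, but this is not multus's actual implementation).

// readinesspoll.go - sketch: wait for a readiness-indicator file to
// appear, failing with the same "timed out waiting for the condition"
// text seen in the log. Interval/timeout values are assumptions.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForFile polls every interval until path exists or timeout elapses.
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // indicator file is present
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	const indicator = "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	if err := waitForFile(indicator, time.Second, 45*time.Second); err != nil {
		fmt.Println("readiness indicator check failed:", err)
		os.Exit(1)
	}
	fmt.Println("default network is ready")
}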
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:55Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.135699 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.135730 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.135741 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.135759 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.135773 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:55Z","lastTransitionTime":"2025-09-29T09:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.146413 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:55Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.167744 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
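The NodeNotReady condition above is the container runtime's network readiness check: the runtime reports NetworkReady=false until at least one CNI configuration file exists under /etc/kubernetes/cni/net.d/, and with ovn-kubernetes still coming up the directory is empty. A sketch of that check, assuming the conventional .conf/.conflist/.json extensions:

// cniconfcheck.go - sketch: report whether any CNI network config is
// present, emitting the same complaint as the kubelet when none is.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil && !os.IsNotExist(err) {
		fmt.Println("cannot read CNI config dir:", err)
		os.Exit(1)
	}
	var confs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Printf("no CNI configuration file in %s/. Has your network provider started?\n", dir)
		os.Exit(1)
	}
	fmt.Println("found CNI configs:", confs)
}

Until a config appears there, the kubelet keeps the Ready condition False and skips syncing pods that need a new sandbox, which explains the repeated records that follow.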
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:55Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.183190 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:55Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.200696 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11
Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:38:55Z is after 2025-08-24T17:21:41Z" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.238751 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.238799 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.238810 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.238824 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.238833 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:55Z","lastTransitionTime":"2025-09-29T09:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.341139 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.341192 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.341205 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.341225 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.341237 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:55Z","lastTransitionTime":"2025-09-29T09:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.443792 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.443834 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.443845 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.443866 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.443877 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:55Z","lastTransitionTime":"2025-09-29T09:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.547564 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.547604 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.547612 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.547629 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.547638 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:55Z","lastTransitionTime":"2025-09-29T09:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.650267 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.650330 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.650344 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.650365 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.650378 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:55Z","lastTransitionTime":"2025-09-29T09:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.752799 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.752842 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.752851 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.752867 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.752876 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:55Z","lastTransitionTime":"2025-09-29T09:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.854909 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.854993 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.855010 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.855032 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.855046 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:55Z","lastTransitionTime":"2025-09-29T09:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.925770 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:55 crc kubenswrapper[4991]: E0929 09:38:55.925971 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.958480 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.958532 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.958542 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.958563 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:55 crc kubenswrapper[4991]: I0929 09:38:55.958577 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:55Z","lastTransitionTime":"2025-09-29T09:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.061733 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.061783 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.061795 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.061812 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.061825 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:56Z","lastTransitionTime":"2025-09-29T09:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.164972 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.165025 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.165041 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.165066 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.165084 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:56Z","lastTransitionTime":"2025-09-29T09:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.267986 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.268039 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.268057 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.268080 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.268095 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:56Z","lastTransitionTime":"2025-09-29T09:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.370711 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.371099 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.371183 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.371281 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.371364 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:56Z","lastTransitionTime":"2025-09-29T09:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.475751 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.475791 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.475801 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.475817 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.475827 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:56Z","lastTransitionTime":"2025-09-29T09:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.577767 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.577810 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.577826 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.577845 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.577856 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:56Z","lastTransitionTime":"2025-09-29T09:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.680033 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.680090 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.680099 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.680118 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.680129 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:56Z","lastTransitionTime":"2025-09-29T09:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.782768 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.783122 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.783200 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.783267 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.783323 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:56Z","lastTransitionTime":"2025-09-29T09:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.886836 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.886890 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.886900 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.886937 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.886965 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:56Z","lastTransitionTime":"2025-09-29T09:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.925700 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.925755 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.925854 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:56 crc kubenswrapper[4991]: E0929 09:38:56.925890 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:38:56 crc kubenswrapper[4991]: E0929 09:38:56.926157 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:38:56 crc kubenswrapper[4991]: E0929 09:38:56.926249 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.989388 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.989424 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.989433 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.989447 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:56 crc kubenswrapper[4991]: I0929 09:38:56.989458 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:56Z","lastTransitionTime":"2025-09-29T09:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.092783 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.092824 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.092834 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.092852 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.092862 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:57Z","lastTransitionTime":"2025-09-29T09:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.196068 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.196158 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.196176 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.196207 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.196226 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:57Z","lastTransitionTime":"2025-09-29T09:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.299770 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.299819 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.299832 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.299852 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.299865 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:57Z","lastTransitionTime":"2025-09-29T09:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.403344 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.403713 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.403820 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.403914 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.404012 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:57Z","lastTransitionTime":"2025-09-29T09:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.506804 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.506850 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.506861 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.506914 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.506927 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:57Z","lastTransitionTime":"2025-09-29T09:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.610009 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.610140 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.610157 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.610174 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.610184 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:57Z","lastTransitionTime":"2025-09-29T09:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.713395 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.713449 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.713463 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.713482 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.713495 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:57Z","lastTransitionTime":"2025-09-29T09:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.816806 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.816864 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.816875 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.816896 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.816909 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:57Z","lastTransitionTime":"2025-09-29T09:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.921791 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.921853 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.921878 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.921908 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.921930 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:57Z","lastTransitionTime":"2025-09-29T09:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:57 crc kubenswrapper[4991]: I0929 09:38:57.925391 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:57 crc kubenswrapper[4991]: E0929 09:38:57.925532 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
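Each "No sandbox for pod can be found" line paired with an "Error syncing pod, skipping" error above comes from one gate: a pod that needs a fresh sandbox cannot get one while the runtime network is down, unless it runs on the host network. A rough sketch of that decision, with illustrative names rather than the kubelet's own:

    package main

    import (
        "errors"
        "fmt"
    )

    // pod is a stand-in for the kubelet's much richer pod object.
    type pod struct {
        name        string
        hostNetwork bool
    }

    var errNetworkNotReady = errors.New(
        "network is not ready: container runtime network not ready: NetworkReady=false")

    // syncPod mirrors the shape of the check, not kubelet internals:
    // sandbox creation for a non-host-network pod is refused until the
    // container runtime reports its network as ready.
    func syncPod(p pod, networkReady bool) error {
        if !networkReady && !p.hostNetwork {
            return errNetworkNotReady // surfaces as "Error syncing pod, skipping"
        }
        return nil // would proceed to create the sandbox
    }

    func main() {
        p := pod{name: "openshift-multus/network-metrics-daemon-7m5sp"}
        if err := syncPod(p, false); err != nil {
            fmt.Printf("Error syncing pod, skipping: %v pod=%q\n", err, p.name)
        }
    }

Host-network pods are exempt from the gate, which is why only the CNI-backed pods in this excerpt stay stuck while the sync is retried.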
pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.024590 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.024644 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.024657 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.024680 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.024696 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:58Z","lastTransitionTime":"2025-09-29T09:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.127255 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.127325 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.127338 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.127356 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.127369 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:58Z","lastTransitionTime":"2025-09-29T09:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.229843 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.229897 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.229912 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.229931 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.229941 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:58Z","lastTransitionTime":"2025-09-29T09:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.332212 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.332259 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.332268 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.332285 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.332295 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:58Z","lastTransitionTime":"2025-09-29T09:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.435135 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.435187 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.435200 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.435219 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.435239 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:58Z","lastTransitionTime":"2025-09-29T09:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.537869 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.537918 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.537930 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.537985 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.537998 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:58Z","lastTransitionTime":"2025-09-29T09:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.641039 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.641202 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.641217 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.641236 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.641248 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:58Z","lastTransitionTime":"2025-09-29T09:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.744281 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.744324 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.744335 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.744353 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.744367 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:58Z","lastTransitionTime":"2025-09-29T09:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.846983 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.847040 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.847052 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.847068 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.847080 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:58Z","lastTransitionTime":"2025-09-29T09:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.925850 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.925948 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.926087 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:38:58 crc kubenswrapper[4991]: E0929 09:38:58.926324 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:38:58 crc kubenswrapper[4991]: E0929 09:38:58.926452 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:38:58 crc kubenswrapper[4991]: E0929 09:38:58.926561 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
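The repeated complaint names the failing check precisely: the CNI machinery scans a configuration directory and reports NetworkPluginNotReady for as long as it finds no network config there. A minimal sketch of such a scan; the extension list mirrors what libcni conventionally accepts (.conf, .conflist, .json) and is an assumption here, not something this log states:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Println("cannot read CNI conf dir:", err)
            return
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("CNI config present:", e.Name())
                found = true
            }
        }
        if !found {
            // The state the kubelet keeps reporting above.
            fmt.Println("no CNI configuration file in", confDir)
        }
    }

Once the network provider writes its configuration into that directory, a subsequent sync should see NetworkReady=true and the NodeNotReady block above stops repeating.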
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.949601 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.949649 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.949662 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.949680 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:58 crc kubenswrapper[4991]: I0929 09:38:58.949691 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:58Z","lastTransitionTime":"2025-09-29T09:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.051696 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.051729 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.051737 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.051752 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.051760 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:59Z","lastTransitionTime":"2025-09-29T09:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.153984 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.154032 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.154046 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.154065 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.154078 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:59Z","lastTransitionTime":"2025-09-29T09:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.256775 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.256825 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.256836 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.256856 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.256868 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:59Z","lastTransitionTime":"2025-09-29T09:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.359801 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.359844 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.359855 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.359871 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.359881 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:59Z","lastTransitionTime":"2025-09-29T09:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.461885 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.461934 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.461949 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.461997 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.462011 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:59Z","lastTransitionTime":"2025-09-29T09:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.566072 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.566136 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.566149 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.566642 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.566660 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:59Z","lastTransitionTime":"2025-09-29T09:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.669076 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.669128 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.669154 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.669177 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.669193 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:59Z","lastTransitionTime":"2025-09-29T09:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.772068 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.772099 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.772109 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.772146 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.772161 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:59Z","lastTransitionTime":"2025-09-29T09:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.875334 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.875378 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.875388 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.875404 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.875414 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:59Z","lastTransitionTime":"2025-09-29T09:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.925917 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:38:59 crc kubenswrapper[4991]: E0929 09:38:59.926070 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
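At this density the excerpt is easier to audit mechanically than by eye. A small self-contained filter, assuming the journal has been split one entry per input line; the two patterns match only the record shapes visible above:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    func main() {
        notReady := 0
        stuck := map[string]string{} // pod -> podUID
        podRe := regexp.MustCompile(`pod="([^"]+)" podUID="([^"]+)"`)

        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // these entries run long
        for sc.Scan() {
            line := sc.Text()
            if strings.Contains(line, `"Node became not ready"`) {
                notReady++
            }
            if strings.Contains(line, `"Error syncing pod, skipping"`) {
                if m := podRe.FindStringSubmatch(line); m != nil {
                    stuck[m[1]] = m[2]
                }
            }
        }
        fmt.Printf("NodeNotReady records: %d\n", notReady)
        for pod, uid := range stuck {
            fmt.Printf("stuck pod: %s (podUID %s)\n", pod, uid)
        }
    }

Fed this window of the journal, it reduces several seconds of output to one counter and the four pods (network-check-source, network-check-target, networking-console-plugin, network-metrics-daemon) that are blocked on CNI.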
pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.978819 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.978902 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.978921 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.979368 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:38:59 crc kubenswrapper[4991]: I0929 09:38:59.979421 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:38:59Z","lastTransitionTime":"2025-09-29T09:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.083550 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.083611 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.083626 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.083647 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.083661 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:00Z","lastTransitionTime":"2025-09-29T09:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.186728 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.186822 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.186845 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.186884 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.186908 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:00Z","lastTransitionTime":"2025-09-29T09:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.295563 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.295615 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.295628 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.295649 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.295661 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:00Z","lastTransitionTime":"2025-09-29T09:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.398534 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.398622 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.398645 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.398677 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.398702 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:00Z","lastTransitionTime":"2025-09-29T09:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.501839 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.501904 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.501926 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.501969 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.501985 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:00Z","lastTransitionTime":"2025-09-29T09:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.606031 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.606102 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.606120 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.606140 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.606173 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:00Z","lastTransitionTime":"2025-09-29T09:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.709751 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.709820 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.709838 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.709865 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.709883 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:00Z","lastTransitionTime":"2025-09-29T09:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.813856 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.813988 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.814004 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.814027 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.814037 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:00Z","lastTransitionTime":"2025-09-29T09:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.916720 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.916761 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.916770 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.916785 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.916798 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:00Z","lastTransitionTime":"2025-09-29T09:39:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.926134 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.926185 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:39:00 crc kubenswrapper[4991]: E0929 09:39:00.926257 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:39:00 crc kubenswrapper[4991]: I0929 09:39:00.926405 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:39:00 crc kubenswrapper[4991]: E0929 09:39:00.926471 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:39:00 crc kubenswrapper[4991]: E0929 09:39:00.926721 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.019571 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.019617 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.019631 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.019649 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.019661 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:01Z","lastTransitionTime":"2025-09-29T09:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.123126 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.123185 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.123204 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.123229 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.123246 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:01Z","lastTransitionTime":"2025-09-29T09:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.226261 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.226338 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.226351 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.226367 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.226378 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:01Z","lastTransitionTime":"2025-09-29T09:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.329914 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.329978 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.329990 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.330009 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.330020 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:01Z","lastTransitionTime":"2025-09-29T09:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.435445 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.435528 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.435564 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.435593 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.435621 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:01Z","lastTransitionTime":"2025-09-29T09:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.538027 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.538078 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.538087 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.538100 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.538128 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:01Z","lastTransitionTime":"2025-09-29T09:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.641640 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.641716 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.641734 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.641760 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.641782 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:01Z","lastTransitionTime":"2025-09-29T09:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.745307 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.745394 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.745416 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.745446 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.745474 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:01Z","lastTransitionTime":"2025-09-29T09:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.848704 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.848747 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.848756 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.848772 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.848785 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:01Z","lastTransitionTime":"2025-09-29T09:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.900176 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.900231 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.900243 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.900263 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.900275 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:01Z","lastTransitionTime":"2025-09-29T09:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:01 crc kubenswrapper[4991]: E0929 09:39:01.915538 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:01Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.919587 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.919637 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
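The error just above is the root cause of this whole stretch of log: every node-status PATCH for node "crc" is routed through the validating webhook node.network-node-identity.openshift.io at https://127.0.0.1:9743, and the Go TLS client rejects the webhook's serving certificate because its NotAfter (2025-08-24T17:21:41Z) is more than a month behind the node clock (2025-09-29T09:39:01Z). On CRC this is a common symptom of starting a VM whose internally signed certificates lapsed while it was powered off. A minimal Go sketch of the same check, fetching the live certificate from the endpoint named in the log (InsecureSkipVerify is used only so the handshake completes and the certificate can be inspected, never to trust it):

```go
// Diagnostic sketch: fetch the webhook's serving certificate and repeat
// the x509 validity-window check that fails in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Endpoint taken from the log; skip verification only to inspect the cert.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now().UTC()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("x509: certificate is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	case now.After(cert.NotAfter):
		// This branch matches the log: "current time ... is after 2025-08-24T17:21:41Z".
		fmt.Printf("x509: certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```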
event="NodeHasNoDiskPressure" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.919650 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.919669 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.919684 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:01Z","lastTransitionTime":"2025-09-29T09:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.926179 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:39:01 crc kubenswrapper[4991]: E0929 09:39:01.926346 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.927994 4991 scope.go:117] "RemoveContainer" containerID="55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f" Sep 29 09:39:01 crc kubenswrapper[4991]: E0929 09:39:01.934648 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:01Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.945192 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.945231 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
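The rejected body, repeated verbatim on every retry, is an ordinary strategic merge patch against the node's status subresource: the $setElementOrder/conditions directive pins the order of the conditions list, which is merged by its "type" key; then come the refreshed allocatable and capacity figures, the four condition heartbeats, the cached image list (sorted by sizeBytes, largest first) and nodeInfo. A simplified sketch of that shape, using stand-in types rather than the real k8s.io/api/core/v1 structs and strategic-merge-patch helpers the kubelet actually uses:

```go
// Stand-in reconstruction of the patch body seen in the log (abridged).
package main

import (
	"encoding/json"
	"fmt"
)

type condition struct {
	Type               string `json:"type"`
	Status             string `json:"status,omitempty"`
	Reason             string `json:"reason,omitempty"`
	Message            string `json:"message,omitempty"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime,omitempty"`
	LastTransitionTime string `json:"lastTransitionTime,omitempty"`
}

func main() {
	patch := map[string]any{
		"status": map[string]any{
			// Directive: merge the conditions list by "type", keeping this order.
			"$setElementOrder/conditions": []condition{
				{Type: "MemoryPressure"}, {Type: "DiskPressure"},
				{Type: "PIDPressure"}, {Type: "Ready"},
			},
			"conditions": []condition{{
				Type:               "Ready",
				Status:             "False",
				Reason:             "KubeletNotReady",
				Message:            "container runtime network not ready: ...",
				LastHeartbeatTime:  "2025-09-29T09:39:01Z",
				LastTransitionTime: "2025-09-29T09:39:01Z",
			}},
			"allocatable": map[string]string{"cpu": "11800m", "memory": "32404560Ki"},
			"capacity":    map[string]string{"cpu": "12", "memory": "32865360Ki"},
		},
	}
	body, err := json.Marshal(patch)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
	// Sent as: PATCH /api/v1/nodes/crc/status
	// Content-Type: application/strategic-merge-patch+json
}
```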
event="NodeHasNoDiskPressure" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.945243 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.945258 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.945269 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:01Z","lastTransitionTime":"2025-09-29T09:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:01 crc kubenswrapper[4991]: E0929 09:39:01.958744 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:01Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.965678 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.965753 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
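The capacity and allocatable figures in the patch differ by a fixed reservation: 12 CPUs minus 200m leaves the advertised 11800m, and 32865360Ki minus 460800Ki (exactly 450 MiB) leaves 32404560Ki. A sketch of that subtraction follows; the concrete reservation values here are derived from the figures in the log, while on a real kubelet they come from the kube-reserved, system-reserved and eviction-threshold settings, applied per resource:

```go
// Sketch of the allocatable computation behind the patch figures:
// allocatable = capacity - reserved (simplified; the kubelet folds in
// kube-reserved, system-reserved and eviction thresholds per resource).
package main

import "fmt"

func main() {
	capacityCPUm := int64(12 * 1000) // "capacity":{"cpu":"12"}
	reservedCPUm := int64(200)       // implied by the log figures
	capacityMemKi := int64(32865360) // "capacity":{"memory":"32865360Ki"}
	reservedMemKi := int64(460800)   // exactly 450 MiB, implied by the log figures

	fmt.Printf("allocatable cpu: %dm\n", capacityCPUm-reservedCPUm)       // 11800m
	fmt.Printf("allocatable memory: %dKi\n", capacityMemKi-reservedMemKi) // 32404560Ki
}
```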
event="NodeHasNoDiskPressure" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.965775 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.965803 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.965821 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:01Z","lastTransitionTime":"2025-09-29T09:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:01 crc kubenswrapper[4991]: E0929 09:39:01.982750 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:01Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.987190 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.987235 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
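Note the cadence of the failures: 09:39:01.915, .934, .958, .982, and one more just below at 09:39:02.000. That is the kubelet's bounded retry loop: upstream uses nodeStatusUpdateRetry = 5 attempts per sync before giving up with "update node status exceeds retry count". A sketch of the pattern (the constant and messages mirror the kubelet's, everything else is a simplified stand-in):

```go
// Bounded-retry sketch of the node-status update loop seen above.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // matches the five E0929 entries in the log

func updateNodeStatus(tryPatch func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryPatch(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	// Simulate the failure mode in the log: the webhook rejects every attempt.
	webhookDown := func() error {
		return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": ` +
			`tls: failed to verify certificate: x509: certificate has expired or is not yet valid`)
	}
	if err := updateNodeStatus(webhookDown); err != nil {
		fmt.Println(err)
	}
}
```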
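Underneath the webhook noise, the Ready=False condition itself is accurate: the runtime reports NetworkReady=false because nothing has yet written a CNI config into /etc/kubernetes/cni/net.d/, and the networking stack that would write one appears stalled behind the same expired certificate (the failing webhook belongs to OVN-Kubernetes' network-node-identity component). A minimal stand-in for the directory check; the real logic lives in libcni/ocicni and also parses and validates the config contents:

```go
// Simplified stand-in for the readiness check behind "no CNI configuration
// file in /etc/kubernetes/cni/net.d/": scan the conf dir for usable configs.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cniConfigPresent(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
	switch {
	case err != nil:
		fmt.Println("cannot read CNI conf dir:", err)
	case !ok:
		fmt.Println("NetworkReady=false reason:NetworkPluginNotReady: no CNI configuration file")
	default:
		fmt.Println("NetworkReady=true")
	}
}
```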
event="NodeHasNoDiskPressure" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.987248 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.987265 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:01 crc kubenswrapper[4991]: I0929 09:39:01.987276 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:01Z","lastTransitionTime":"2025-09-29T09:39:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:02 crc kubenswrapper[4991]: E0929 09:39:02.000335 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:01Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:02 crc kubenswrapper[4991]: E0929 09:39:02.000506 4991 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.002183 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.002252 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.002271 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.002297 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.002315 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:02Z","lastTransitionTime":"2025-09-29T09:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.105599 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.105657 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.105673 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.105696 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.105708 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:02Z","lastTransitionTime":"2025-09-29T09:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.209011 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.209050 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.209058 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.209073 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.209082 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:02Z","lastTransitionTime":"2025-09-29T09:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.313662 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.313702 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.313712 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.313726 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.313737 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:02Z","lastTransitionTime":"2025-09-29T09:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.416538 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.416584 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.416593 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.416609 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.416619 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:02Z","lastTransitionTime":"2025-09-29T09:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.454342 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovnkube-controller/2.log" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.456913 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerStarted","Data":"ce2f8d15feab9327b16d994783fdfafd8a8eb66e985bcf1aa1306160b78390fa"} Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.457829 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.473674 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\
\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:02Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.489454 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:02Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.504383 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:02Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.519473 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.519509 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.519518 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.519535 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.519546 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:02Z","lastTransitionTime":"2025-09-29T09:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.520632 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:02Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.533266 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:02Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.559730 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2f8d15feab9327b16d994783fdfafd8a8eb66e985bcf1aa1306160b78390fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"ormers/externalversions/factory.go:140\\\\nI0929 09:38:31.068141 6638 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0929 09:38:31.068617 6638 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0929 09:38:31.068641 6638 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0929 09:38:31.068667 6638 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:31.068674 6638 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:31.068690 6638 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:31.068700 6638 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0929 09:38:31.068729 6638 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:31.068739 6638 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:31.068988 6638 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0929 09:38:31.069008 6638 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0929 09:38:31.069047 6638 factory.go:656] Stopping watch factory\\\\nI0929 09:38:31.069061 6638 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0929 09:38:31.069075 6638 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:02Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.572888 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48c35818-43cb-4bbf-bf05-37fd375d0d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.572888 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48c35818-43cb-4bbf-bf05-37fd375d0d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7m5sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:02Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.585228 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:02Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.597321 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:02Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.611767 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:02Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.622438 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.622491 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.622500 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.622519 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.622530 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:02Z","lastTransitionTime":"2025-09-29T09:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.624670 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ab495a-4226-431c-b5f6-5107ba0ae2cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c904d6235314289c0468f23e9e99098bf3318797b1a1ff934e481019343a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7356b77ec51f4abe19914e817796ea42fc29176d4b9ba9abd68aa7287f53578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df4f33084636102a5acdcfbc239fedbbb1ac96d58b5b4f3bd4dd74ebad6e3414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:02Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.636923 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:02Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.650540 4991 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b598d670e53dad1dc4fbce312b96e5b3135101a1dd6b1b7440d8e70da068f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:53Z\\\",\\\"message\\\":\\\"2025-09-29T09:38:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_30aa0ad5-1f4f-4c88-8015-e08f481a11c7\\\\n2025-09-29T09:38:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30aa0ad5-1f4f-4c88-8015-e08f481a11c7 to /host/opt/cni/bin/\\\\n2025-09-29T09:38:08Z [verbose] multus-daemon started\\\\n2025-09-29T09:38:08Z [verbose] Readiness Indicator file check\\\\n2025-09-29T09:38:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:02Z is after 2025-08-24T17:21:41Z"
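The kube-multus termination message above records the same dependency from the other side: multus copied its CNI plugin binaries, started its daemon, then polled for a readiness indicator file that the default network (ovn-kubernetes) never wrote, and exited 1 at the 09:38:53 timeout. "timed out waiting for the condition" is the stock error text of the Kubernetes wait.Poll helpers; here is a stdlib-only Go sketch with the same shape (the 1s interval and 45s timeout are illustrative, not multus's actual settings).

```go
// A minimal sketch of the readiness-indicator wait that times out in the
// kube-multus log above: poll for a file the default network plugin is
// supposed to write, give up after a deadline. The file never appears here
// because ovnkube-controller is crash-looping on the expired webhook cert.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// pollImmediate mirrors the semantics of the apimachinery wait.PollImmediate
// helper: check once up front, then on every tick until the condition holds
// or the timeout elapses.
func pollImmediate(interval, timeout time.Duration, cond func() bool) error {
	deadline := time.Now().Add(timeout)
	for {
		if cond() {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	indicator := "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	err := pollImmediate(1*time.Second, 45*time.Second, func() bool {
		_, statErr := os.Stat(indicator)
		return statErr == nil // file exists -> default network is ready
	})
	if err != nil {
		fmt.Println("readiness indicator never appeared:", err)
		os.Exit(1) // matches the container's exitCode 1 above
	}
	fmt.Println("default network ready:", indicator)
}
```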
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:02Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.724021 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ef368af019eb63f3b78fa8ac0205e0051494a013e5bd67e67f1c726d009d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a1fc13ff2c90eccfa2771cae750083e1785c63404c15e6cfef790e068315064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-fffvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:02Z is after 2025-08-24T17:21:41Z" Sep 29 
09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.725818 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.725922 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.725936 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.725982 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.725999 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:02Z","lastTransitionTime":"2025-09-29T09:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.738557 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f847f2e6-9666-468d-9b76-b4ca63c50a60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0bcc99ca7211df2cf767f8b7236698683d7ffd3eb5b813a1aefb3051f892e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a610727eb6690419f27ca63ee8e956e15bf4aa03fe80944201e5e5f1071ceb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a610727eb6690419f27ca63ee8e956e15bf4aa03fe80944201e5e5f1071ceb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:02Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.752468 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7dc287a2d0423b7ecc7e79a74931504604984d0fe25bef40c3e618c892ead53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 
09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:02Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.766881 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:02Z is after 2025-08-24T17:21:41Z"
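By this point the journal has logged the identical webhook failure for roughly a dozen pods, one entry per status patch the kubelet attempts. When triaging a capture like this, it can help to tally which pods are blocked and confirm they all share one certificate deadline. A small, hypothetical Go helper for that (not part of any OpenShift tooling; the kubelet.log path is an assumption for a saved copy of this journal):

```go
// Scan a saved journal for "Failed to update status for pod" entries,
// count them per pod, and pull out the cert deadline from each line.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	podRe := regexp.MustCompile(`"Failed to update status for pod" pod="([^"]+)"`)
	deadlineRe := regexp.MustCompile(`is after ([0-9T:Z-]+)"`)

	f, err := os.Open("kubelet.log") // hypothetical path to the saved journal
	if err != nil {
		panic(err)
	}
	defer f.Close()

	counts := map[string]int{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 16*1024*1024) // journal lines are huge
	for sc.Scan() {
		line := sc.Text()
		if m := podRe.FindStringSubmatch(line); m != nil {
			counts[m[1]]++
			if d := deadlineRe.FindStringSubmatch(line); d != nil {
				fmt.Printf("%s blocked, cert expired %s\n", m[1], d[1])
			}
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
	fmt.Println("distinct pods affected:", len(counts))
}
```

In this capture every extracted deadline is the same 2025-08-24T17:21:41Z, which points at a single expired serving certificate rather than per-pod problems.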
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.926299 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:39:02 crc kubenswrapper[4991]: E0929 09:39:02.926489 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:39:02 crc kubenswrapper[4991]: E0929 09:39:02.926574 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.931788 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.931831 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.931843 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.931859 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:02 crc kubenswrapper[4991]: I0929 09:39:02.931871 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:02Z","lastTransitionTime":"2025-09-29T09:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.035540 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.035607 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.035628 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.035659 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.035680 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:03Z","lastTransitionTime":"2025-09-29T09:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.139321 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.139407 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.139430 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.139457 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.139481 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:03Z","lastTransitionTime":"2025-09-29T09:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.242537 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.242596 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.242611 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.242635 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.242648 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:03Z","lastTransitionTime":"2025-09-29T09:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.345926 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.346047 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.346067 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.346092 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.346111 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:03Z","lastTransitionTime":"2025-09-29T09:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.450025 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.450126 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.450141 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.450172 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.450191 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:03Z","lastTransitionTime":"2025-09-29T09:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.464037 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovnkube-controller/3.log" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.465323 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovnkube-controller/2.log" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.470309 4991 generic.go:334] "Generic (PLEG): container finished" podID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerID="ce2f8d15feab9327b16d994783fdfafd8a8eb66e985bcf1aa1306160b78390fa" exitCode=1 Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.470359 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerDied","Data":"ce2f8d15feab9327b16d994783fdfafd8a8eb66e985bcf1aa1306160b78390fa"} Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.470407 4991 scope.go:117] "RemoveContainer" containerID="55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.471871 4991 scope.go:117] "RemoveContainer" containerID="ce2f8d15feab9327b16d994783fdfafd8a8eb66e985bcf1aa1306160b78390fa" Sep 29 09:39:03 crc kubenswrapper[4991]: E0929 09:39:03.472311 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.490692 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:03Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.508143 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:03Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.528526 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:03Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.545623 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:03Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.553098 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.553173 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.553194 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.553256 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.553271 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:03Z","lastTransitionTime":"2025-09-29T09:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.560663 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48c35818-43cb-4bbf-bf05-37fd375d0d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7m5sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:03Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.574093 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:03Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.588237 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:03Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.601156 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:03Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.625133 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2f8d15feab9327b16d994783fdfafd8a8eb66e985bcf1aa1306160b78390fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c3edef24744c0bc46b98123641a7a8c3f8bc5fbccabd994b86326fd3cc423f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:31Z\\\",\\\"message\\\":\\\"ormers/externalversions/factory.go:140\\\\nI0929 09:38:31.068141 6638 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0929 09:38:31.068617 6638 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0929 09:38:31.068641 6638 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0929 09:38:31.068667 6638 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0929 09:38:31.068674 6638 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0929 09:38:31.068690 6638 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0929 09:38:31.068700 6638 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0929 09:38:31.068729 6638 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0929 09:38:31.068739 6638 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:38:31.068988 6638 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0929 09:38:31.069008 6638 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0929 09:38:31.069047 6638 factory.go:656] Stopping watch factory\\\\nI0929 09:38:31.069061 6638 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0929 09:38:31.069075 6638 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2f8d15feab9327b16d994783fdfafd8a8eb66e985bcf1aa1306160b78390fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:39:02Z\\\",\\\"message\\\":\\\"\\\\nI0929 09:39:02.820637 7012 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0929 09:39:02.820640 7012 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0929 09:39:02.820739 7012 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0929 09:39:02.820782 7012 handler.go:208] Removed *v1.Node event handler 7\\\\nI0929 09:39:02.820796 7012 handler.go:208] Removed *v1.Node event handler 2\\\\nI0929 09:39:02.820873 7012 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0929 09:39:02.821327 7012 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0929 09:39:02.821352 7012 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0929 09:39:02.821360 7012 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0929 09:39:02.821384 7012 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0929 09:39:02.821390 7012 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:39:02.821431 7012 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0929 09:39:02.821444 7012 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0929 09:39:02.821433 7012 factory.go:656] Stopping watch factory\\\\nI0929 09:39:02.821455 7012 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:03Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.640639 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:03Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.656134 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b598d670e53dad1dc4fbce312b96e5b3135101a1dd6b1b7440d8e70da068f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:53Z\\\",\\\"message\\\":\\\"2025-09-29T09:38:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_30aa0ad5-1f4f-4c88-8015-e08f481a11c7\\\\n2025-09-29T09:38:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30aa0ad5-1f4f-4c88-8015-e08f481a11c7 to /host/opt/cni/bin/\\\\n2025-09-29T09:38:08Z [verbose] multus-daemon started\\\\n2025-09-29T09:38:08Z [verbose] Readiness Indicator file check\\\\n2025-09-29T09:38:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:03Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.656413 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.656435 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.656446 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.656462 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.656474 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:03Z","lastTransitionTime":"2025-09-29T09:39:03Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.666725 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:03Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.677333 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ef368af019eb63f3b78fa8ac0205e0051494a013e5bd67e67f1c726d009d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a1fc13ff2c90eccfa2771cae750083e1785c63404c15e6cfef790e068315064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-fffvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:03Z is after 2025-08-24T17:21:41Z" Sep 29 
09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.691609 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f847f2e6-9666-468d-9b76-b4ca63c50a60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0bcc99ca7211df2cf767f8b7236698683d7ffd3eb5b813a1aefb3051f892e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a610727eb6690419f27ca63ee8e956e15bf4aa03fe80944201e5e5f1071ceb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a610727eb6690419f27ca63ee8e956e15bf4aa03fe80944201e5e5f1071ceb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:03Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.708172 4991 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7dc287a2d0423b7ecc7e79a74931504604984d0fe25bef40c3e618c892ea
d53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:03Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.720221 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:03Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.731671 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ab495a-4226-431c-b5f6-5107ba0ae2cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c904d6235314289c0468f23e9e99098bf3318797b1a1ff934e481019343a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7356b77ec51f4abe19914e817796ea42fc29176d4b9ba9abd68aa7287f53578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df4f33084636102a5acdcfbc239fedbbb1ac96d58b5b4f3bd4dd74ebad6e3414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:03Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.742612 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:03Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.758503 4991 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.758532 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.758544 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.758559 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.758570 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:03Z","lastTransitionTime":"2025-09-29T09:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.861333 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.861402 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.861421 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.861448 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.861469 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:03Z","lastTransitionTime":"2025-09-29T09:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.925774 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:39:03 crc kubenswrapper[4991]: E0929 09:39:03.925929 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.963587 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.963636 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.963649 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.963674 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:03 crc kubenswrapper[4991]: I0929 09:39:03.963686 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:03Z","lastTransitionTime":"2025-09-29T09:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.066778 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.066824 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.066833 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.066849 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.066859 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:04Z","lastTransitionTime":"2025-09-29T09:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.169449 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.169490 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.169505 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.169522 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.169536 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:04Z","lastTransitionTime":"2025-09-29T09:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.271404 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.271443 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.271454 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.271470 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.271480 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:04Z","lastTransitionTime":"2025-09-29T09:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.374997 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.375078 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.375098 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.375120 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.375135 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:04Z","lastTransitionTime":"2025-09-29T09:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.476221 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovnkube-controller/3.log" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.476897 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.476923 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.476935 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.476978 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.476992 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:04Z","lastTransitionTime":"2025-09-29T09:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.479794 4991 scope.go:117] "RemoveContainer" containerID="ce2f8d15feab9327b16d994783fdfafd8a8eb66e985bcf1aa1306160b78390fa" Sep 29 09:39:04 crc kubenswrapper[4991]: E0929 09:39:04.479930 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.497770 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.514661 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.527691 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.545491 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.560121 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.574849 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.579355 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.579414 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.579428 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.579448 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.579461 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:04Z","lastTransitionTime":"2025-09-29T09:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.587428 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.606537 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2f8d15feab9327b16d994783fdfafd8a8eb66e985bcf1aa1306160b78390fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2f8d15feab9327b16d994783fdfafd8a8eb66e985bcf1aa1306160b78390fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:39:02Z\\\",\\\"message\\\":\\\"\\\\nI0929 09:39:02.820637 7012 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0929 09:39:02.820640 7012 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0929 09:39:02.820739 7012 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0929 09:39:02.820782 7012 handler.go:208] Removed *v1.Node event handler 7\\\\nI0929 09:39:02.820796 7012 handler.go:208] Removed *v1.Node event handler 2\\\\nI0929 09:39:02.820873 7012 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0929 09:39:02.821327 7012 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0929 09:39:02.821352 7012 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0929 09:39:02.821360 7012 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0929 09:39:02.821384 7012 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0929 09:39:02.821390 7012 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:39:02.821431 7012 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0929 09:39:02.821444 7012 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0929 09:39:02.821433 7012 factory.go:656] Stopping watch factory\\\\nI0929 09:39:02.821455 7012 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:39:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.619409 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48c35818-43cb-4bbf-bf05-37fd375d0d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7m5sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.634919 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.647988 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f847f2e6-9666-468d-9b76-b4ca63c50a60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0bcc99ca7211df2cf767f8b7236698683d7ffd3eb5b813a1aefb3051f892e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a610727eb6690419f27ca63ee8e956e15bf4aa03fe80944201e5e5f1071ceb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a610727eb6690419f27ca63ee8e956e15bf4aa03fe80944201e5e5f1071ceb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.662548 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7dc287a2d0423b7ecc7e79a74931504604984d0fe25bef40c3e618c892ead53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.677055 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.681280 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.681322 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.681333 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.681351 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.681361 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:04Z","lastTransitionTime":"2025-09-29T09:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.692352 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ab495a-4226-431c-b5f6-5107ba0ae2cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c904d6235314289c0468f23e9e99098bf3318797b1a1ff934e481019343a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7356b77ec51f4abe19914e817796ea42fc29176d4b9ba9abd68aa7287f53578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df4f33084636102a5acdcfbc239fedbbb1ac96d58b5b4f3bd4dd74ebad6e3414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.706027 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.718349 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b598d670e53dad1dc4fbce312b96e5b3135101a1dd6b1b7440d8e70da068f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:53Z\\\",\\\"message\\\":\\\"2025-09-29T09:38:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_30aa0ad5-1f4f-4c88-8015-e08f481a11c7\\\\n2025-09-29T09:38:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30aa0ad5-1f4f-4c88-8015-e08f481a11c7 to /host/opt/cni/bin/\\\\n2025-09-29T09:38:08Z [verbose] multus-daemon started\\\\n2025-09-29T09:38:08Z [verbose] Readiness Indicator file check\\\\n2025-09-29T09:38:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.731278 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.741539 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ef368af019eb63f3b78fa8ac0205e0051494a013e5bd67e67f1c726d009d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a1fc13ff2c90eccfa2771cae750083e1785c63404c15e6cfef790e068315064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-fffvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 
09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.784066 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.784147 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.784159 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.784179 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.784192 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:04Z","lastTransitionTime":"2025-09-29T09:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.887865 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.887914 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.887927 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.887948 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.887983 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:04Z","lastTransitionTime":"2025-09-29T09:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.925313 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:39:04 crc kubenswrapper[4991]: E0929 09:39:04.925474 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.925527 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.925619 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:39:04 crc kubenswrapper[4991]: E0929 09:39:04.925694 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:39:04 crc kubenswrapper[4991]: E0929 09:39:04.926297 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.939649 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.956864 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.970103 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.990809 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.990863 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.990872 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.990891 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:04 crc kubenswrapper[4991]: I0929 09:39:04.990901 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:04Z","lastTransitionTime":"2025-09-29T09:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.000520 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2f8d15feab9327b16d994783fdfafd8a8eb66e985bcf1aa1306160b78390fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2f8d15feab9327b16d994783fdfafd8a8eb66e985bcf1aa1306160b78390fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:39:02Z\\\",\\\"message\\\":\\\"\\\\nI0929 09:39:02.820637 7012 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0929 09:39:02.820640 7012 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0929 09:39:02.820739 7012 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0929 09:39:02.820782 7012 handler.go:208] Removed *v1.Node event handler 7\\\\nI0929 09:39:02.820796 7012 handler.go:208] Removed *v1.Node event handler 2\\\\nI0929 09:39:02.820873 7012 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0929 09:39:02.821327 7012 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0929 09:39:02.821352 7012 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0929 09:39:02.821360 7012 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0929 09:39:02.821384 7012 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0929 09:39:02.821390 7012 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:39:02.821431 7012 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0929 09:39:02.821444 7012 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0929 09:39:02.821433 7012 factory.go:656] Stopping watch factory\\\\nI0929 09:39:02.821455 7012 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:39:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=ovnkube-controller pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:04Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.015580 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48c35818-43cb-4bbf-bf05-37fd375d0d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7m5sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:05Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.031376 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:05Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.045764 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7dc287a2d0423b7ecc7e79a74931504604984d0fe25bef40c3e618c892ead53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:05Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.060363 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:05Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.072520 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ab495a-4226-431c-b5f6-5107ba0ae2cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c904d6235314289c0468f23e9e99098bf3318797b1a1ff934e481019343a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7356b77ec51f4abe19914e817796ea42fc29176d4b9ba9abd68aa7287f53578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df4f33084636102a5acdcfbc239fedbbb1ac96d58b5b4f3bd4dd74ebad6e3414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:05Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.084078 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:05Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.093204 4991 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.093247 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.093260 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.093282 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.093297 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:05Z","lastTransitionTime":"2025-09-29T09:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.101711 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b598d670e53dad1dc4fbce312b96e5b3135101a1dd6b1b7440d8e70da068f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:53Z\\\",\\\"message\\\":\\\"2025-09-29T09:38:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_30aa0ad5-1f4f-4c88-8015-e08f481a11c7\\\\n2025-09-29T09:38:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30aa0ad5-1f4f-4c88-8015-e08f481a11c7 to /host/opt/cni/bin/\\\\n2025-09-29T09:38:08Z [verbose] multus-daemon started\\\\n2025-09-29T09:38:08Z [verbose] Readiness Indicator file check\\\\n2025-09-29T09:38:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:05Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.113735 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:05Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.129270 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ef368af019eb63f3b78fa8ac0205e0051494a013e5bd67e67f1c726d009d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a1fc13ff2c90eccfa2771cae750083e1785c63404c15e6cfef790e068315064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-fffvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:05Z is after 2025-08-24T17:21:41Z" Sep 29 
09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.145755 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f847f2e6-9666-468d-9b76-b4ca63c50a60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0bcc99ca7211df2cf767f8b7236698683d7ffd3eb5b813a1aefb3051f892e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a610727eb6690419f27ca63ee8e956e15bf4aa03fe80944201e5e5f1071ceb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a610727eb6690419f27ca63ee8e956e15bf4aa03fe80944201e5e5f1071ceb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:05Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.160166 4991 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:05Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.177286 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:05Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.196777 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.196823 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.196834 4991 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.196850 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.196893 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:05Z","lastTransitionTime":"2025-09-29T09:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.197816 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:05Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.212314 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:05Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.300108 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.300182 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.300194 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.300214 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.300226 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:05Z","lastTransitionTime":"2025-09-29T09:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.402771 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.403159 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.403238 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.403305 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.403368 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:05Z","lastTransitionTime":"2025-09-29T09:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.505943 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.506055 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.506072 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.506099 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.506117 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:05Z","lastTransitionTime":"2025-09-29T09:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.608426 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.608669 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.608736 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.608800 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.608857 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:05Z","lastTransitionTime":"2025-09-29T09:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.711654 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.711697 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.711706 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.711725 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.711738 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:05Z","lastTransitionTime":"2025-09-29T09:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.814488 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.814547 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.814560 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.814579 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.814592 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:05Z","lastTransitionTime":"2025-09-29T09:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.916884 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.916932 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.916966 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.916985 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.916998 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:05Z","lastTransitionTime":"2025-09-29T09:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:05 crc kubenswrapper[4991]: I0929 09:39:05.926095 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:39:05 crc kubenswrapper[4991]: E0929 09:39:05.926226 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:39:06 crc kubenswrapper[4991]: I0929 09:39:06.020083 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:06 crc kubenswrapper[4991]: I0929 09:39:06.020202 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:06 crc kubenswrapper[4991]: I0929 09:39:06.020217 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:06 crc kubenswrapper[4991]: I0929 09:39:06.020234 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:06 crc kubenswrapper[4991]: I0929 09:39:06.020248 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:06Z","lastTransitionTime":"2025-09-29T09:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:06 crc kubenswrapper[4991]: I0929 09:39:06.123216 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:06 crc kubenswrapper[4991]: I0929 09:39:06.123272 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:06 crc kubenswrapper[4991]: I0929 09:39:06.123287 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:06 crc kubenswrapper[4991]: I0929 09:39:06.123307 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:06 crc kubenswrapper[4991]: I0929 09:39:06.123321 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:06Z","lastTransitionTime":"2025-09-29T09:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... node-status cycle repeats at 09:39:06.226, 09:39:06.329, 09:39:06.432, 09:39:06.536, 09:39:06.638 and 09:39:06.742 ...]
[... node-status cycle repeats at 09:39:06.846 ...]
Sep 29 09:39:06 crc kubenswrapper[4991]: I0929 09:39:06.926360 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:39:06 crc kubenswrapper[4991]: I0929 09:39:06.926518 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:39:06 crc kubenswrapper[4991]: E0929 09:39:06.926574 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:39:06 crc kubenswrapper[4991]: I0929 09:39:06.926598 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:39:06 crc kubenswrapper[4991]: E0929 09:39:06.926734 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:39:06 crc kubenswrapper[4991]: E0929 09:39:06.926939 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[... node-status cycle repeats at 09:39:06.948 and 09:39:07.052 ...]
[... node-status cycle repeats at 09:39:07.161, 09:39:07.264, 09:39:07.368, 09:39:07.471, 09:39:07.575 and 09:39:07.679 ...]
[... node-status cycle repeats at 09:39:07.782 and 09:39:07.886 ...]
Sep 29 09:39:07 crc kubenswrapper[4991]: I0929 09:39:07.926137 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:39:07 crc kubenswrapper[4991]: E0929 09:39:07.926610 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
[... node-status cycle repeats at 09:39:07.989 and 09:39:08.093 ...]
[... node-status cycle repeats at 09:39:08.196, 09:39:08.299, 09:39:08.401, 09:39:08.504, 09:39:08.608 and 09:39:08.711 ...]
Sep 29 09:39:08 crc kubenswrapper[4991]: I0929 09:39:08.741912 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:39:08 crc kubenswrapper[4991]: E0929 09:39:08.742091 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:12.742065328 +0000 UTC m=+148.597993356 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
[... node-status cycle repeats at 09:39:08.813 ...]
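The unmount fails because the hostpath-provisioner CSI driver has not re-registered with the kubelet yet, and the retry is pushed out by durationBeforeRetry 1m4s. 64 s is consistent with an exponential backoff that starts at 500 ms and doubles on each failure (500 ms x 2^7 = 64 s), so this would be roughly the eighth consecutive failure; the 500 ms initial delay, factor 2 and ~2m2s cap are assumptions taken from upstream kubelet volume-manager defaults, not read from this system. A sketch of that growth:

// backoff.go - exponential backoff growth consistent with the
// durationBeforeRetry values in the log (1m4s = 500ms * 2^7).
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond            // assumed initial delay
	maxDelay := 2*time.Minute + 2*time.Second // assumed cap
	for failure := 1; failure <= 10; failure++ {
		// failure 8 prints 1m4s, matching the log above
		fmt.Printf("after failure %2d: durationBeforeRetry %v\n", failure, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}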
Has your network provider started?"}
Sep 29 09:39:08 crc kubenswrapper[4991]: I0929 09:39:08.843383 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:39:08 crc kubenswrapper[4991]: I0929 09:39:08.843436 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:39:08 crc kubenswrapper[4991]: I0929 09:39:08.843474 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:39:08 crc kubenswrapper[4991]: I0929 09:39:08.843497 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:39:08 crc kubenswrapper[4991]: E0929 09:39:08.843620 4991 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Sep 29 09:39:08 crc kubenswrapper[4991]: E0929 09:39:08.843692 4991 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Sep 29 09:39:08 crc kubenswrapper[4991]: E0929 09:39:08.843710 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Sep 29 09:39:08 crc kubenswrapper[4991]: E0929 09:39:08.843742 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Sep 29 09:39:08 crc kubenswrapper[4991]: E0929 09:39:08.843756 4991 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Sep 29 09:39:08 crc kubenswrapper[4991]: E0929 09:39:08.843779 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-29 09:40:12.843753608 +0000 UTC m=+148.699681676 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Sep 29 09:39:08 crc kubenswrapper[4991]: E0929 09:39:08.843627 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Sep 29 09:39:08 crc kubenswrapper[4991]: E0929 09:39:08.843806 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-29 09:40:12.843790739 +0000 UTC m=+148.699718777 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Sep 29 09:39:08 crc kubenswrapper[4991]: E0929 09:39:08.843835 4991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Sep 29 09:39:08 crc kubenswrapper[4991]: E0929 09:39:08.843864 4991 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Sep 29 09:39:08 crc kubenswrapper[4991]: E0929 09:39:08.843937 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-29 09:40:12.843911653 +0000 UTC m=+148.699839711 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Sep 29 09:39:08 crc kubenswrapper[4991]: E0929 09:39:08.844058 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-29 09:40:12.844034346 +0000 UTC m=+148.699962414 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
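An "object ... not registered" mount error typically means the kubelet's internal configmap/secret managers have not yet re-registered these objects for the pod (common right after a kubelet restart, before its API caches resync); it does not by itself prove the objects are missing from the API server. A diagnostic sketch that checks whether two of the named objects exist, using client-go and assuming a reachable API server and a kubeconfig path in $KUBECONFIG:

// cmcheck.go - check whether objects the kubelet reports as
// "not registered" actually exist in the API (err is nil if found).
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	// names taken from the mount errors above
	_, err = cs.CoreV1().ConfigMaps("openshift-network-console").
		Get(ctx, "networking-console-plugin", metav1.GetOptions{})
	fmt.Println("configmap networking-console-plugin:", err)
	_, err = cs.CoreV1().Secrets("openshift-network-console").
		Get(ctx, "networking-console-plugin-cert", metav1.GetOptions{})
	fmt.Println("secret networking-console-plugin-cert:", err)
}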
[... node-status cycle repeats at 09:39:08.917 ...]
Sep 29 09:39:08 crc kubenswrapper[4991]: I0929 09:39:08.926216 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:39:08 crc kubenswrapper[4991]: I0929 09:39:08.926272 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:39:08 crc kubenswrapper[4991]: I0929 09:39:08.926235 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:39:08 crc kubenswrapper[4991]: E0929 09:39:08.926370 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:39:08 crc kubenswrapper[4991]: E0929 09:39:08.926463 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:39:08 crc kubenswrapper[4991]: E0929 09:39:08.926639 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... node-status cycle repeats at 09:39:09.021 and 09:39:09.124 ...]
[... node-status cycle repeats at 09:39:09.227, 09:39:09.331, 09:39:09.435, 09:39:09.538, 09:39:09.642 and 09:39:09.745 ...]
[... node-status cycle repeats at 09:39:09.848 ...]
Sep 29 09:39:09 crc kubenswrapper[4991]: I0929 09:39:09.925586 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:39:09 crc kubenswrapper[4991]: E0929 09:39:09.925777 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
[... node-status cycle repeats at 09:39:09.952, 09:39:10.055, 09:39:10.158 and 09:39:10.261 ...]
Has your network provider started?"} Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.365058 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.365441 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.365457 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.365481 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.365499 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:10Z","lastTransitionTime":"2025-09-29T09:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.468581 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.468641 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.468663 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.468684 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.468698 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:10Z","lastTransitionTime":"2025-09-29T09:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.572543 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.572600 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.572613 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.572633 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.572651 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:10Z","lastTransitionTime":"2025-09-29T09:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.676219 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.676286 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.676308 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.676338 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.676362 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:10Z","lastTransitionTime":"2025-09-29T09:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.779865 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.780021 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.780051 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.780091 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.780119 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:10Z","lastTransitionTime":"2025-09-29T09:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.883436 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.883497 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.883518 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.883540 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.883557 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:10Z","lastTransitionTime":"2025-09-29T09:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.925595 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.925682 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:39:10 crc kubenswrapper[4991]: E0929 09:39:10.926063 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.926294 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:39:10 crc kubenswrapper[4991]: E0929 09:39:10.926282 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:39:10 crc kubenswrapper[4991]: E0929 09:39:10.926371 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.988324 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.988398 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.988421 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.988454 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:10 crc kubenswrapper[4991]: I0929 09:39:10.988476 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:10Z","lastTransitionTime":"2025-09-29T09:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.092451 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.092494 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.092502 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.092518 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.092528 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:11Z","lastTransitionTime":"2025-09-29T09:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.194752 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.194798 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.194811 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.194828 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.194839 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:11Z","lastTransitionTime":"2025-09-29T09:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.297158 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.297246 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.297260 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.297280 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.297292 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:11Z","lastTransitionTime":"2025-09-29T09:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.400335 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.400408 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.400419 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.400437 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.400448 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:11Z","lastTransitionTime":"2025-09-29T09:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.503113 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.503158 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.503170 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.503188 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.503201 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:11Z","lastTransitionTime":"2025-09-29T09:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.606050 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.606108 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.606127 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.606153 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.606172 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:11Z","lastTransitionTime":"2025-09-29T09:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.709001 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.709054 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.709068 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.709089 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.709103 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:11Z","lastTransitionTime":"2025-09-29T09:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.811434 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.811498 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.811510 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.811528 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.811540 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:11Z","lastTransitionTime":"2025-09-29T09:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.913807 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.913867 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.913880 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.913904 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.913922 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:11Z","lastTransitionTime":"2025-09-29T09:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:11 crc kubenswrapper[4991]: I0929 09:39:11.925308 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:39:11 crc kubenswrapper[4991]: E0929 09:39:11.925472 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.016846 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.016892 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.016905 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.016922 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.016934 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:12Z","lastTransitionTime":"2025-09-29T09:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.073519 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.073603 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.073630 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.073660 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.073684 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:12Z","lastTransitionTime":"2025-09-29T09:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:12 crc kubenswrapper[4991]: E0929 09:39:12.096049 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:12Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.100854 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.100899 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.100915 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.100936 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.100982 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:12Z","lastTransitionTime":"2025-09-29T09:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:12 crc kubenswrapper[4991]: E0929 09:39:12.117083 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:12Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.121690 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.121729 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
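
The patch payload itself is well-formed; it is rejected because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z, more than a month before the log's current time of 2025-09-29T09:39:12Z. A minimal sketch of an analogous check in Python, assuming the endpoint is reachable from the node; host and port come from the error above, while trusting the cluster CA (and its path) is an assumption:

    #!/usr/bin/env python3
    # Sketch: attempt the same TLS handshake the kubelet's webhook client makes
    # and report why certificate verification fails. With the signing CA loaded,
    # the expected failure reason here is "certificate has expired".
    import socket
    import ssl

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the kubelet error

    ctx = ssl.create_default_context()
    ctx.check_hostname = False  # only the validity dates matter for this check
    # Assumption: a CA bundle that signed the webhook cert; without it the
    # failure may read "unable to get local issuer certificate" instead.
    # ctx.load_verify_locations("/path/to/cluster-ca.crt")

    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                print("handshake succeeded; serving certificate is currently valid")
    except ssl.SSLCertVerificationError as err:
        # verify_message carries the OpenSSL reason, e.g. "certificate has expired"
        print(f"TLS verification failed: {err.verify_message}")

A failure here mirrors the Go-side x509 error in the log and suggests the fix lies in rotating the webhook's serving certificate, not in the node's network configuration.
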
event="NodeHasNoDiskPressure" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.121738 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.121756 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.121766 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:12Z","lastTransitionTime":"2025-09-29T09:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:12 crc kubenswrapper[4991]: E0929 09:39:12.137420 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:12Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.141647 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.141688 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.141699 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.141716 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.141727 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:12Z","lastTransitionTime":"2025-09-29T09:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:12 crc kubenswrapper[4991]: E0929 09:39:12.154137 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:12Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.158530 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.158578 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.158589 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.158607 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.158618 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:12Z","lastTransitionTime":"2025-09-29T09:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:12 crc kubenswrapper[4991]: E0929 09:39:12.172076 4991 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-29T09:39:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d34e016c-ced3-4c1a-ac72-75b59b35ea37\\\",\\\"systemUUID\\\":\\\"dd0721c8-e1db-4918-837b-38acccbdf1e0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:12Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:12 crc kubenswrapper[4991]: E0929 09:39:12.172186 4991 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.173744 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.173794 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.173808 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.173827 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.173841 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:12Z","lastTransitionTime":"2025-09-29T09:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.276796 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.276868 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.276886 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.276914 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.276933 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:12Z","lastTransitionTime":"2025-09-29T09:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.380654 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.380705 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.380723 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.380748 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.380763 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:12Z","lastTransitionTime":"2025-09-29T09:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.484063 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.484130 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.484168 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.484206 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.484229 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:12Z","lastTransitionTime":"2025-09-29T09:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.588696 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.588775 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.588799 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.588829 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.588853 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:12Z","lastTransitionTime":"2025-09-29T09:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.691791 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.691864 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.691884 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.691911 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.691929 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:12Z","lastTransitionTime":"2025-09-29T09:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.795173 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.795228 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.795246 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.795273 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.795291 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:12Z","lastTransitionTime":"2025-09-29T09:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.898450 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.898545 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.898573 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.898607 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.898632 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:12Z","lastTransitionTime":"2025-09-29T09:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.925627 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.925672 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:39:12 crc kubenswrapper[4991]: I0929 09:39:12.925638 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:39:12 crc kubenswrapper[4991]: E0929 09:39:12.925782 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:39:12 crc kubenswrapper[4991]: E0929 09:39:12.925926 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:39:12 crc kubenswrapper[4991]: E0929 09:39:12.926153 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.001393 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.001460 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.001480 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.001506 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.001523 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:13Z","lastTransitionTime":"2025-09-29T09:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.104712 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.104749 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.104766 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.104784 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.104800 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:13Z","lastTransitionTime":"2025-09-29T09:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.207607 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.207811 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.207832 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.207854 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.207869 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:13Z","lastTransitionTime":"2025-09-29T09:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.311342 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.311388 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.311398 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.311416 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.311428 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:13Z","lastTransitionTime":"2025-09-29T09:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.413799 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.413850 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.413868 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.413886 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.413898 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:13Z","lastTransitionTime":"2025-09-29T09:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.516121 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.516175 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.516183 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.516197 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.516222 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:13Z","lastTransitionTime":"2025-09-29T09:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.618801 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.618833 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.618842 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.618857 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.618866 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:13Z","lastTransitionTime":"2025-09-29T09:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.721708 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.721771 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.721810 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.721828 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.721840 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:13Z","lastTransitionTime":"2025-09-29T09:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.824085 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.824126 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.824139 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.824155 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.824167 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:13Z","lastTransitionTime":"2025-09-29T09:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.925496 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:39:13 crc kubenswrapper[4991]: E0929 09:39:13.925684 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.926759 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.926813 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.926827 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.926845 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:39:13 crc kubenswrapper[4991]: I0929 09:39:13.926856 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:13Z","lastTransitionTime":"2025-09-29T09:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.029659 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.029717 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.029730 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.029750 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.029763 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:14Z","lastTransitionTime":"2025-09-29T09:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.133050 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.133135 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.133154 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.133180 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.133200 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:14Z","lastTransitionTime":"2025-09-29T09:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.235721 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.235771 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.235782 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.235798 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.235809 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:14Z","lastTransitionTime":"2025-09-29T09:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.338873 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.338922 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.338935 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.338974 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.338987 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:14Z","lastTransitionTime":"2025-09-29T09:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.442055 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.442112 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.442124 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.442144 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.442155 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:14Z","lastTransitionTime":"2025-09-29T09:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.544708 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.544752 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.544761 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.544779 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.544791 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:14Z","lastTransitionTime":"2025-09-29T09:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.647682 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.647734 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.647744 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.647766 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.647777 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:14Z","lastTransitionTime":"2025-09-29T09:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.750252 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.750307 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.750323 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.750344 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.750355 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:14Z","lastTransitionTime":"2025-09-29T09:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.852376 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.852415 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.852425 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.852441 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.852451 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:14Z","lastTransitionTime":"2025-09-29T09:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.925567 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.925647 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.925576 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:39:14 crc kubenswrapper[4991]: E0929 09:39:14.925793 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:39:14 crc kubenswrapper[4991]: E0929 09:39:14.925721 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:39:14 crc kubenswrapper[4991]: E0929 09:39:14.925868 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.939821 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"390b2cec-f588-4b09-8334-5d1610f862db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f7ed3f765c05f4e88314272343cd7bdbc843ede405774e3de41b3e25b8ea554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81ea3dda8bdd5377655036c6c9b4b256f0e437e25138800b431567814bbb9e16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://520f08cfb52db2f7db6911f649d84ae740aef592e18b7d54dd40882c702c386d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7dc287a2d0423b7ecc7e79a74931504604984d0fe25bef40c3e618c892ead53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f246532f4ad4b9b5a094e7161f2b4460336143119ed3de2058feed5fd6fcf684\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW0929 09:38:04.740698 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0929 09:38:04.740828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0929 09:38:04.745521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2332092656/tls.crt::/tmp/serving-cert-2332092656/tls.key\\\\\\\"\\\\nI0929 09:38:05.221580 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0929 09:38:05.225716 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0929 09:38:05.225743 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0929 09:38:05.225774 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0929 09:38:05.225781 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0929 09:38:05.237419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0929 09:38:05.237446 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0929 09:38:05.237460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0929 09:38:05.237476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0929 09:38:05.237480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0929 09:38:05.237484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0929 09:38:05.237488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0929 09:38:05.240442 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84dcc432a7c252e2d7d7bced256e5262f68c8731cff65d7f9b539872978ec9d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8ed4e4e3791ae4770bc3c1c53aafbdea84a23a12684ce952d58fc3d11cdad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.951456 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8591ac79-5204-4f3b-8bbd-528ca57ae690\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://621f6e8fc9bc063c2dd897dd7954c06cc845ef6225888d4f8ddbd90dcd6fd26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a32dd66f00d9c920639ae525c992ead968eb9207b8c2ae15c278c47502c013ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c339bea97b06084255590be9dbeed6f6240dfe8c840a2ad344aa1340845faecb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4b3f6180dc21fb50e3f47311d11050ba93f0442e2a282e2442784cf0bdcbd00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.954885 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.954907 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.954916 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.954930 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.954939 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:14Z","lastTransitionTime":"2025-09-29T09:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.962940 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ab495a-4226-431c-b5f6-5107ba0ae2cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c904d6235314289c0468f23e9e99098bf3318797b1a1ff934e481019343a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7356b77ec51f4abe19914e817796ea42fc29176d4b9ba9abd68aa7287f53578b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df4f33084636102a5acdcfbc239fedbbb1ac96d58b5b4f3bd4dd74ebad6e3414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efcea6fcd65975cf119c29c5372c5d7be7136856a66b51c9d0696c7ac3b26d1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.974755 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88ab660a-6f01-4538-946f-38cdadd0b64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04e2ea7a2cfb4abf8fb3fddae2689e8ef7ac371c36c97b7308b1d7dbf8074829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgbqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:14 crc kubenswrapper[4991]: I0929 09:39:14.988477 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mm67g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93b598d670e53dad1dc4fbce312b96e5b3135101a1dd6b1b7440d8e70da068f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:38:53Z\\\",\\\"message\\\":\\\"2025-09-29T09:38:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_30aa0ad5-1f4f-4c88-8015-e08f481a11c7\\\\n2025-09-29T09:38:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30aa0ad5-1f4f-4c88-8015-e08f481a11c7 to /host/opt/cni/bin/\\\\n2025-09-29T09:38:08Z [verbose] multus-daemon started\\\\n2025-09-29T09:38:08Z [verbose] Readiness Indicator file check\\\\n2025-09-29T09:38:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxlzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mm67g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.002342 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-td7ll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"facb5d72-dda5-4cd9-8af8-168994431934\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c842453fbe2519aaf389c8f8eb50ec84dbaec29c48bd8cceff9ad525f15a17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5n2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-td7ll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:14Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.014409 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67ee38c4-dc1a-4fb0-ac5b-dab1e76b0731\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47ef368af019eb63f3b78fa8ac0205e0051494a013e5bd67e67f1c726d009d7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a1fc13ff2c90eccfa2771cae750083e1785c63404c15e6cfef790e068315064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6lv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-fffvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:15Z is after 2025-08-24T17:21:41Z" Sep 29 
09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.027352 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f847f2e6-9666-468d-9b76-b4ca63c50a60\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:37:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0bcc99ca7211df2cf767f8b7236698683d7ffd3eb5b813a1aefb3051f892e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a610727eb6690419f27ca63ee8e956e15bf4aa03fe80944201e5e5f1071ceb56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a610727eb6690419f27ca63ee8e956e15bf4aa03fe80944201e5e5f1071ceb56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:37:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:37:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:37:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:15Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.042869 4991 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:15Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.055918 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a46777b5e52d9e2fe8199d0ee5555035eeca4017ae81cfc1bdeb58fdba2fd670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd3ef7d8a7efc826c6aad53958e818b867c7d74394ce31bfc0e95f373d89087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:15Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.057530 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.057580 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.057593 4991 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.057612 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.057625 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:15Z","lastTransitionTime":"2025-09-29T09:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.069540 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mwncr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b1e80be-d74b-4948-8121-f1ee76bf415c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd29ee795a754a029cf36f7c4423a845fef45a6b578770586d6d1c6bf52a7311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ff0b27b8ed37dbcd0d7d8e22d1337f77504bbe036915aae635378f457408fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d474f68875db7781b8a1bfd744bd6ca643ec2e6f4caf8d2b66aad3408d89ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://389bc08118cfc48f66a5e2aca481ddae32d23f215e4b7386f4536d491079fcb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df7fe77baf75f4f8c5e0c46785c14637278b61e4f2b8496ba414a1ba1d1e2400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1687d34f112387e6ea411f5868dd655b3b3527510537543ef78881ffac2b2d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1557f6a9370063a7196dae437e03a9ca406cfeb2e1320fcc78a83c37e813136f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcxqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mwncr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:15Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.080372 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:15Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.090740 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8983398912fee71d7a6a66443b3499ba687784d9d0802c27afb773b4d14cede1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:15Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.100066 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:15Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.108206 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hmmqv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1dc85d59-1f68-4489-bfd3-2cf9bd80d417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8324eb18b5d7c07e5cbfa0fb412fff01331a39688e291c676282a8f8c32223d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzvn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hmmqv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:15Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.125409 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14c96fef-6218-4f25-8f81-f7adc934b0d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2f8d15feab9327b16d994783fdfafd8a8eb66e985bcf1aa1306160b78390fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2f8d15feab9327b16d994783fdfafd8a8eb66e985bcf1aa1306160b78390fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-29T09:39:02Z\\\",\\\"message\\\":\\\"\\\\nI0929 09:39:02.820637 7012 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0929 09:39:02.820640 7012 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0929 09:39:02.820739 7012 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0929 09:39:02.820782 7012 handler.go:208] Removed *v1.Node event handler 7\\\\nI0929 09:39:02.820796 7012 handler.go:208] Removed *v1.Node event handler 2\\\\nI0929 09:39:02.820873 7012 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0929 09:39:02.821327 7012 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0929 09:39:02.821352 7012 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0929 09:39:02.821360 7012 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0929 09:39:02.821384 7012 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0929 09:39:02.821390 7012 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0929 09:39:02.821431 7012 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0929 09:39:02.821444 7012 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0929 09:39:02.821433 7012 factory.go:656] Stopping watch factory\\\\nI0929 09:39:02.821455 7012 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-29T09:39:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-29T09:38:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-78tp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h4hm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:15Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.135502 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48c35818-43cb-4bbf-bf05-37fd375d0d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzvhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-29T09:38:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7m5sp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:15Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.147770 4991 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-29T09:38:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1d66d8c72d57d135813cc988fc4801a99e8af7c4e8f28fe88ab085bf79b001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-29T09:38:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-29T09:39:15Z is after 2025-08-24T17:21:41Z" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.160244 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.160294 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.160304 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.160320 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.160331 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:15Z","lastTransitionTime":"2025-09-29T09:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.262807 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.262867 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.262883 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.262906 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.262924 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:15Z","lastTransitionTime":"2025-09-29T09:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.365329 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.365387 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.365403 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.365424 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.365442 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:15Z","lastTransitionTime":"2025-09-29T09:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.468663 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.468702 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.468713 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.468727 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.468737 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:15Z","lastTransitionTime":"2025-09-29T09:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.571616 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.571667 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.571680 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.571699 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.571711 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:15Z","lastTransitionTime":"2025-09-29T09:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.674892 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.674965 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.674977 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.674998 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.675013 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:15Z","lastTransitionTime":"2025-09-29T09:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.777186 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.777242 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.777258 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.777282 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.777300 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:15Z","lastTransitionTime":"2025-09-29T09:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.880499 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.880544 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.880555 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.880571 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.880586 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:15Z","lastTransitionTime":"2025-09-29T09:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.926011 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:39:15 crc kubenswrapper[4991]: E0929 09:39:15.926106 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.984864 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.984920 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.984942 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.985006 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:15 crc kubenswrapper[4991]: I0929 09:39:15.985029 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:15Z","lastTransitionTime":"2025-09-29T09:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.087880 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.087925 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.087962 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.087987 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.088004 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:16Z","lastTransitionTime":"2025-09-29T09:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.191347 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.191419 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.191439 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.191466 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.191538 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:16Z","lastTransitionTime":"2025-09-29T09:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.294189 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.294294 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.294319 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.294382 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.294401 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:16Z","lastTransitionTime":"2025-09-29T09:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.397153 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.397187 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.397196 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.397209 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.397221 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:16Z","lastTransitionTime":"2025-09-29T09:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.499665 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.499713 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.499727 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.499747 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.499760 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:16Z","lastTransitionTime":"2025-09-29T09:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.603175 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.603241 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.603264 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.603294 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.603317 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:16Z","lastTransitionTime":"2025-09-29T09:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.706501 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.706542 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.706579 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.706598 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.706611 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:16Z","lastTransitionTime":"2025-09-29T09:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.810400 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.810476 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.810497 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.810525 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.810544 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:16Z","lastTransitionTime":"2025-09-29T09:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.913596 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.913642 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.913655 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.913672 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.913686 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:16Z","lastTransitionTime":"2025-09-29T09:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.925457 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:39:16 crc kubenswrapper[4991]: E0929 09:39:16.925638 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.925911 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:39:16 crc kubenswrapper[4991]: E0929 09:39:16.926068 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:39:16 crc kubenswrapper[4991]: I0929 09:39:16.926192 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:39:16 crc kubenswrapper[4991]: E0929 09:39:16.926373 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.016272 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.016326 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.016339 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.016358 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.016374 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:17Z","lastTransitionTime":"2025-09-29T09:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.119409 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.119454 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.119467 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.119485 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.119500 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:17Z","lastTransitionTime":"2025-09-29T09:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.223015 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.223091 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.223114 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.223144 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.223164 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:17Z","lastTransitionTime":"2025-09-29T09:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.326742 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.326801 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.326819 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.326844 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.326867 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:17Z","lastTransitionTime":"2025-09-29T09:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.430182 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.430238 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.430258 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.430283 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.430303 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:17Z","lastTransitionTime":"2025-09-29T09:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.539200 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.539258 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.539270 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.539289 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.539302 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:17Z","lastTransitionTime":"2025-09-29T09:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.643110 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.643165 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.643178 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.643202 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.643217 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:17Z","lastTransitionTime":"2025-09-29T09:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.746414 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.746472 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.746484 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.746504 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.746516 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:17Z","lastTransitionTime":"2025-09-29T09:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.849210 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.849256 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.849265 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.849282 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.849293 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:17Z","lastTransitionTime":"2025-09-29T09:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.925619 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:39:17 crc kubenswrapper[4991]: E0929 09:39:17.926597 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.927330 4991 scope.go:117] "RemoveContainer" containerID="ce2f8d15feab9327b16d994783fdfafd8a8eb66e985bcf1aa1306160b78390fa" Sep 29 09:39:17 crc kubenswrapper[4991]: E0929 09:39:17.927763 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.952645 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.953038 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.953126 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.953236 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:17 crc kubenswrapper[4991]: I0929 09:39:17.953309 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:17Z","lastTransitionTime":"2025-09-29T09:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.056863 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.056938 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.056987 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.057020 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.057042 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:18Z","lastTransitionTime":"2025-09-29T09:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.160105 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.160404 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.160524 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.160610 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.160969 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:18Z","lastTransitionTime":"2025-09-29T09:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.263557 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.263595 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.263606 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.263624 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.263635 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:18Z","lastTransitionTime":"2025-09-29T09:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.366128 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.366179 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.366199 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.366224 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.366237 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:18Z","lastTransitionTime":"2025-09-29T09:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.469408 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.469760 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.469913 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.470076 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.470167 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:18Z","lastTransitionTime":"2025-09-29T09:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.574841 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.574912 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.574993 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.575028 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.575052 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:18Z","lastTransitionTime":"2025-09-29T09:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.678239 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.678301 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.678318 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.678341 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.678358 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:18Z","lastTransitionTime":"2025-09-29T09:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.780528 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.780591 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.780616 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.780648 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.780671 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:18Z","lastTransitionTime":"2025-09-29T09:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.883765 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.883824 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.883838 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.883858 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.883873 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:18Z","lastTransitionTime":"2025-09-29T09:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.925890 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:39:18 crc kubenswrapper[4991]: E0929 09:39:18.926266 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.926357 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.926383 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:39:18 crc kubenswrapper[4991]: E0929 09:39:18.927005 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:39:18 crc kubenswrapper[4991]: E0929 09:39:18.927350 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.948852 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.988227 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.988287 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.988303 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.988327 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:18 crc kubenswrapper[4991]: I0929 09:39:18.988346 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:18Z","lastTransitionTime":"2025-09-29T09:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.091717 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.091794 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.091819 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.091852 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.091880 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:19Z","lastTransitionTime":"2025-09-29T09:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.195476 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.195530 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.195543 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.195560 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.195571 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:19Z","lastTransitionTime":"2025-09-29T09:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.298175 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.298269 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.298293 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.298323 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.298347 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:19Z","lastTransitionTime":"2025-09-29T09:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.401128 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.401179 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.401191 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.401210 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.401223 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:19Z","lastTransitionTime":"2025-09-29T09:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.504807 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.504854 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.504863 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.504877 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.504887 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:19Z","lastTransitionTime":"2025-09-29T09:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.608562 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.608634 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.608644 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.608662 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.608674 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:19Z","lastTransitionTime":"2025-09-29T09:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.712239 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.712297 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.712310 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.712326 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.712337 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:19Z","lastTransitionTime":"2025-09-29T09:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.815812 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.815892 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.815909 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.815934 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.815977 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:19Z","lastTransitionTime":"2025-09-29T09:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.918728 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.918793 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.918807 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.918831 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.918847 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:19Z","lastTransitionTime":"2025-09-29T09:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:19 crc kubenswrapper[4991]: I0929 09:39:19.925579 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:39:19 crc kubenswrapper[4991]: E0929 09:39:19.925755 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.020992 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.021040 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.021053 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.021070 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.021084 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:20Z","lastTransitionTime":"2025-09-29T09:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.124304 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.124411 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.124437 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.124474 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.124498 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:20Z","lastTransitionTime":"2025-09-29T09:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.227572 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.227618 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.227628 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.227644 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.227653 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:20Z","lastTransitionTime":"2025-09-29T09:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.330633 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.330691 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.330701 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.330719 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.330730 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:20Z","lastTransitionTime":"2025-09-29T09:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.434483 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.434527 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.434537 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.434556 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.434572 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:20Z","lastTransitionTime":"2025-09-29T09:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.538311 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.538381 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.538403 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.538431 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.538448 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:20Z","lastTransitionTime":"2025-09-29T09:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.641995 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.642084 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.642111 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.642147 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.642173 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:20Z","lastTransitionTime":"2025-09-29T09:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.745043 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.745089 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.745102 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.745118 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.745130 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:20Z","lastTransitionTime":"2025-09-29T09:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.847736 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.847852 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.847878 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.847908 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.848616 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:20Z","lastTransitionTime":"2025-09-29T09:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.926218 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.926284 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.926218 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:39:20 crc kubenswrapper[4991]: E0929 09:39:20.926461 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:39:20 crc kubenswrapper[4991]: E0929 09:39:20.926630 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:39:20 crc kubenswrapper[4991]: E0929 09:39:20.927074 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.951519 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.951569 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.951607 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.951624 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:20 crc kubenswrapper[4991]: I0929 09:39:20.951639 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:20Z","lastTransitionTime":"2025-09-29T09:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.054796 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.054862 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.054880 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.054912 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.054933 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:21Z","lastTransitionTime":"2025-09-29T09:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.157735 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.157793 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.157808 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.157828 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.157840 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:21Z","lastTransitionTime":"2025-09-29T09:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.260494 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.260524 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.260532 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.260546 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.260555 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:21Z","lastTransitionTime":"2025-09-29T09:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.364124 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.364189 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.364209 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.364241 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.364265 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:21Z","lastTransitionTime":"2025-09-29T09:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.466427 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.466495 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.466520 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.466545 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.466563 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:21Z","lastTransitionTime":"2025-09-29T09:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.569617 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.569909 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.569999 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.570080 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.570147 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:21Z","lastTransitionTime":"2025-09-29T09:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.673266 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.673315 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.673339 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.673366 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.673380 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:21Z","lastTransitionTime":"2025-09-29T09:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.775569 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.775985 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.776109 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.776222 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.776307 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:21Z","lastTransitionTime":"2025-09-29T09:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.878891 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.879256 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.879354 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.879448 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.879594 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:21Z","lastTransitionTime":"2025-09-29T09:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.926093 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:39:21 crc kubenswrapper[4991]: E0929 09:39:21.926286 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.982147 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.982205 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.982219 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.982242 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:21 crc kubenswrapper[4991]: I0929 09:39:21.982254 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:21Z","lastTransitionTime":"2025-09-29T09:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.089868 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.089926 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.089947 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.090012 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.090043 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:22Z","lastTransitionTime":"2025-09-29T09:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.192708 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.192776 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.192791 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.192815 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.192831 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:22Z","lastTransitionTime":"2025-09-29T09:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.211490 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.211543 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.211557 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.211582 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.211596 4991 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T09:39:22Z","lastTransitionTime":"2025-09-29T09:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.269198 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk46r"] Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.269989 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk46r" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.278495 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.278516 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.278585 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.281099 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.295615 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=73.295584318 podStartE2EDuration="1m13.295584318s" podCreationTimestamp="2025-09-29 09:38:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:39:22.295492115 +0000 UTC m=+98.151420153" watchObservedRunningTime="2025-09-29 09:39:22.295584318 +0000 UTC m=+98.151512346" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.326631 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=44.326601297 podStartE2EDuration="44.326601297s" podCreationTimestamp="2025-09-29 09:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:39:22.312716668 +0000 UTC m=+98.168644736" watchObservedRunningTime="2025-09-29 09:39:22.326601297 +0000 UTC m=+98.182529325" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.350366 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podStartSLOduration=77.350334192 podStartE2EDuration="1m17.350334192s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:39:22.327444151 +0000 UTC m=+98.183372219" watchObservedRunningTime="2025-09-29 09:39:22.350334192 +0000 UTC m=+98.206262260" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.368254 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-mm67g" podStartSLOduration=77.368227584 podStartE2EDuration="1m17.368227584s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:39:22.351393432 +0000 UTC m=+98.207321500" watchObservedRunningTime="2025-09-29 09:39:22.368227584 +0000 UTC m=+98.224155652" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.388215 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-td7ll" podStartSLOduration=78.388189583 podStartE2EDuration="1m18.388189583s" podCreationTimestamp="2025-09-29 09:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:39:22.369227522 +0000 UTC m=+98.225155590" watchObservedRunningTime="2025-09-29 09:39:22.388189583 +0000 UTC m=+98.244117661" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.388568 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-fffvr" podStartSLOduration=77.388559584 podStartE2EDuration="1m17.388559584s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:39:22.387196905 +0000 UTC m=+98.243124993" watchObservedRunningTime="2025-09-29 09:39:22.388559584 +0000 UTC m=+98.244487652" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.404315 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=30.404284224 podStartE2EDuration="30.404284224s" podCreationTimestamp="2025-09-29 09:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:39:22.404087729 +0000 UTC m=+98.260015757" watchObservedRunningTime="2025-09-29 09:39:22.404284224 +0000 UTC m=+98.260212252" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.407877 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3b1edf69-5c87-44b0-a649-df7b7f1e6452-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wk46r\" (UID: \"3b1edf69-5c87-44b0-a649-df7b7f1e6452\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk46r" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.407929 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3b1edf69-5c87-44b0-a649-df7b7f1e6452-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wk46r\" (UID: \"3b1edf69-5c87-44b0-a649-df7b7f1e6452\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk46r" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.407998 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b1edf69-5c87-44b0-a649-df7b7f1e6452-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wk46r\" (UID: \"3b1edf69-5c87-44b0-a649-df7b7f1e6452\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk46r" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.408040 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3b1edf69-5c87-44b0-a649-df7b7f1e6452-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wk46r\" (UID: \"3b1edf69-5c87-44b0-a649-df7b7f1e6452\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk46r" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.408070 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b1edf69-5c87-44b0-a649-df7b7f1e6452-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wk46r\" (UID: 
\"3b1edf69-5c87-44b0-a649-df7b7f1e6452\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk46r" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.436858 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=76.436822936 podStartE2EDuration="1m16.436822936s" podCreationTimestamp="2025-09-29 09:38:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:39:22.420476258 +0000 UTC m=+98.276404286" watchObservedRunningTime="2025-09-29 09:39:22.436822936 +0000 UTC m=+98.292750974" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.479226 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mwncr" podStartSLOduration=77.479205894 podStartE2EDuration="1m17.479205894s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:39:22.476071707 +0000 UTC m=+98.331999745" watchObservedRunningTime="2025-09-29 09:39:22.479205894 +0000 UTC m=+98.335133922" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.508786 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3b1edf69-5c87-44b0-a649-df7b7f1e6452-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wk46r\" (UID: \"3b1edf69-5c87-44b0-a649-df7b7f1e6452\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk46r" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.508862 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b1edf69-5c87-44b0-a649-df7b7f1e6452-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wk46r\" (UID: \"3b1edf69-5c87-44b0-a649-df7b7f1e6452\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk46r" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.508924 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3b1edf69-5c87-44b0-a649-df7b7f1e6452-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wk46r\" (UID: \"3b1edf69-5c87-44b0-a649-df7b7f1e6452\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk46r" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.508979 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b1edf69-5c87-44b0-a649-df7b7f1e6452-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wk46r\" (UID: \"3b1edf69-5c87-44b0-a649-df7b7f1e6452\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk46r" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.509049 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3b1edf69-5c87-44b0-a649-df7b7f1e6452-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wk46r\" (UID: \"3b1edf69-5c87-44b0-a649-df7b7f1e6452\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk46r" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.509139 4991 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3b1edf69-5c87-44b0-a649-df7b7f1e6452-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wk46r\" (UID: \"3b1edf69-5c87-44b0-a649-df7b7f1e6452\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk46r" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.509183 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3b1edf69-5c87-44b0-a649-df7b7f1e6452-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wk46r\" (UID: \"3b1edf69-5c87-44b0-a649-df7b7f1e6452\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk46r" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.510638 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3b1edf69-5c87-44b0-a649-df7b7f1e6452-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wk46r\" (UID: \"3b1edf69-5c87-44b0-a649-df7b7f1e6452\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk46r" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.517369 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b1edf69-5c87-44b0-a649-df7b7f1e6452-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wk46r\" (UID: \"3b1edf69-5c87-44b0-a649-df7b7f1e6452\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk46r" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.529153 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b1edf69-5c87-44b0-a649-df7b7f1e6452-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wk46r\" (UID: \"3b1edf69-5c87-44b0-a649-df7b7f1e6452\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk46r" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.545328 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-hmmqv" podStartSLOduration=78.545300827 podStartE2EDuration="1m18.545300827s" podCreationTimestamp="2025-09-29 09:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:39:22.544980718 +0000 UTC m=+98.400908766" watchObservedRunningTime="2025-09-29 09:39:22.545300827 +0000 UTC m=+98.401228855" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.594774 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk46r" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.623906 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=4.6238912800000005 podStartE2EDuration="4.62389128s" podCreationTimestamp="2025-09-29 09:39:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:39:22.619981 +0000 UTC m=+98.475909028" watchObservedRunningTime="2025-09-29 09:39:22.62389128 +0000 UTC m=+98.479819308" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.926190 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.926322 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:39:22 crc kubenswrapper[4991]: E0929 09:39:22.926391 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:39:22 crc kubenswrapper[4991]: I0929 09:39:22.926469 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:39:22 crc kubenswrapper[4991]: E0929 09:39:22.926574 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:39:22 crc kubenswrapper[4991]: E0929 09:39:22.926684 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:39:23 crc kubenswrapper[4991]: I0929 09:39:23.579641 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk46r" event={"ID":"3b1edf69-5c87-44b0-a649-df7b7f1e6452","Type":"ContainerStarted","Data":"5aed1d25510cde4363058f36d62d6bdd9fbafa8ff6066ba88796b63fb7548e64"} Sep 29 09:39:23 crc kubenswrapper[4991]: I0929 09:39:23.579729 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk46r" event={"ID":"3b1edf69-5c87-44b0-a649-df7b7f1e6452","Type":"ContainerStarted","Data":"60374cf9c16047c4d965cd797b0d054b2083a0fd2574a71804ba91e861cf7969"} Sep 29 09:39:23 crc kubenswrapper[4991]: I0929 09:39:23.602533 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk46r" podStartSLOduration=78.602512771 podStartE2EDuration="1m18.602512771s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:39:23.602050638 +0000 UTC m=+99.457978716" watchObservedRunningTime="2025-09-29 09:39:23.602512771 +0000 UTC m=+99.458440819" Sep 29 09:39:23 crc kubenswrapper[4991]: I0929 09:39:23.822720 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs\") pod \"network-metrics-daemon-7m5sp\" (UID: \"48c35818-43cb-4bbf-bf05-37fd375d0d70\") " pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:39:23 crc kubenswrapper[4991]: E0929 09:39:23.822963 4991 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 29 09:39:23 crc kubenswrapper[4991]: E0929 09:39:23.823058 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs podName:48c35818-43cb-4bbf-bf05-37fd375d0d70 nodeName:}" failed. No retries permitted until 2025-09-29 09:40:27.823033152 +0000 UTC m=+163.678961180 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs") pod "network-metrics-daemon-7m5sp" (UID: "48c35818-43cb-4bbf-bf05-37fd375d0d70") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 29 09:39:23 crc kubenswrapper[4991]: I0929 09:39:23.925987 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:39:23 crc kubenswrapper[4991]: E0929 09:39:23.926286 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:39:24 crc kubenswrapper[4991]: I0929 09:39:24.925715 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:39:24 crc kubenswrapper[4991]: I0929 09:39:24.925802 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:39:24 crc kubenswrapper[4991]: E0929 09:39:24.925885 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:39:24 crc kubenswrapper[4991]: E0929 09:39:24.928107 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:39:24 crc kubenswrapper[4991]: I0929 09:39:24.928272 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:39:24 crc kubenswrapper[4991]: E0929 09:39:24.928375 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:39:25 crc kubenswrapper[4991]: I0929 09:39:25.926017 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:39:25 crc kubenswrapper[4991]: E0929 09:39:25.926142 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:39:26 crc kubenswrapper[4991]: I0929 09:39:26.925605 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:39:26 crc kubenswrapper[4991]: I0929 09:39:26.925623 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:39:26 crc kubenswrapper[4991]: E0929 09:39:26.925784 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:39:26 crc kubenswrapper[4991]: E0929 09:39:26.925888 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:39:26 crc kubenswrapper[4991]: I0929 09:39:26.926092 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:39:26 crc kubenswrapper[4991]: E0929 09:39:26.926289 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:39:27 crc kubenswrapper[4991]: I0929 09:39:27.925448 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:39:27 crc kubenswrapper[4991]: E0929 09:39:27.925648 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:39:28 crc kubenswrapper[4991]: I0929 09:39:28.925360 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:39:28 crc kubenswrapper[4991]: I0929 09:39:28.925442 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:39:28 crc kubenswrapper[4991]: I0929 09:39:28.925361 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:39:28 crc kubenswrapper[4991]: E0929 09:39:28.925574 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:39:28 crc kubenswrapper[4991]: E0929 09:39:28.925711 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:39:28 crc kubenswrapper[4991]: E0929 09:39:28.925794 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:39:29 crc kubenswrapper[4991]: I0929 09:39:29.925772 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:39:29 crc kubenswrapper[4991]: E0929 09:39:29.926198 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:39:30 crc kubenswrapper[4991]: I0929 09:39:30.925585 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:39:30 crc kubenswrapper[4991]: I0929 09:39:30.925665 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:39:30 crc kubenswrapper[4991]: I0929 09:39:30.925776 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:39:30 crc kubenswrapper[4991]: E0929 09:39:30.925975 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:39:30 crc kubenswrapper[4991]: E0929 09:39:30.926142 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:39:30 crc kubenswrapper[4991]: E0929 09:39:30.926536 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:39:30 crc kubenswrapper[4991]: I0929 09:39:30.926906 4991 scope.go:117] "RemoveContainer" containerID="ce2f8d15feab9327b16d994783fdfafd8a8eb66e985bcf1aa1306160b78390fa" Sep 29 09:39:30 crc kubenswrapper[4991]: E0929 09:39:30.927138 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-h4hm4_openshift-ovn-kubernetes(14c96fef-6218-4f25-8f81-f7adc934b0d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" Sep 29 09:39:31 crc kubenswrapper[4991]: I0929 09:39:31.926030 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:39:31 crc kubenswrapper[4991]: E0929 09:39:31.926296 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:39:32 crc kubenswrapper[4991]: I0929 09:39:32.925520 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:39:32 crc kubenswrapper[4991]: I0929 09:39:32.925606 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:39:32 crc kubenswrapper[4991]: I0929 09:39:32.925627 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:39:32 crc kubenswrapper[4991]: E0929 09:39:32.925736 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:39:32 crc kubenswrapper[4991]: E0929 09:39:32.925897 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:39:32 crc kubenswrapper[4991]: E0929 09:39:32.926180 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:39:33 crc kubenswrapper[4991]: I0929 09:39:33.925510 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:39:33 crc kubenswrapper[4991]: E0929 09:39:33.925966 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70" Sep 29 09:39:34 crc kubenswrapper[4991]: I0929 09:39:34.925788 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:39:34 crc kubenswrapper[4991]: E0929 09:39:34.925935 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 29 09:39:34 crc kubenswrapper[4991]: I0929 09:39:34.925791 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:39:34 crc kubenswrapper[4991]: I0929 09:39:34.925797 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:39:34 crc kubenswrapper[4991]: E0929 09:39:34.929407 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 29 09:39:34 crc kubenswrapper[4991]: E0929 09:39:34.929640 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 29 09:39:35 crc kubenswrapper[4991]: I0929 09:39:35.925455 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:39:35 crc kubenswrapper[4991]: E0929 09:39:35.925636 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
Sep 29 09:39:36 crc kubenswrapper[4991]: I0929 09:39:36.925884 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:39:36 crc kubenswrapper[4991]: I0929 09:39:36.926034 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:39:36 crc kubenswrapper[4991]: E0929 09:39:36.926119 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:39:36 crc kubenswrapper[4991]: I0929 09:39:36.925884 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:39:36 crc kubenswrapper[4991]: E0929 09:39:36.926207 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:39:36 crc kubenswrapper[4991]: E0929 09:39:36.926472 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:39:37 crc kubenswrapper[4991]: I0929 09:39:37.926339 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:39:37 crc kubenswrapper[4991]: E0929 09:39:37.926572 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
Sep 29 09:39:38 crc kubenswrapper[4991]: I0929 09:39:38.925588 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:39:38 crc kubenswrapper[4991]: I0929 09:39:38.925714 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:39:38 crc kubenswrapper[4991]: E0929 09:39:38.925755 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:39:38 crc kubenswrapper[4991]: I0929 09:39:38.925792 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:39:38 crc kubenswrapper[4991]: E0929 09:39:38.925915 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:39:38 crc kubenswrapper[4991]: E0929 09:39:38.926105 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:39:39 crc kubenswrapper[4991]: I0929 09:39:39.636177 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mm67g_f36a89bf-ee7b-4bf7-bc61-9ea099661bd1/kube-multus/1.log"
Sep 29 09:39:39 crc kubenswrapper[4991]: I0929 09:39:39.637344 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mm67g_f36a89bf-ee7b-4bf7-bc61-9ea099661bd1/kube-multus/0.log"
Sep 29 09:39:39 crc kubenswrapper[4991]: I0929 09:39:39.637416 4991 generic.go:334] "Generic (PLEG): container finished" podID="f36a89bf-ee7b-4bf7-bc61-9ea099661bd1" containerID="93b598d670e53dad1dc4fbce312b96e5b3135101a1dd6b1b7440d8e70da068f0" exitCode=1
Sep 29 09:39:39 crc kubenswrapper[4991]: I0929 09:39:39.637461 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mm67g" event={"ID":"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1","Type":"ContainerDied","Data":"93b598d670e53dad1dc4fbce312b96e5b3135101a1dd6b1b7440d8e70da068f0"}
Sep 29 09:39:39 crc kubenswrapper[4991]: I0929 09:39:39.637515 4991 scope.go:117] "RemoveContainer" containerID="17a3fef06c930f281bcdddeea3fc6e2ca6dbb1889489cdb6b3902cae1c7f836c"
Sep 29 09:39:39 crc kubenswrapper[4991]: I0929 09:39:39.638409 4991 scope.go:117] "RemoveContainer" containerID="93b598d670e53dad1dc4fbce312b96e5b3135101a1dd6b1b7440d8e70da068f0"
Sep 29 09:39:39 crc kubenswrapper[4991]: E0929 09:39:39.640782 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-mm67g_openshift-multus(f36a89bf-ee7b-4bf7-bc61-9ea099661bd1)\"" pod="openshift-multus/multus-mm67g" podUID="f36a89bf-ee7b-4bf7-bc61-9ea099661bd1"
Sep 29 09:39:39 crc kubenswrapper[4991]: I0929 09:39:39.925988 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:39:39 crc kubenswrapper[4991]: E0929 09:39:39.926187 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
Sep 29 09:39:40 crc kubenswrapper[4991]: I0929 09:39:40.642326 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mm67g_f36a89bf-ee7b-4bf7-bc61-9ea099661bd1/kube-multus/1.log"
Sep 29 09:39:40 crc kubenswrapper[4991]: I0929 09:39:40.925585 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:39:40 crc kubenswrapper[4991]: I0929 09:39:40.925656 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:39:40 crc kubenswrapper[4991]: E0929 09:39:40.925749 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:39:40 crc kubenswrapper[4991]: E0929 09:39:40.925843 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:39:40 crc kubenswrapper[4991]: I0929 09:39:40.925940 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:39:40 crc kubenswrapper[4991]: E0929 09:39:40.926231 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:39:41 crc kubenswrapper[4991]: I0929 09:39:41.925932 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:39:41 crc kubenswrapper[4991]: E0929 09:39:41.926873 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
Sep 29 09:39:42 crc kubenswrapper[4991]: I0929 09:39:42.925282 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:39:42 crc kubenswrapper[4991]: I0929 09:39:42.925372 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:39:42 crc kubenswrapper[4991]: I0929 09:39:42.925328 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:39:42 crc kubenswrapper[4991]: E0929 09:39:42.925831 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:39:42 crc kubenswrapper[4991]: I0929 09:39:42.926173 4991 scope.go:117] "RemoveContainer" containerID="ce2f8d15feab9327b16d994783fdfafd8a8eb66e985bcf1aa1306160b78390fa"
Sep 29 09:39:42 crc kubenswrapper[4991]: E0929 09:39:42.926221 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:39:42 crc kubenswrapper[4991]: E0929 09:39:42.926525 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:39:43 crc kubenswrapper[4991]: I0929 09:39:43.654765 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovnkube-controller/3.log"
Sep 29 09:39:43 crc kubenswrapper[4991]: I0929 09:39:43.657434 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerStarted","Data":"156a735b6aa5d10b99e6e4177ae054bed904a9b1c86784ae54a887fa808701b2"}
Sep 29 09:39:43 crc kubenswrapper[4991]: I0929 09:39:43.658146 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:39:43 crc kubenswrapper[4991]: I0929 09:39:43.686117 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" podStartSLOduration=98.68609633 podStartE2EDuration="1m38.68609633s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:39:43.684884706 +0000 UTC m=+119.540812754" watchObservedRunningTime="2025-09-29 09:39:43.68609633 +0000 UTC m=+119.542024358"
Sep 29 09:39:43 crc kubenswrapper[4991]: I0929 09:39:43.766981 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7m5sp"]
Sep 29 09:39:43 crc kubenswrapper[4991]: I0929 09:39:43.768175 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:39:43 crc kubenswrapper[4991]: E0929 09:39:43.768338 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
Sep 29 09:39:44 crc kubenswrapper[4991]: E0929 09:39:44.905890 4991 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Sep 29 09:39:44 crc kubenswrapper[4991]: I0929 09:39:44.925637 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:39:44 crc kubenswrapper[4991]: I0929 09:39:44.925865 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:39:44 crc kubenswrapper[4991]: E0929 09:39:44.926872 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:39:44 crc kubenswrapper[4991]: I0929 09:39:44.927087 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:39:44 crc kubenswrapper[4991]: E0929 09:39:44.927165 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:39:44 crc kubenswrapper[4991]: E0929 09:39:44.927424 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:39:45 crc kubenswrapper[4991]: E0929 09:39:45.031914 4991 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Sep 29 09:39:45 crc kubenswrapper[4991]: I0929 09:39:45.925735 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:39:45 crc kubenswrapper[4991]: E0929 09:39:45.926006 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
Sep 29 09:39:46 crc kubenswrapper[4991]: I0929 09:39:46.926079 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:39:46 crc kubenswrapper[4991]: I0929 09:39:46.926092 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:39:46 crc kubenswrapper[4991]: I0929 09:39:46.926116 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:39:46 crc kubenswrapper[4991]: E0929 09:39:46.926300 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:39:46 crc kubenswrapper[4991]: E0929 09:39:46.926607 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:39:46 crc kubenswrapper[4991]: E0929 09:39:46.926693 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:39:47 crc kubenswrapper[4991]: I0929 09:39:47.926233 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:39:47 crc kubenswrapper[4991]: E0929 09:39:47.926402 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
Sep 29 09:39:48 crc kubenswrapper[4991]: I0929 09:39:48.925360 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:39:48 crc kubenswrapper[4991]: I0929 09:39:48.925373 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:39:48 crc kubenswrapper[4991]: E0929 09:39:48.925821 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:39:48 crc kubenswrapper[4991]: E0929 09:39:48.926141 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:39:48 crc kubenswrapper[4991]: I0929 09:39:48.926178 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:39:48 crc kubenswrapper[4991]: E0929 09:39:48.926309 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:39:49 crc kubenswrapper[4991]: I0929 09:39:49.925499 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:39:49 crc kubenswrapper[4991]: E0929 09:39:49.925680 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
Sep 29 09:39:50 crc kubenswrapper[4991]: E0929 09:39:50.033244 4991 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Sep 29 09:39:50 crc kubenswrapper[4991]: I0929 09:39:50.926161 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:39:50 crc kubenswrapper[4991]: I0929 09:39:50.926230 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:39:50 crc kubenswrapper[4991]: I0929 09:39:50.926161 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:39:50 crc kubenswrapper[4991]: E0929 09:39:50.926487 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:39:50 crc kubenswrapper[4991]: E0929 09:39:50.926594 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:39:50 crc kubenswrapper[4991]: E0929 09:39:50.926708 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:39:51 crc kubenswrapper[4991]: I0929 09:39:51.926138 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:39:51 crc kubenswrapper[4991]: E0929 09:39:51.926793 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
Sep 29 09:39:52 crc kubenswrapper[4991]: I0929 09:39:52.926128 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:39:52 crc kubenswrapper[4991]: I0929 09:39:52.926145 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:39:52 crc kubenswrapper[4991]: E0929 09:39:52.926480 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:39:52 crc kubenswrapper[4991]: I0929 09:39:52.926169 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:39:52 crc kubenswrapper[4991]: E0929 09:39:52.926604 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:39:52 crc kubenswrapper[4991]: E0929 09:39:52.926717 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:39:53 crc kubenswrapper[4991]: I0929 09:39:53.925807 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:39:53 crc kubenswrapper[4991]: E0929 09:39:53.926150 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
Sep 29 09:39:53 crc kubenswrapper[4991]: I0929 09:39:53.926406 4991 scope.go:117] "RemoveContainer" containerID="93b598d670e53dad1dc4fbce312b96e5b3135101a1dd6b1b7440d8e70da068f0"
Sep 29 09:39:54 crc kubenswrapper[4991]: I0929 09:39:54.706667 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mm67g_f36a89bf-ee7b-4bf7-bc61-9ea099661bd1/kube-multus/1.log"
Sep 29 09:39:54 crc kubenswrapper[4991]: I0929 09:39:54.706738 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mm67g" event={"ID":"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1","Type":"ContainerStarted","Data":"216d8c67e1329d41aef834e8edeaec99ed60a5fa420c9046d51851a12bf5080f"}
Sep 29 09:39:54 crc kubenswrapper[4991]: I0929 09:39:54.927369 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:39:54 crc kubenswrapper[4991]: E0929 09:39:54.927508 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:39:54 crc kubenswrapper[4991]: I0929 09:39:54.927762 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:39:54 crc kubenswrapper[4991]: E0929 09:39:54.927839 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:39:54 crc kubenswrapper[4991]: I0929 09:39:54.928140 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:39:54 crc kubenswrapper[4991]: E0929 09:39:54.928230 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:39:55 crc kubenswrapper[4991]: E0929 09:39:55.033917 4991 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Sep 29 09:39:55 crc kubenswrapper[4991]: I0929 09:39:55.925211 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:39:55 crc kubenswrapper[4991]: E0929 09:39:55.925394 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
Sep 29 09:39:56 crc kubenswrapper[4991]: I0929 09:39:56.926201 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:39:56 crc kubenswrapper[4991]: I0929 09:39:56.926270 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:39:56 crc kubenswrapper[4991]: E0929 09:39:56.926437 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:39:56 crc kubenswrapper[4991]: E0929 09:39:56.926715 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:39:56 crc kubenswrapper[4991]: I0929 09:39:56.926772 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:39:56 crc kubenswrapper[4991]: E0929 09:39:56.926936 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:39:57 crc kubenswrapper[4991]: I0929 09:39:57.925674 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:39:57 crc kubenswrapper[4991]: E0929 09:39:57.925870 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
Sep 29 09:39:58 crc kubenswrapper[4991]: I0929 09:39:58.926219 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:39:58 crc kubenswrapper[4991]: I0929 09:39:58.926219 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:39:58 crc kubenswrapper[4991]: E0929 09:39:58.926435 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 29 09:39:58 crc kubenswrapper[4991]: E0929 09:39:58.926568 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 29 09:39:58 crc kubenswrapper[4991]: I0929 09:39:58.926255 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:39:58 crc kubenswrapper[4991]: E0929 09:39:58.926744 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 29 09:39:59 crc kubenswrapper[4991]: I0929 09:39:59.925442 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:39:59 crc kubenswrapper[4991]: E0929 09:39:59.925648 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7m5sp" podUID="48c35818-43cb-4bbf-bf05-37fd375d0d70"
Sep 29 09:40:00 crc kubenswrapper[4991]: I0929 09:40:00.074313 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4"
Sep 29 09:40:00 crc kubenswrapper[4991]: I0929 09:40:00.925647 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 29 09:40:00 crc kubenswrapper[4991]: I0929 09:40:00.925756 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 29 09:40:00 crc kubenswrapper[4991]: I0929 09:40:00.925827 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:40:00 crc kubenswrapper[4991]: I0929 09:40:00.930011 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Sep 29 09:40:00 crc kubenswrapper[4991]: I0929 09:40:00.930818 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Sep 29 09:40:00 crc kubenswrapper[4991]: I0929 09:40:00.930886 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Sep 29 09:40:00 crc kubenswrapper[4991]: I0929 09:40:00.932408 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Sep 29 09:40:01 crc kubenswrapper[4991]: I0929 09:40:01.925345 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp"
Sep 29 09:40:01 crc kubenswrapper[4991]: I0929 09:40:01.928922 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Sep 29 09:40:01 crc kubenswrapper[4991]: I0929 09:40:01.928923 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.300409 4991 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.359184 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d"]
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.360088 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-r4x2n"]
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.360647 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-r4x2n"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.360726 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2k9w4"]
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.361376 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.363504 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dmrh5"]
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.364008 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-kv7nf"]
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.367259 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph"]
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.367554 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kv7nf"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.364313 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5"
Sep 29 09:40:03 crc kubenswrapper[4991]: W0929 09:40:03.368361 4991 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object
Sep 29 09:40:03 crc kubenswrapper[4991]: E0929 09:40:03.368422 4991 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Sep 29 09:40:03 crc kubenswrapper[4991]: W0929 09:40:03.368853 4991 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Sep 29 09:40:03 crc kubenswrapper[4991]: E0929 09:40:03.368896 4991 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.364379 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2k9w4"
Sep 29 09:40:03 crc kubenswrapper[4991]: W0929 09:40:03.369111 4991 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: configmaps "machine-api-operator-images" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Sep 29 09:40:03 crc kubenswrapper[4991]: E0929 09:40:03.369130 4991 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-api-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Sep 29 09:40:03 crc kubenswrapper[4991]: W0929 09:40:03.369174 4991 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: secrets "machine-api-operator-dockercfg-mfbb7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Sep 29 09:40:03 crc kubenswrapper[4991]: E0929 09:40:03.369186 4991 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-dockercfg-mfbb7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.369190 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.369321 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.369768 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-78v2g"]
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.370130 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-jqqrh"]
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.370420 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tqxb6"]
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.370860 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.371929 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f9tdw"]
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.372136 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jqqrh"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.372165 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-78v2g"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.372228 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.372764 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tqxb6"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.372932 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsdb4"]
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.373236 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsdb4"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.373318 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.381587 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.392039 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.392446 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.392613 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.392932 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.393046 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.393330 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.393341 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.394493 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7wzsh"]
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.395312 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7wzsh"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.397236 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.397626 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.398173 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.398814 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.399012 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.399168 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428189 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428384 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428387 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b760848b-cfca-4339-94de-d3a74132a24f-serving-cert\") pod \"authentication-operator-69f744f599-78v2g\" (UID: \"b760848b-cfca-4339-94de-d3a74132a24f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78v2g"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428425 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngccf\" (UniqueName: \"kubernetes.io/projected/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-kube-api-access-ngccf\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428449 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428468 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428504 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428524 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g99hx\" (UniqueName: \"kubernetes.io/projected/34f20ab2-1d2b-4a17-923b-ad9151c86dcf-kube-api-access-g99hx\") pod \"machine-api-operator-5694c8668f-r4x2n\" (UID: \"34f20ab2-1d2b-4a17-923b-ad9151c86dcf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4x2n"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428545 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428564 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/259d860c-d08f-4753-9e8b-f059fea942f5-client-ca\") pod \"route-controller-manager-6576b87f9c-q6v2d\" (UID: \"259d860c-d08f-4753-9e8b-f059fea942f5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428580 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-audit-dir\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428596 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/34f20ab2-1d2b-4a17-923b-ad9151c86dcf-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-r4x2n\" (UID: \"34f20ab2-1d2b-4a17-923b-ad9151c86dcf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4x2n"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428622 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-node-pullsecrets\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428657 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-etcd-client\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428685 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428706 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428723 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428744 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-audit-policies\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428760 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428780 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-audit-dir\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428805 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b760848b-cfca-4339-94de-d3a74132a24f-service-ca-bundle\") pod \"authentication-operator-69f744f599-78v2g\" (UID: \"b760848b-cfca-4339-94de-d3a74132a24f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78v2g"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428820 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd25s\" (UniqueName: \"kubernetes.io/projected/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-kube-api-access-vd25s\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428837 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kkcz\" (UniqueName: \"kubernetes.io/projected/95b7767c-0016-4b14-aed1-824548b4d6da-kube-api-access-4kkcz\") pod \"openshift-apiserver-operator-796bbdcf4f-2k9w4\" (UID: \"95b7767c-0016-4b14-aed1-824548b4d6da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2k9w4"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428857 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/259d860c-d08f-4753-9e8b-f059fea942f5-config\") pod \"route-controller-manager-6576b87f9c-q6v2d\" (UID: \"259d860c-d08f-4753-9e8b-f059fea942f5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428874 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b7767c-0016-4b14-aed1-824548b4d6da-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2k9w4\" (UID: \"95b7767c-0016-4b14-aed1-824548b4d6da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2k9w4"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428894 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428911 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b760848b-cfca-4339-94de-d3a74132a24f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-78v2g\" (UID: \"b760848b-cfca-4339-94de-d3a74132a24f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78v2g"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428925 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/259d860c-d08f-4753-9e8b-f059fea942f5-serving-cert\") pod \"route-controller-manager-6576b87f9c-q6v2d\" (UID: \"259d860c-d08f-4753-9e8b-f059fea942f5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428971 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b760848b-cfca-4339-94de-d3a74132a24f-config\") pod \"authentication-operator-69f744f599-78v2g\" (UID: \"b760848b-cfca-4339-94de-d3a74132a24f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78v2g"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.428993 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34f20ab2-1d2b-4a17-923b-ad9151c86dcf-config\") pod \"machine-api-operator-5694c8668f-r4x2n\" (UID: \"34f20ab2-1d2b-4a17-923b-ad9151c86dcf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4x2n"
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.429011 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-encryption-config\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph"
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-encryption-config\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.429028 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt6xs\" (UniqueName: \"kubernetes.io/projected/259d860c-d08f-4753-9e8b-f059fea942f5-kube-api-access-vt6xs\") pod \"route-controller-manager-6576b87f9c-q6v2d\" (UID: \"259d860c-d08f-4753-9e8b-f059fea942f5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.429042 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/34f20ab2-1d2b-4a17-923b-ad9151c86dcf-images\") pod \"machine-api-operator-5694c8668f-r4x2n\" (UID: \"34f20ab2-1d2b-4a17-923b-ad9151c86dcf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4x2n" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.429084 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdthv"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.429159 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.429142 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-serving-cert\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.429229 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.429302 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.429303 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.429494 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.429500 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.429577 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.429635 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdthv" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.429724 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.429989 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.430080 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.430102 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.430320 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.430347 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.430160 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.430563 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.430591 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw889\" (UniqueName: \"kubernetes.io/projected/b760848b-cfca-4339-94de-d3a74132a24f-kube-api-access-sw889\") pod \"authentication-operator-69f744f599-78v2g\" (UID: \"b760848b-cfca-4339-94de-d3a74132a24f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78v2g" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.430608 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95b7767c-0016-4b14-aed1-824548b4d6da-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2k9w4\" (UID: \"95b7767c-0016-4b14-aed1-824548b4d6da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2k9w4" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.430694 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-audit-policies\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.430717 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-trusted-ca-bundle\") pod 
\"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.430786 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.430806 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.431180 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zznwr"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.431565 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.431812 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zznwr" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.434625 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.435719 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.435846 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.441135 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.441339 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.436040 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.441534 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.438690 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.441839 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.438783 4991 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"audit" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.442096 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.442051 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9wn"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.438829 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.438868 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.438894 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.438921 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.442887 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9wn" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.438920 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.439038 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.439055 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.439124 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.439213 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.439248 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.439284 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.439325 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.439365 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.439390 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.439437 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 
09:40:03.439445 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.439489 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.439597 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.439846 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.439986 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.440034 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.440097 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.440141 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.440974 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.441040 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.441729 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.441992 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.447570 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.453435 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.454532 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.457400 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.458874 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.459838 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2t8m7"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.461455 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2tsv6"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 
09:40:03.462004 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2t8m7" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.463681 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.488966 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7fnch"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.489734 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cpc86"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.490523 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpc86" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.490587 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.491138 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2tsv6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.492191 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7fnch" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.505195 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.508581 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6g68t"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.509998 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d2gfk"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.510470 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.510556 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l2tz9"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.510687 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.511123 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.511395 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.511543 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l2tz9" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.511589 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.512529 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.514547 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.515137 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.516015 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.516208 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.516236 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.516333 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.516370 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.516537 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.516701 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.516732 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.516750 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.516836 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.516861 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.516915 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.517220 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Sep 29 09:40:03 crc 
kubenswrapper[4991]: I0929 09:40:03.519267 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.520069 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.520248 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hz2qk"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.521017 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.521040 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wcmnr"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.523278 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.530966 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vql8m"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.531380 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vfsmj"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.532233 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wcmnr" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.533253 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vfsmj" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.533643 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vql8m" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.536939 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-x6kbg"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.537027 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-audit-policies\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.537844 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x6kbg" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.537872 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.537909 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.537986 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzxs6\" (UniqueName: \"kubernetes.io/projected/00e00a41-260b-45b5-a54d-c18ded602aa6-kube-api-access-rzxs6\") pod \"controller-manager-879f6c89f-dmrh5\" (UID: \"00e00a41-260b-45b5-a54d-c18ded602aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.538016 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/91e5a020-6f2d-4996-8af6-8f5541ce12b1-machine-approver-tls\") pod \"machine-approver-56656f9798-kv7nf\" (UID: \"91e5a020-6f2d-4996-8af6-8f5541ce12b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kv7nf" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.538043 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0c1005c-2a71-47c1-8f5a-75fff673d6ce-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bsdb4\" (UID: \"a0c1005c-2a71-47c1-8f5a-75fff673d6ce\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsdb4" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.538070 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.538101 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f73e1250-00a9-4975-9b10-21b456ad2ed0-srv-cert\") pod \"catalog-operator-68c6474976-7wzsh\" (UID: \"f73e1250-00a9-4975-9b10-21b456ad2ed0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7wzsh" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.538908 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00e00a41-260b-45b5-a54d-c18ded602aa6-serving-cert\") pod \"controller-manager-879f6c89f-dmrh5\" (UID: 
\"00e00a41-260b-45b5-a54d-c18ded602aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.538996 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b760848b-cfca-4339-94de-d3a74132a24f-serving-cert\") pod \"authentication-operator-69f744f599-78v2g\" (UID: \"b760848b-cfca-4339-94de-d3a74132a24f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78v2g" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.538774 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.539056 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-image-import-ca\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.539264 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngccf\" (UniqueName: \"kubernetes.io/projected/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-kube-api-access-ngccf\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.539307 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.539355 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.539387 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-serving-cert\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.539414 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 
09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.539442 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g99hx\" (UniqueName: \"kubernetes.io/projected/34f20ab2-1d2b-4a17-923b-ad9151c86dcf-kube-api-access-g99hx\") pod \"machine-api-operator-5694c8668f-r4x2n\" (UID: \"34f20ab2-1d2b-4a17-923b-ad9151c86dcf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4x2n" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.539493 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.540069 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.540267 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nznvg\" (UniqueName: \"kubernetes.io/projected/d949b064-ca4d-454e-a0d9-2aa9dd40d4e1-kube-api-access-nznvg\") pod \"downloads-7954f5f757-jqqrh\" (UID: \"d949b064-ca4d-454e-a0d9-2aa9dd40d4e1\") " pod="openshift-console/downloads-7954f5f757-jqqrh" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.540847 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.541764 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.543345 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.543394 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/91e5a020-6f2d-4996-8af6-8f5541ce12b1-auth-proxy-config\") pod \"machine-approver-56656f9798-kv7nf\" (UID: \"91e5a020-6f2d-4996-8af6-8f5541ce12b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kv7nf" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.543608 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfr46"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.544413 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0c1005c-2a71-47c1-8f5a-75fff673d6ce-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bsdb4\" (UID: \"a0c1005c-2a71-47c1-8f5a-75fff673d6ce\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsdb4" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.545253 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/259d860c-d08f-4753-9e8b-f059fea942f5-client-ca\") pod \"route-controller-manager-6576b87f9c-q6v2d\" (UID: \"259d860c-d08f-4753-9e8b-f059fea942f5\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.545409 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-audit-dir\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.545502 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00e00a41-260b-45b5-a54d-c18ded602aa6-config\") pod \"controller-manager-879f6c89f-dmrh5\" (UID: \"00e00a41-260b-45b5-a54d-c18ded602aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.545612 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00e00a41-260b-45b5-a54d-c18ded602aa6-client-ca\") pod \"controller-manager-879f6c89f-dmrh5\" (UID: \"00e00a41-260b-45b5-a54d-c18ded602aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.545724 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91e5a020-6f2d-4996-8af6-8f5541ce12b1-config\") pod \"machine-approver-56656f9798-kv7nf\" (UID: \"91e5a020-6f2d-4996-8af6-8f5541ce12b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kv7nf" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.545824 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/34f20ab2-1d2b-4a17-923b-ad9151c86dcf-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-r4x2n\" (UID: \"34f20ab2-1d2b-4a17-923b-ad9151c86dcf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4x2n" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.545924 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a0c1005c-2a71-47c1-8f5a-75fff673d6ce-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bsdb4\" (UID: \"a0c1005c-2a71-47c1-8f5a-75fff673d6ce\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsdb4" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.546038 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-node-pullsecrets\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.546175 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-etcd-client\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.546303 4991 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dhmk\" (UniqueName: \"kubernetes.io/projected/a0c1005c-2a71-47c1-8f5a-75fff673d6ce-kube-api-access-8dhmk\") pod \"cluster-image-registry-operator-dc59b4c8b-bsdb4\" (UID: \"a0c1005c-2a71-47c1-8f5a-75fff673d6ce\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsdb4" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.546404 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-audit\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.547985 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.548123 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-etcd-serving-ca\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.548225 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00e00a41-260b-45b5-a54d-c18ded602aa6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dmrh5\" (UID: \"00e00a41-260b-45b5-a54d-c18ded602aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.548315 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.548407 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.548504 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-audit-policies\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.548601 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.548699 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-audit-dir\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.548819 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b760848b-cfca-4339-94de-d3a74132a24f-service-ca-bundle\") pod \"authentication-operator-69f744f599-78v2g\" (UID: \"b760848b-cfca-4339-94de-d3a74132a24f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78v2g" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.548922 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd25s\" (UniqueName: \"kubernetes.io/projected/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-kube-api-access-vd25s\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.549021 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kkcz\" (UniqueName: \"kubernetes.io/projected/95b7767c-0016-4b14-aed1-824548b4d6da-kube-api-access-4kkcz\") pod \"openshift-apiserver-operator-796bbdcf4f-2k9w4\" (UID: \"95b7767c-0016-4b14-aed1-824548b4d6da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2k9w4" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.549122 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-etcd-client\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.549223 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/259d860c-d08f-4753-9e8b-f059fea942f5-config\") pod \"route-controller-manager-6576b87f9c-q6v2d\" (UID: \"259d860c-d08f-4753-9e8b-f059fea942f5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.549321 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b7767c-0016-4b14-aed1-824548b4d6da-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2k9w4\" (UID: \"95b7767c-0016-4b14-aed1-824548b4d6da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2k9w4" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.549424 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: 
\"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.549516 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b760848b-cfca-4339-94de-d3a74132a24f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-78v2g\" (UID: \"b760848b-cfca-4339-94de-d3a74132a24f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78v2g" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.549604 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/259d860c-d08f-4753-9e8b-f059fea942f5-serving-cert\") pod \"route-controller-manager-6576b87f9c-q6v2d\" (UID: \"259d860c-d08f-4753-9e8b-f059fea942f5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.549693 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whqvm\" (UniqueName: \"kubernetes.io/projected/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-kube-api-access-whqvm\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.544860 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mb78"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.550449 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b760848b-cfca-4339-94de-d3a74132a24f-config\") pod \"authentication-operator-69f744f599-78v2g\" (UID: \"b760848b-cfca-4339-94de-d3a74132a24f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78v2g" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.547410 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-node-pullsecrets\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.552253 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b760848b-cfca-4339-94de-d3a74132a24f-serving-cert\") pod \"authentication-operator-69f744f599-78v2g\" (UID: \"b760848b-cfca-4339-94de-d3a74132a24f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78v2g" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.552399 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5xs8f"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.552488 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b7767c-0016-4b14-aed1-824548b4d6da-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2k9w4\" (UID: \"95b7767c-0016-4b14-aed1-824548b4d6da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2k9w4" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.544604 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfr46" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.553275 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b760848b-cfca-4339-94de-d3a74132a24f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-78v2g\" (UID: \"b760848b-cfca-4339-94de-d3a74132a24f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78v2g" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.553388 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/34f20ab2-1d2b-4a17-923b-ad9151c86dcf-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-r4x2n\" (UID: \"34f20ab2-1d2b-4a17-923b-ad9151c86dcf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4x2n" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.547303 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.547041 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-audit-dir\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.553738 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.553762 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-audit-policies\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.546984 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/259d860c-d08f-4753-9e8b-f059fea942f5-client-ca\") pod \"route-controller-manager-6576b87f9c-q6v2d\" (UID: \"259d860c-d08f-4753-9e8b-f059fea942f5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.553842 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-audit-dir\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.549798 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b760848b-cfca-4339-94de-d3a74132a24f-config\") pod \"authentication-operator-69f744f599-78v2g\" (UID: \"b760848b-cfca-4339-94de-d3a74132a24f\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-78v2g" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.554097 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-encryption-config\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.554134 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34f20ab2-1d2b-4a17-923b-ad9151c86dcf-config\") pod \"machine-api-operator-5694c8668f-r4x2n\" (UID: \"34f20ab2-1d2b-4a17-923b-ad9151c86dcf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4x2n" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.554163 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-config\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.554189 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-encryption-config\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.554213 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.554234 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-audit-dir\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.554259 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f73e1250-00a9-4975-9b10-21b456ad2ed0-profile-collector-cert\") pod \"catalog-operator-68c6474976-7wzsh\" (UID: \"f73e1250-00a9-4975-9b10-21b456ad2ed0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7wzsh" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.554281 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6czkr\" (UniqueName: \"kubernetes.io/projected/f73e1250-00a9-4975-9b10-21b456ad2ed0-kube-api-access-6czkr\") pod \"catalog-operator-68c6474976-7wzsh\" (UID: \"f73e1250-00a9-4975-9b10-21b456ad2ed0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7wzsh" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.554315 4991 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9dk4\" (UniqueName: \"kubernetes.io/projected/91e5a020-6f2d-4996-8af6-8f5541ce12b1-kube-api-access-w9dk4\") pod \"machine-approver-56656f9798-kv7nf\" (UID: \"91e5a020-6f2d-4996-8af6-8f5541ce12b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kv7nf" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.554343 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt6xs\" (UniqueName: \"kubernetes.io/projected/259d860c-d08f-4753-9e8b-f059fea942f5-kube-api-access-vt6xs\") pod \"route-controller-manager-6576b87f9c-q6v2d\" (UID: \"259d860c-d08f-4753-9e8b-f059fea942f5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.554367 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/34f20ab2-1d2b-4a17-923b-ad9151c86dcf-images\") pod \"machine-api-operator-5694c8668f-r4x2n\" (UID: \"34f20ab2-1d2b-4a17-923b-ad9151c86dcf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4x2n" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.554396 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-serving-cert\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.554716 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.554754 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw889\" (UniqueName: \"kubernetes.io/projected/b760848b-cfca-4339-94de-d3a74132a24f-kube-api-access-sw889\") pod \"authentication-operator-69f744f599-78v2g\" (UID: \"b760848b-cfca-4339-94de-d3a74132a24f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78v2g" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.554782 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95b7767c-0016-4b14-aed1-824548b4d6da-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2k9w4\" (UID: \"95b7767c-0016-4b14-aed1-824548b4d6da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2k9w4" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.554841 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b760848b-cfca-4339-94de-d3a74132a24f-service-ca-bundle\") pod \"authentication-operator-69f744f599-78v2g\" (UID: \"b760848b-cfca-4339-94de-d3a74132a24f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78v2g" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.555308 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/259d860c-d08f-4753-9e8b-f059fea942f5-config\") pod \"route-controller-manager-6576b87f9c-q6v2d\" (UID: \"259d860c-d08f-4753-9e8b-f059fea942f5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.556759 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-audit-policies\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.557151 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-etcd-client\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.558057 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.558437 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.558534 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.558876 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mb78" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.560534 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.560716 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.560819 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-encryption-config\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.561147 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.561395 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95b7767c-0016-4b14-aed1-824548b4d6da-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2k9w4\" (UID: \"95b7767c-0016-4b14-aed1-824548b4d6da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2k9w4" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.561796 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.561890 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.562369 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34f20ab2-1d2b-4a17-923b-ad9151c86dcf-config\") pod \"machine-api-operator-5694c8668f-r4x2n\" (UID: \"34f20ab2-1d2b-4a17-923b-ad9151c86dcf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4x2n" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.562758 
4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.563171 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-serving-cert\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.568811 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/259d860c-d08f-4753-9e8b-f059fea942f5-serving-cert\") pod \"route-controller-manager-6576b87f9c-q6v2d\" (UID: \"259d860c-d08f-4753-9e8b-f059fea942f5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.575443 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.578462 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.585683 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fr8gc"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.587055 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.589890 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xvz8h"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.591239 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xs8f" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.591755 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fr8gc" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.591940 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ftbbj"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.592166 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-xvz8h" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.596333 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.600101 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4v4rm"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.600295 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ftbbj" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.605727 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.606103 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpqf9"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.606894 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-62rs2"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.607675 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dmrh5"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.607705 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsdb4"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.607753 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.607765 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-r4x2n"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.607776 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jqqrh"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.607785 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.607793 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7wzsh"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.607866 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4v4rm" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.607978 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vql8m"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.608020 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.608407 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2k9w4"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.608428 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2t8m7"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.608438 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7fnch"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.608449 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wcmnr"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.608480 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-d5tqb"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.608913 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpqf9" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.610210 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-d5tqb" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.619021 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tqxb6"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.619161 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.622988 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfr46"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.627531 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.627735 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hz2qk"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.631111 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l2tz9"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.634849 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-78v2g"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.635698 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-x6kbg"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.637146 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2tsv6"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.642727 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zznwr"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.644274 4991 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.644289 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vfsmj"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.652250 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6g68t"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.654154 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9wn"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.655690 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-audit\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.655788 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/875831c4-1354-40bb-9b36-f6f2a6b6d79c-proxy-tls\") pod \"machine-config-controller-84d6567774-vfsmj\" (UID: \"875831c4-1354-40bb-9b36-f6f2a6b6d79c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vfsmj" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.655876 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-etcd-serving-ca\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.655976 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00e00a41-260b-45b5-a54d-c18ded602aa6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dmrh5\" (UID: \"00e00a41-260b-45b5-a54d-c18ded602aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.656063 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwckm\" (UniqueName: \"kubernetes.io/projected/c975ee4e-229f-417c-ae32-d56eae89c4c1-kube-api-access-zwckm\") pod \"dns-operator-744455d44c-wcmnr\" (UID: \"c975ee4e-229f-417c-ae32-d56eae89c4c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-wcmnr" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.656138 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c975ee4e-229f-417c-ae32-d56eae89c4c1-metrics-tls\") pod \"dns-operator-744455d44c-wcmnr\" (UID: \"c975ee4e-229f-417c-ae32-d56eae89c4c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-wcmnr" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.656207 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/385e20fa-2a7a-4ab3-97e6-b59a0a58cbe3-config\") pod \"kube-controller-manager-operator-78b949d7b-kx9wn\" (UID: \"385e20fa-2a7a-4ab3-97e6-b59a0a58cbe3\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9wn" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.656299 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-etcd-client\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.656375 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e34824e5-4b59-4bae-b803-39679d6eab2e-serving-cert\") pod \"etcd-operator-b45778765-d2gfk\" (UID: \"e34824e5-4b59-4bae-b803-39679d6eab2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.656444 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shklh\" (UniqueName: \"kubernetes.io/projected/875831c4-1354-40bb-9b36-f6f2a6b6d79c-kube-api-access-shklh\") pod \"machine-config-controller-84d6567774-vfsmj\" (UID: \"875831c4-1354-40bb-9b36-f6f2a6b6d79c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vfsmj" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.656510 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e34824e5-4b59-4bae-b803-39679d6eab2e-etcd-client\") pod \"etcd-operator-b45778765-d2gfk\" (UID: \"e34824e5-4b59-4bae-b803-39679d6eab2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.656580 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c06a05d9-0a6c-4e0a-88e8-023e829f245a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hz2qk\" (UID: \"c06a05d9-0a6c-4e0a-88e8-023e829f245a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.656699 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-m5krf"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.656624 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-audit\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.657725 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-etcd-serving-ca\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.657796 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-6w94h"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.658515 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d2gfk"] Sep 29 09:40:03 crc 
kubenswrapper[4991]: I0929 09:40:03.658655 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-m5krf" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.658799 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6w94h" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.658666 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whqvm\" (UniqueName: \"kubernetes.io/projected/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-kube-api-access-whqvm\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659275 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-encryption-config\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659325 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-config\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659358 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659387 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-audit-dir\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659412 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f73e1250-00a9-4975-9b10-21b456ad2ed0-profile-collector-cert\") pod \"catalog-operator-68c6474976-7wzsh\" (UID: \"f73e1250-00a9-4975-9b10-21b456ad2ed0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7wzsh" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659443 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6czkr\" (UniqueName: \"kubernetes.io/projected/f73e1250-00a9-4975-9b10-21b456ad2ed0-kube-api-access-6czkr\") pod \"catalog-operator-68c6474976-7wzsh\" (UID: \"f73e1250-00a9-4975-9b10-21b456ad2ed0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7wzsh" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659474 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e34824e5-4b59-4bae-b803-39679d6eab2e-etcd-ca\") pod \"etcd-operator-b45778765-d2gfk\" (UID: 
\"e34824e5-4b59-4bae-b803-39679d6eab2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659500 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9dk4\" (UniqueName: \"kubernetes.io/projected/91e5a020-6f2d-4996-8af6-8f5541ce12b1-kube-api-access-w9dk4\") pod \"machine-approver-56656f9798-kv7nf\" (UID: \"91e5a020-6f2d-4996-8af6-8f5541ce12b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kv7nf" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659562 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e34824e5-4b59-4bae-b803-39679d6eab2e-etcd-service-ca\") pod \"etcd-operator-b45778765-d2gfk\" (UID: \"e34824e5-4b59-4bae-b803-39679d6eab2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659593 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/875831c4-1354-40bb-9b36-f6f2a6b6d79c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vfsmj\" (UID: \"875831c4-1354-40bb-9b36-f6f2a6b6d79c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vfsmj" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659620 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c06a05d9-0a6c-4e0a-88e8-023e829f245a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hz2qk\" (UID: \"c06a05d9-0a6c-4e0a-88e8-023e829f245a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659647 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzxs6\" (UniqueName: \"kubernetes.io/projected/00e00a41-260b-45b5-a54d-c18ded602aa6-kube-api-access-rzxs6\") pod \"controller-manager-879f6c89f-dmrh5\" (UID: \"00e00a41-260b-45b5-a54d-c18ded602aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659672 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/91e5a020-6f2d-4996-8af6-8f5541ce12b1-machine-approver-tls\") pod \"machine-approver-56656f9798-kv7nf\" (UID: \"91e5a020-6f2d-4996-8af6-8f5541ce12b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kv7nf" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659692 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0c1005c-2a71-47c1-8f5a-75fff673d6ce-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bsdb4\" (UID: \"a0c1005c-2a71-47c1-8f5a-75fff673d6ce\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsdb4" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659715 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e34824e5-4b59-4bae-b803-39679d6eab2e-config\") pod \"etcd-operator-b45778765-d2gfk\" (UID: 
\"e34824e5-4b59-4bae-b803-39679d6eab2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659744 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f73e1250-00a9-4975-9b10-21b456ad2ed0-srv-cert\") pod \"catalog-operator-68c6474976-7wzsh\" (UID: \"f73e1250-00a9-4975-9b10-21b456ad2ed0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7wzsh" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659763 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00e00a41-260b-45b5-a54d-c18ded602aa6-serving-cert\") pod \"controller-manager-879f6c89f-dmrh5\" (UID: \"00e00a41-260b-45b5-a54d-c18ded602aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659785 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/385e20fa-2a7a-4ab3-97e6-b59a0a58cbe3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kx9wn\" (UID: \"385e20fa-2a7a-4ab3-97e6-b59a0a58cbe3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9wn" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659832 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-image-import-ca\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659877 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-serving-cert\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659909 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nznvg\" (UniqueName: \"kubernetes.io/projected/d949b064-ca4d-454e-a0d9-2aa9dd40d4e1-kube-api-access-nznvg\") pod \"downloads-7954f5f757-jqqrh\" (UID: \"d949b064-ca4d-454e-a0d9-2aa9dd40d4e1\") " pod="openshift-console/downloads-7954f5f757-jqqrh" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659933 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/91e5a020-6f2d-4996-8af6-8f5541ce12b1-auth-proxy-config\") pod \"machine-approver-56656f9798-kv7nf\" (UID: \"91e5a020-6f2d-4996-8af6-8f5541ce12b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kv7nf" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.659979 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0c1005c-2a71-47c1-8f5a-75fff673d6ce-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bsdb4\" (UID: \"a0c1005c-2a71-47c1-8f5a-75fff673d6ce\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsdb4" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.660004 4991 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vl5w\" (UniqueName: \"kubernetes.io/projected/e34824e5-4b59-4bae-b803-39679d6eab2e-kube-api-access-5vl5w\") pod \"etcd-operator-b45778765-d2gfk\" (UID: \"e34824e5-4b59-4bae-b803-39679d6eab2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.660032 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00e00a41-260b-45b5-a54d-c18ded602aa6-config\") pod \"controller-manager-879f6c89f-dmrh5\" (UID: \"00e00a41-260b-45b5-a54d-c18ded602aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.660055 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00e00a41-260b-45b5-a54d-c18ded602aa6-client-ca\") pod \"controller-manager-879f6c89f-dmrh5\" (UID: \"00e00a41-260b-45b5-a54d-c18ded602aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.660078 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91e5a020-6f2d-4996-8af6-8f5541ce12b1-config\") pod \"machine-approver-56656f9798-kv7nf\" (UID: \"91e5a020-6f2d-4996-8af6-8f5541ce12b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kv7nf" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.660105 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8288n\" (UniqueName: \"kubernetes.io/projected/c06a05d9-0a6c-4e0a-88e8-023e829f245a-kube-api-access-8288n\") pod \"marketplace-operator-79b997595-hz2qk\" (UID: \"c06a05d9-0a6c-4e0a-88e8-023e829f245a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.660132 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/385e20fa-2a7a-4ab3-97e6-b59a0a58cbe3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kx9wn\" (UID: \"385e20fa-2a7a-4ab3-97e6-b59a0a58cbe3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9wn" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.660160 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a0c1005c-2a71-47c1-8f5a-75fff673d6ce-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bsdb4\" (UID: \"a0c1005c-2a71-47c1-8f5a-75fff673d6ce\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsdb4" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.660210 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dhmk\" (UniqueName: \"kubernetes.io/projected/a0c1005c-2a71-47c1-8f5a-75fff673d6ce-kube-api-access-8dhmk\") pod \"cluster-image-registry-operator-dc59b4c8b-bsdb4\" (UID: \"a0c1005c-2a71-47c1-8f5a-75fff673d6ce\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsdb4" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.660794 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-audit-dir\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.661491 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-config\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.661630 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-image-import-ca\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.661643 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91e5a020-6f2d-4996-8af6-8f5541ce12b1-config\") pod \"machine-approver-56656f9798-kv7nf\" (UID: \"91e5a020-6f2d-4996-8af6-8f5541ce12b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kv7nf" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.662053 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00e00a41-260b-45b5-a54d-c18ded602aa6-client-ca\") pod \"controller-manager-879f6c89f-dmrh5\" (UID: \"00e00a41-260b-45b5-a54d-c18ded602aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.662112 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00e00a41-260b-45b5-a54d-c18ded602aa6-config\") pod \"controller-manager-879f6c89f-dmrh5\" (UID: \"00e00a41-260b-45b5-a54d-c18ded602aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.662176 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-encryption-config\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.662647 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.662819 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/91e5a020-6f2d-4996-8af6-8f5541ce12b1-auth-proxy-config\") pod \"machine-approver-56656f9798-kv7nf\" (UID: \"91e5a020-6f2d-4996-8af6-8f5541ce12b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kv7nf" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.663559 4991 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-tls" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.663769 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0c1005c-2a71-47c1-8f5a-75fff673d6ce-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bsdb4\" (UID: \"a0c1005c-2a71-47c1-8f5a-75fff673d6ce\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsdb4" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.663998 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-etcd-client\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.664574 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f73e1250-00a9-4975-9b10-21b456ad2ed0-srv-cert\") pod \"catalog-operator-68c6474976-7wzsh\" (UID: \"f73e1250-00a9-4975-9b10-21b456ad2ed0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7wzsh" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.665103 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-serving-cert\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.665283 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00e00a41-260b-45b5-a54d-c18ded602aa6-serving-cert\") pod \"controller-manager-879f6c89f-dmrh5\" (UID: \"00e00a41-260b-45b5-a54d-c18ded602aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.665370 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/91e5a020-6f2d-4996-8af6-8f5541ce12b1-machine-approver-tls\") pod \"machine-approver-56656f9798-kv7nf\" (UID: \"91e5a020-6f2d-4996-8af6-8f5541ce12b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kv7nf" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.665550 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0c1005c-2a71-47c1-8f5a-75fff673d6ce-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bsdb4\" (UID: \"a0c1005c-2a71-47c1-8f5a-75fff673d6ce\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsdb4" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.666360 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00e00a41-260b-45b5-a54d-c18ded602aa6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dmrh5\" (UID: \"00e00a41-260b-45b5-a54d-c18ded602aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.666553 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdthv"] 
Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.667460 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f73e1250-00a9-4975-9b10-21b456ad2ed0-profile-collector-cert\") pod \"catalog-operator-68c6474976-7wzsh\" (UID: \"f73e1250-00a9-4975-9b10-21b456ad2ed0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7wzsh" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.667909 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f9tdw"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.669777 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cpc86"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.670600 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-d5tqb"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.671858 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ftbbj"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.673147 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.676688 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-62rs2"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.678348 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4v4rm"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.678383 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fr8gc"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.679662 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mb78"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.681918 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-m5krf"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.683379 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5xs8f"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.683725 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.685616 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpqf9"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.688484 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gm965"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.691981 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gm965"] Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.692126 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gm965" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.703557 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.723525 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.744367 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.761107 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/875831c4-1354-40bb-9b36-f6f2a6b6d79c-proxy-tls\") pod \"machine-config-controller-84d6567774-vfsmj\" (UID: \"875831c4-1354-40bb-9b36-f6f2a6b6d79c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vfsmj" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.761148 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwckm\" (UniqueName: \"kubernetes.io/projected/c975ee4e-229f-417c-ae32-d56eae89c4c1-kube-api-access-zwckm\") pod \"dns-operator-744455d44c-wcmnr\" (UID: \"c975ee4e-229f-417c-ae32-d56eae89c4c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-wcmnr" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.761174 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c975ee4e-229f-417c-ae32-d56eae89c4c1-metrics-tls\") pod \"dns-operator-744455d44c-wcmnr\" (UID: \"c975ee4e-229f-417c-ae32-d56eae89c4c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-wcmnr" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.761214 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/385e20fa-2a7a-4ab3-97e6-b59a0a58cbe3-config\") pod \"kube-controller-manager-operator-78b949d7b-kx9wn\" (UID: \"385e20fa-2a7a-4ab3-97e6-b59a0a58cbe3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9wn" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.761266 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e34824e5-4b59-4bae-b803-39679d6eab2e-serving-cert\") pod \"etcd-operator-b45778765-d2gfk\" (UID: \"e34824e5-4b59-4bae-b803-39679d6eab2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.761286 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e34824e5-4b59-4bae-b803-39679d6eab2e-etcd-client\") pod \"etcd-operator-b45778765-d2gfk\" (UID: \"e34824e5-4b59-4bae-b803-39679d6eab2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.761303 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shklh\" (UniqueName: \"kubernetes.io/projected/875831c4-1354-40bb-9b36-f6f2a6b6d79c-kube-api-access-shklh\") pod \"machine-config-controller-84d6567774-vfsmj\" (UID: \"875831c4-1354-40bb-9b36-f6f2a6b6d79c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vfsmj" Sep 29 
09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.761329 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c06a05d9-0a6c-4e0a-88e8-023e829f245a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hz2qk\" (UID: \"c06a05d9-0a6c-4e0a-88e8-023e829f245a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.761401 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e34824e5-4b59-4bae-b803-39679d6eab2e-etcd-ca\") pod \"etcd-operator-b45778765-d2gfk\" (UID: \"e34824e5-4b59-4bae-b803-39679d6eab2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.761446 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e34824e5-4b59-4bae-b803-39679d6eab2e-etcd-service-ca\") pod \"etcd-operator-b45778765-d2gfk\" (UID: \"e34824e5-4b59-4bae-b803-39679d6eab2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.761467 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/875831c4-1354-40bb-9b36-f6f2a6b6d79c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vfsmj\" (UID: \"875831c4-1354-40bb-9b36-f6f2a6b6d79c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vfsmj" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.761494 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c06a05d9-0a6c-4e0a-88e8-023e829f245a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hz2qk\" (UID: \"c06a05d9-0a6c-4e0a-88e8-023e829f245a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.761514 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e34824e5-4b59-4bae-b803-39679d6eab2e-config\") pod \"etcd-operator-b45778765-d2gfk\" (UID: \"e34824e5-4b59-4bae-b803-39679d6eab2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.761540 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/385e20fa-2a7a-4ab3-97e6-b59a0a58cbe3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kx9wn\" (UID: \"385e20fa-2a7a-4ab3-97e6-b59a0a58cbe3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9wn" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.761595 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vl5w\" (UniqueName: \"kubernetes.io/projected/e34824e5-4b59-4bae-b803-39679d6eab2e-kube-api-access-5vl5w\") pod \"etcd-operator-b45778765-d2gfk\" (UID: \"e34824e5-4b59-4bae-b803-39679d6eab2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.761614 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8288n\" 
(UniqueName: \"kubernetes.io/projected/c06a05d9-0a6c-4e0a-88e8-023e829f245a-kube-api-access-8288n\") pod \"marketplace-operator-79b997595-hz2qk\" (UID: \"c06a05d9-0a6c-4e0a-88e8-023e829f245a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.761630 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/385e20fa-2a7a-4ab3-97e6-b59a0a58cbe3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kx9wn\" (UID: \"385e20fa-2a7a-4ab3-97e6-b59a0a58cbe3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9wn" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.762112 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/385e20fa-2a7a-4ab3-97e6-b59a0a58cbe3-config\") pod \"kube-controller-manager-operator-78b949d7b-kx9wn\" (UID: \"385e20fa-2a7a-4ab3-97e6-b59a0a58cbe3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9wn" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.762313 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e34824e5-4b59-4bae-b803-39679d6eab2e-config\") pod \"etcd-operator-b45778765-d2gfk\" (UID: \"e34824e5-4b59-4bae-b803-39679d6eab2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.762813 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/875831c4-1354-40bb-9b36-f6f2a6b6d79c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vfsmj\" (UID: \"875831c4-1354-40bb-9b36-f6f2a6b6d79c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vfsmj" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.764108 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e34824e5-4b59-4bae-b803-39679d6eab2e-etcd-client\") pod \"etcd-operator-b45778765-d2gfk\" (UID: \"e34824e5-4b59-4bae-b803-39679d6eab2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.765005 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.765208 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e34824e5-4b59-4bae-b803-39679d6eab2e-serving-cert\") pod \"etcd-operator-b45778765-d2gfk\" (UID: \"e34824e5-4b59-4bae-b803-39679d6eab2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.767001 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/385e20fa-2a7a-4ab3-97e6-b59a0a58cbe3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kx9wn\" (UID: \"385e20fa-2a7a-4ab3-97e6-b59a0a58cbe3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9wn" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.772761 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e34824e5-4b59-4bae-b803-39679d6eab2e-etcd-service-ca\") pod \"etcd-operator-b45778765-d2gfk\" (UID: \"e34824e5-4b59-4bae-b803-39679d6eab2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.784452 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.792440 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e34824e5-4b59-4bae-b803-39679d6eab2e-etcd-ca\") pod \"etcd-operator-b45778765-d2gfk\" (UID: \"e34824e5-4b59-4bae-b803-39679d6eab2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.803665 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.823976 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.844143 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.884040 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.903475 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.924123 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.936271 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c06a05d9-0a6c-4e0a-88e8-023e829f245a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hz2qk\" (UID: \"c06a05d9-0a6c-4e0a-88e8-023e829f245a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.944224 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.971753 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.983722 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c06a05d9-0a6c-4e0a-88e8-023e829f245a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hz2qk\" (UID: \"c06a05d9-0a6c-4e0a-88e8-023e829f245a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" Sep 29 09:40:03 crc kubenswrapper[4991]: I0929 09:40:03.983944 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.002978 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Sep 29 
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.023443 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.043478 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.055632 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c975ee4e-229f-417c-ae32-d56eae89c4c1-metrics-tls\") pod \"dns-operator-744455d44c-wcmnr\" (UID: \"c975ee4e-229f-417c-ae32-d56eae89c4c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-wcmnr"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.063015 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.085724 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.103751 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.125030 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.137496 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/875831c4-1354-40bb-9b36-f6f2a6b6d79c-proxy-tls\") pod \"machine-config-controller-84d6567774-vfsmj\" (UID: \"875831c4-1354-40bb-9b36-f6f2a6b6d79c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vfsmj"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.146290 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.165014 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.184253 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.224819 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.244439 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.266163 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.304575 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.311716 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngccf\" (UniqueName: \"kubernetes.io/projected/24730e1e-1b87-4a46-8f01-6cd71ee44f3e-kube-api-access-ngccf\") pod \"apiserver-7bbb656c7d-x6zph\" (UID: \"24730e1e-1b87-4a46-8f01-6cd71ee44f3e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.323512 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.344743 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.384005 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.405413 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.413979 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.424167 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.444648 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.480618 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd25s\" (UniqueName: \"kubernetes.io/projected/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-kube-api-access-vd25s\") pod \"oauth-openshift-558db77b4-f9tdw\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") " pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.499967 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kkcz\" (UniqueName: \"kubernetes.io/projected/95b7767c-0016-4b14-aed1-824548b4d6da-kube-api-access-4kkcz\") pod \"openshift-apiserver-operator-796bbdcf4f-2k9w4\" (UID: \"95b7767c-0016-4b14-aed1-824548b4d6da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2k9w4"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.536871 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.542231 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw889\" (UniqueName: \"kubernetes.io/projected/b760848b-cfca-4339-94de-d3a74132a24f-kube-api-access-sw889\") pod \"authentication-operator-69f744f599-78v2g\" (UID: \"b760848b-cfca-4339-94de-d3a74132a24f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78v2g"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.546311 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Sep 29 09:40:04 crc kubenswrapper[4991]: E0929 09:40:04.558654 4991 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition
Sep 29 09:40:04 crc kubenswrapper[4991]: E0929 09:40:04.558763 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34f20ab2-1d2b-4a17-923b-ad9151c86dcf-images podName:34f20ab2-1d2b-4a17-923b-ad9151c86dcf nodeName:}" failed. No retries permitted until 2025-09-29 09:40:05.058741926 +0000 UTC m=+140.914669954 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/34f20ab2-1d2b-4a17-923b-ad9151c86dcf-images") pod "machine-api-operator-5694c8668f-r4x2n" (UID: "34f20ab2-1d2b-4a17-923b-ad9151c86dcf") : failed to sync configmap cache: timed out waiting for the condition
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.564137 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.587414 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.602268 4991 request.go:700] Waited for 1.009496579s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.605204 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.624557 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.649075 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.664117 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph"]
Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.665210 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2k9w4"
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2k9w4" Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.665476 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.685838 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.704392 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.724932 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.725787 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f9tdw"] Sep 29 09:40:04 crc kubenswrapper[4991]: W0929 09:40:04.732667 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod304e7533_e5b5_4db1_8480_3b7cf9b4d58d.slice/crio-236b611dcf4f163b9dda06da7fff9dd7f09e31f227782a1c7ce281596271214b WatchSource:0}: Error finding container 236b611dcf4f163b9dda06da7fff9dd7f09e31f227782a1c7ce281596271214b: Status 404 returned error can't find the container with id 236b611dcf4f163b9dda06da7fff9dd7f09e31f227782a1c7ce281596271214b Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.743297 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.743576 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" event={"ID":"304e7533-e5b5-4db1-8480-3b7cf9b4d58d","Type":"ContainerStarted","Data":"236b611dcf4f163b9dda06da7fff9dd7f09e31f227782a1c7ce281596271214b"} Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.743738 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-78v2g" Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.744514 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" event={"ID":"24730e1e-1b87-4a46-8f01-6cd71ee44f3e","Type":"ContainerStarted","Data":"11d94fc169f501a21d05efd0e06d27426bd51f4c17cd46f211dd23c47b11b038"} Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.764211 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.783987 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.804536 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.825132 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.855237 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2k9w4"] Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.855756 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.863755 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.885960 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.904887 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.925103 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.951013 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.963596 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Sep 29 09:40:04 crc kubenswrapper[4991]: I0929 09:40:04.984486 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.005333 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.016383 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-78v2g"] Sep 29 09:40:05 crc kubenswrapper[4991]: W0929 09:40:05.022830 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb760848b_cfca_4339_94de_d3a74132a24f.slice/crio-af7a9db498d5fdc7932aca78792caa13866e1a93c1dfb0cb676f0f574824dfff WatchSource:0}: Error finding 
container af7a9db498d5fdc7932aca78792caa13866e1a93c1dfb0cb676f0f574824dfff: Status 404 returned error can't find the container with id af7a9db498d5fdc7932aca78792caa13866e1a93c1dfb0cb676f0f574824dfff Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.024384 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.043788 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.064665 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.083621 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/34f20ab2-1d2b-4a17-923b-ad9151c86dcf-images\") pod \"machine-api-operator-5694c8668f-r4x2n\" (UID: \"34f20ab2-1d2b-4a17-923b-ad9151c86dcf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4x2n" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.090672 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.105061 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.123897 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.144191 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.165466 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.183930 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.204099 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.224257 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.243497 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.264076 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.285202 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.304844 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.324760 4991 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.345494 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Sep 29 09:40:05 crc kubenswrapper[4991]: E0929 09:40:05.380756 4991 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 29 09:40:05 crc kubenswrapper[4991]: E0929 09:40:05.380804 4991 projected.go:194] Error preparing data for projected volume kube-api-access-g99hx for pod openshift-machine-api/machine-api-operator-5694c8668f-r4x2n: failed to sync configmap cache: timed out waiting for the condition Sep 29 09:40:05 crc kubenswrapper[4991]: E0929 09:40:05.380917 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34f20ab2-1d2b-4a17-923b-ad9151c86dcf-kube-api-access-g99hx podName:34f20ab2-1d2b-4a17-923b-ad9151c86dcf nodeName:}" failed. No retries permitted until 2025-09-29 09:40:05.880873005 +0000 UTC m=+141.736801033 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-g99hx" (UniqueName: "kubernetes.io/projected/34f20ab2-1d2b-4a17-923b-ad9151c86dcf-kube-api-access-g99hx") pod "machine-api-operator-5694c8668f-r4x2n" (UID: "34f20ab2-1d2b-4a17-923b-ad9151c86dcf") : failed to sync configmap cache: timed out waiting for the condition Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.383411 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.383412 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whqvm\" (UniqueName: \"kubernetes.io/projected/7c72b3f6-dc79-4c2c-a0e8-44c41ff60370-kube-api-access-whqvm\") pod \"apiserver-76f77b778f-tqxb6\" (UID: \"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370\") " pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.403967 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.423268 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.443804 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.464518 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.484789 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Sep 29 09:40:05 crc kubenswrapper[4991]: E0929 09:40:05.519635 4991 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 29 09:40:05 crc kubenswrapper[4991]: E0929 09:40:05.519685 4991 projected.go:194] Error preparing data for projected volume kube-api-access-vt6xs for pod openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d: failed to sync configmap cache: timed out waiting for the condition Sep 29 09:40:05 crc kubenswrapper[4991]: E0929 
09:40:05.519786 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/259d860c-d08f-4753-9e8b-f059fea942f5-kube-api-access-vt6xs podName:259d860c-d08f-4753-9e8b-f059fea942f5 nodeName:}" failed. No retries permitted until 2025-09-29 09:40:06.019754538 +0000 UTC m=+141.875682566 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vt6xs" (UniqueName: "kubernetes.io/projected/259d860c-d08f-4753-9e8b-f059fea942f5-kube-api-access-vt6xs") pod "route-controller-manager-6576b87f9c-q6v2d" (UID: "259d860c-d08f-4753-9e8b-f059fea942f5") : failed to sync configmap cache: timed out waiting for the condition Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.520799 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dhmk\" (UniqueName: \"kubernetes.io/projected/a0c1005c-2a71-47c1-8f5a-75fff673d6ce-kube-api-access-8dhmk\") pod \"cluster-image-registry-operator-dc59b4c8b-bsdb4\" (UID: \"a0c1005c-2a71-47c1-8f5a-75fff673d6ce\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsdb4" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.542906 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzxs6\" (UniqueName: \"kubernetes.io/projected/00e00a41-260b-45b5-a54d-c18ded602aa6-kube-api-access-rzxs6\") pod \"controller-manager-879f6c89f-dmrh5\" (UID: \"00e00a41-260b-45b5-a54d-c18ded602aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.551444 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.568490 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a0c1005c-2a71-47c1-8f5a-75fff673d6ce-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bsdb4\" (UID: \"a0c1005c-2a71-47c1-8f5a-75fff673d6ce\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsdb4" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.588749 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9dk4\" (UniqueName: \"kubernetes.io/projected/91e5a020-6f2d-4996-8af6-8f5541ce12b1-kube-api-access-w9dk4\") pod \"machine-approver-56656f9798-kv7nf\" (UID: \"91e5a020-6f2d-4996-8af6-8f5541ce12b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kv7nf" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.606503 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6czkr\" (UniqueName: \"kubernetes.io/projected/f73e1250-00a9-4975-9b10-21b456ad2ed0-kube-api-access-6czkr\") pod \"catalog-operator-68c6474976-7wzsh\" (UID: \"f73e1250-00a9-4975-9b10-21b456ad2ed0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7wzsh" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.620349 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nznvg\" (UniqueName: \"kubernetes.io/projected/d949b064-ca4d-454e-a0d9-2aa9dd40d4e1-kube-api-access-nznvg\") pod \"downloads-7954f5f757-jqqrh\" (UID: \"d949b064-ca4d-454e-a0d9-2aa9dd40d4e1\") " pod="openshift-console/downloads-7954f5f757-jqqrh" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.622491 4991 request.go:700] Waited for 
1.930025907s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.623083 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jqqrh" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.624928 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.644614 4991 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.653660 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.666261 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.707051 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwckm\" (UniqueName: \"kubernetes.io/projected/c975ee4e-229f-417c-ae32-d56eae89c4c1-kube-api-access-zwckm\") pod \"dns-operator-744455d44c-wcmnr\" (UID: \"c975ee4e-229f-417c-ae32-d56eae89c4c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-wcmnr" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.713572 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsdb4" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.723763 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shklh\" (UniqueName: \"kubernetes.io/projected/875831c4-1354-40bb-9b36-f6f2a6b6d79c-kube-api-access-shklh\") pod \"machine-config-controller-84d6567774-vfsmj\" (UID: \"875831c4-1354-40bb-9b36-f6f2a6b6d79c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vfsmj" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.741154 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/385e20fa-2a7a-4ab3-97e6-b59a0a58cbe3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kx9wn\" (UID: \"385e20fa-2a7a-4ab3-97e6-b59a0a58cbe3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9wn" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.743588 4991 util.go:30] "No sandbox for pod can be found. 
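[editor's note] The request.go:700 lines above ("Waited for 1.009496579s" and "Waited for 1.930025907s due to client-side throttling") show the kubelet's own API client queueing its reflector LIST calls behind a local token-bucket rate limiter during the startup burst; client-go logs any wait it considers long. A minimal stdlib-only Go sketch of that behavior; the QPS/burst values and the one-second logging threshold are illustrative assumptions, not values read from this log:

package main

import (
	"fmt"
	"time"
)

// throttle is a toy token bucket standing in for the client-side limiter.
type throttle struct{ tokens chan struct{} }

func newThrottle(qps, burst int) *throttle {
	t := &throttle{tokens: make(chan struct{}, burst)}
	for i := 0; i < burst; i++ {
		t.tokens <- struct{}{} // start with a full bucket
	}
	go func() {
		for range time.Tick(time.Second / time.Duration(qps)) {
			select {
			case t.tokens <- struct{}{}:
			default: // bucket already full
			}
		}
	}()
	return t
}

// wait blocks for a token and reports long waits, like request.go:700 above.
func (t *throttle) wait(req string) {
	start := time.Now()
	<-t.tokens
	if d := time.Since(start); d > time.Second {
		fmt.Printf("Waited for %s due to client-side throttling, request: %s\n", d, req)
	}
}

func main() {
	th := newThrottle(5, 10) // assumed limits
	for i := 0; i < 15; i++ {
		th.wait("GET:/api/v1/namespaces/demo/configmaps")
	}
	fmt.Println("done")
}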
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7wzsh" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.759086 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-78v2g" event={"ID":"b760848b-cfca-4339-94de-d3a74132a24f","Type":"ContainerStarted","Data":"8601d267faf6400b8dc6eeca29fc262b535831890160cebb9e97a6758b0c282d"} Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.759150 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-78v2g" event={"ID":"b760848b-cfca-4339-94de-d3a74132a24f","Type":"ContainerStarted","Data":"af7a9db498d5fdc7932aca78792caa13866e1a93c1dfb0cb676f0f574824dfff"} Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.763008 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vl5w\" (UniqueName: \"kubernetes.io/projected/e34824e5-4b59-4bae-b803-39679d6eab2e-kube-api-access-5vl5w\") pod \"etcd-operator-b45778765-d2gfk\" (UID: \"e34824e5-4b59-4bae-b803-39679d6eab2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.774299 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" event={"ID":"304e7533-e5b5-4db1-8480-3b7cf9b4d58d","Type":"ContainerStarted","Data":"1c97c0feab6ad8cc98607d8fc979a061d1b3bf619dbdbb1e145a16e5b8c50a4b"} Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.774540 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.785454 4991 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-f9tdw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" start-of-body= Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.785518 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" podUID="304e7533-e5b5-4db1-8480-3b7cf9b4d58d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.785907 4991 generic.go:334] "Generic (PLEG): container finished" podID="24730e1e-1b87-4a46-8f01-6cd71ee44f3e" containerID="2e240ef6e0e37bf892288d5e6ac923befe3565acaab1832015881396e4a6dc3b" exitCode=0 Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.786147 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" event={"ID":"24730e1e-1b87-4a46-8f01-6cd71ee44f3e","Type":"ContainerDied","Data":"2e240ef6e0e37bf892288d5e6ac923befe3565acaab1832015881396e4a6dc3b"} Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.787135 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8288n\" (UniqueName: \"kubernetes.io/projected/c06a05d9-0a6c-4e0a-88e8-023e829f245a-kube-api-access-8288n\") pod \"marketplace-operator-79b997595-hz2qk\" (UID: \"c06a05d9-0a6c-4e0a-88e8-023e829f245a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.788962 4991 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2k9w4" event={"ID":"95b7767c-0016-4b14-aed1-824548b4d6da","Type":"ContainerStarted","Data":"4c925bcb28a05a30d0028424dd193dd77f513744456581e41c564d58034b2784"} Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.789034 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2k9w4" event={"ID":"95b7767c-0016-4b14-aed1-824548b4d6da","Type":"ContainerStarted","Data":"1abdafb7a3987390c666557c73ae9992631827ec58d3b523bc9fdc13ad7b3a3b"} Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.789226 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dmrh5"] Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.800540 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9wn" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.845890 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.847231 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.847994 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kv7nf" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.847984 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/34f20ab2-1d2b-4a17-923b-ad9151c86dcf-images\") pod \"machine-api-operator-5694c8668f-r4x2n\" (UID: \"34f20ab2-1d2b-4a17-923b-ad9151c86dcf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4x2n" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.848822 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.851233 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jqqrh"] Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.852345 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.864525 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wcmnr" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.866616 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.872546 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vfsmj" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.883713 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.898678 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tqxb6"] Sep 29 09:40:05 crc kubenswrapper[4991]: W0929 09:40:05.904754 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91e5a020_6f2d_4996_8af6_8f5541ce12b1.slice/crio-81fc1855339b9c07b1f80bd4064749047f96fb11d928f4a2b37e62abd66aa834 WatchSource:0}: Error finding container 81fc1855339b9c07b1f80bd4064749047f96fb11d928f4a2b37e62abd66aa834: Status 404 returned error can't find the container with id 81fc1855339b9c07b1f80bd4064749047f96fb11d928f4a2b37e62abd66aa834 Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.904914 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/990a42a8-12fc-4a01-b6ec-a1f6b324cf05-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2tsv6\" (UID: \"990a42a8-12fc-4a01-b6ec-a1f6b324cf05\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2tsv6" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.904996 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjx7w\" (UniqueName: \"kubernetes.io/projected/966cac4a-795f-4f46-80c0-a788d0eda011-kube-api-access-jjx7w\") pod \"multus-admission-controller-857f4d67dd-2t8m7\" (UID: \"966cac4a-795f-4f46-80c0-a788d0eda011\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2t8m7" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.905068 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.905097 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/66ebaf9a-4d17-415b-8e97-87e85c5b287e-signing-key\") pod \"service-ca-9c57cc56f-7fnch\" (UID: \"66ebaf9a-4d17-415b-8e97-87e85c5b287e\") " pod="openshift-service-ca/service-ca-9c57cc56f-7fnch" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.905153 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a550587-a52e-4334-ad71-25c577afaaf4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hdthv\" (UID: \"2a550587-a52e-4334-ad71-25c577afaaf4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdthv" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.905181 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbnp5\" (UniqueName: 
\"kubernetes.io/projected/1cdcdd72-7bdf-4591-a4fe-9135d202369d-kube-api-access-tbnp5\") pod \"service-ca-operator-777779d784-cpc86\" (UID: \"1cdcdd72-7bdf-4591-a4fe-9135d202369d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpc86" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.905290 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feed9982-9c7d-445a-a105-02b3169809d1-config\") pod \"kube-apiserver-operator-766d6c64bb-vql8m\" (UID: \"feed9982-9c7d-445a-a105-02b3169809d1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vql8m" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.905369 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggw4s\" (UniqueName: \"kubernetes.io/projected/990a42a8-12fc-4a01-b6ec-a1f6b324cf05-kube-api-access-ggw4s\") pod \"package-server-manager-789f6589d5-2tsv6\" (UID: \"990a42a8-12fc-4a01-b6ec-a1f6b324cf05\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2tsv6" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.905438 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpbm2\" (UniqueName: \"kubernetes.io/projected/3dc9e79c-6e87-45aa-8285-e9a9737c54f3-kube-api-access-cpbm2\") pod \"migrator-59844c95c7-l2tz9\" (UID: \"3dc9e79c-6e87-45aa-8285-e9a9737c54f3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l2tz9" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.905513 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a550587-a52e-4334-ad71-25c577afaaf4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hdthv\" (UID: \"2a550587-a52e-4334-ad71-25c577afaaf4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdthv" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.905574 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/980e8489-6002-47dd-b6c4-1be37ee0bad9-bound-sa-token\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.905597 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/feed9982-9c7d-445a-a105-02b3169809d1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vql8m\" (UID: \"feed9982-9c7d-445a-a105-02b3169809d1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vql8m" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.905619 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/66ebaf9a-4d17-415b-8e97-87e85c5b287e-signing-cabundle\") pod \"service-ca-9c57cc56f-7fnch\" (UID: \"66ebaf9a-4d17-415b-8e97-87e85c5b287e\") " pod="openshift-service-ca/service-ca-9c57cc56f-7fnch" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.905712 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/966cac4a-795f-4f46-80c0-a788d0eda011-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2t8m7\" (UID: \"966cac4a-795f-4f46-80c0-a788d0eda011\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2t8m7" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.905785 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/980e8489-6002-47dd-b6c4-1be37ee0bad9-registry-tls\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.905836 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cdcdd72-7bdf-4591-a4fe-9135d202369d-serving-cert\") pod \"service-ca-operator-777779d784-cpc86\" (UID: \"1cdcdd72-7bdf-4591-a4fe-9135d202369d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpc86" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.905857 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cdcdd72-7bdf-4591-a4fe-9135d202369d-config\") pod \"service-ca-operator-777779d784-cpc86\" (UID: \"1cdcdd72-7bdf-4591-a4fe-9135d202369d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpc86" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.905897 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dklb5\" (UniqueName: \"kubernetes.io/projected/10127330-a343-4a95-99b1-ec121e70f79f-kube-api-access-dklb5\") pod \"kube-storage-version-migrator-operator-b67b599dd-zznwr\" (UID: \"10127330-a343-4a95-99b1-ec121e70f79f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zznwr" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.909156 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/980e8489-6002-47dd-b6c4-1be37ee0bad9-ca-trust-extracted\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.911459 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/980e8489-6002-47dd-b6c4-1be37ee0bad9-trusted-ca\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.911502 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5bvp\" (UniqueName: \"kubernetes.io/projected/66ebaf9a-4d17-415b-8e97-87e85c5b287e-kube-api-access-g5bvp\") pod \"service-ca-9c57cc56f-7fnch\" (UID: \"66ebaf9a-4d17-415b-8e97-87e85c5b287e\") " pod="openshift-service-ca/service-ca-9c57cc56f-7fnch" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.911586 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/2a550587-a52e-4334-ad71-25c577afaaf4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hdthv\" (UID: \"2a550587-a52e-4334-ad71-25c577afaaf4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdthv" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.911613 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10127330-a343-4a95-99b1-ec121e70f79f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zznwr\" (UID: \"10127330-a343-4a95-99b1-ec121e70f79f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zznwr" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.911672 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feed9982-9c7d-445a-a105-02b3169809d1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vql8m\" (UID: \"feed9982-9c7d-445a-a105-02b3169809d1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vql8m" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.912592 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/980e8489-6002-47dd-b6c4-1be37ee0bad9-installation-pull-secrets\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.912719 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g99hx\" (UniqueName: \"kubernetes.io/projected/34f20ab2-1d2b-4a17-923b-ad9151c86dcf-kube-api-access-g99hx\") pod \"machine-api-operator-5694c8668f-r4x2n\" (UID: \"34f20ab2-1d2b-4a17-923b-ad9151c86dcf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4x2n" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.912780 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/980e8489-6002-47dd-b6c4-1be37ee0bad9-registry-certificates\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.912821 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pklk2\" (UniqueName: \"kubernetes.io/projected/980e8489-6002-47dd-b6c4-1be37ee0bad9-kube-api-access-pklk2\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.912895 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10127330-a343-4a95-99b1-ec121e70f79f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zznwr\" (UID: \"10127330-a343-4a95-99b1-ec121e70f79f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zznwr" Sep 29 09:40:05 crc kubenswrapper[4991]: E0929 09:40:05.914595 4991 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:06.414577856 +0000 UTC m=+142.270505884 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.919714 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g99hx\" (UniqueName: \"kubernetes.io/projected/34f20ab2-1d2b-4a17-923b-ad9151c86dcf-kube-api-access-g99hx\") pod \"machine-api-operator-5694c8668f-r4x2n\" (UID: \"34f20ab2-1d2b-4a17-923b-ad9151c86dcf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r4x2n" Sep 29 09:40:05 crc kubenswrapper[4991]: W0929 09:40:05.935159 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c72b3f6_dc79_4c2c_a0e8_44c41ff60370.slice/crio-5c6cd8b308ceaad4dcd8bb5783f34e12e11c8ec6649d0b930ef0158063c5f5c9 WatchSource:0}: Error finding container 5c6cd8b308ceaad4dcd8bb5783f34e12e11c8ec6649d0b930ef0158063c5f5c9: Status 404 returned error can't find the container with id 5c6cd8b308ceaad4dcd8bb5783f34e12e11c8ec6649d0b930ef0158063c5f5c9 Sep 29 09:40:05 crc kubenswrapper[4991]: I0929 09:40:05.998104 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsdb4"] Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.017326 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.017709 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe-serving-cert\") pod \"openshift-config-operator-7777fb866f-5xs8f\" (UID: \"31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xs8f" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.017777 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/980e8489-6002-47dd-b6c4-1be37ee0bad9-trusted-ca\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.017805 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5bvp\" (UniqueName: \"kubernetes.io/projected/66ebaf9a-4d17-415b-8e97-87e85c5b287e-kube-api-access-g5bvp\") pod \"service-ca-9c57cc56f-7fnch\" (UID: \"66ebaf9a-4d17-415b-8e97-87e85c5b287e\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-7fnch" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.017881 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a550587-a52e-4334-ad71-25c577afaaf4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hdthv\" (UID: \"2a550587-a52e-4334-ad71-25c577afaaf4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdthv" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.017909 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10127330-a343-4a95-99b1-ec121e70f79f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zznwr\" (UID: \"10127330-a343-4a95-99b1-ec121e70f79f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zznwr" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.017979 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/011674e5-30e1-4632-aee7-d0dbb06e6824-console-serving-cert\") pod \"console-f9d7485db-62rs2\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.018046 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feed9982-9c7d-445a-a105-02b3169809d1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vql8m\" (UID: \"feed9982-9c7d-445a-a105-02b3169809d1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vql8m" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.018101 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5xs8f\" (UID: \"31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xs8f" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.018130 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-oauth-serving-cert\") pod \"console-f9d7485db-62rs2\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.018158 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1-images\") pod \"machine-config-operator-74547568cd-x6kbg\" (UID: \"a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x6kbg" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.018215 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/980e8489-6002-47dd-b6c4-1be37ee0bad9-registry-certificates\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.018272 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pklk2\" (UniqueName: \"kubernetes.io/projected/980e8489-6002-47dd-b6c4-1be37ee0bad9-kube-api-access-pklk2\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.018304 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10127330-a343-4a95-99b1-ec121e70f79f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zznwr\" (UID: \"10127330-a343-4a95-99b1-ec121e70f79f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zznwr" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.018350 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb79g\" (UniqueName: \"kubernetes.io/projected/011674e5-30e1-4632-aee7-d0dbb06e6824-kube-api-access-rb79g\") pod \"console-f9d7485db-62rs2\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.018392 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db7b6892-79f6-40a8-ac67-becd46acea4b-trusted-ca\") pod \"ingress-operator-5b745b69d9-fr8gc\" (UID: \"db7b6892-79f6-40a8-ac67-becd46acea4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fr8gc" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.018466 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-service-ca\") pod \"console-f9d7485db-62rs2\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.018988 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/990a42a8-12fc-4a01-b6ec-a1f6b324cf05-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2tsv6\" (UID: \"990a42a8-12fc-4a01-b6ec-a1f6b324cf05\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2tsv6" Sep 29 09:40:06 crc kubenswrapper[4991]: E0929 09:40:06.019052 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:06.519018039 +0000 UTC m=+142.374946067 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019208 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019239 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/66ebaf9a-4d17-415b-8e97-87e85c5b287e-signing-key\") pod \"service-ca-9c57cc56f-7fnch\" (UID: \"66ebaf9a-4d17-415b-8e97-87e85c5b287e\") " pod="openshift-service-ca/service-ca-9c57cc56f-7fnch" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019285 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-x6kbg\" (UID: \"a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x6kbg" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019314 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbnp5\" (UniqueName: \"kubernetes.io/projected/1cdcdd72-7bdf-4591-a4fe-9135d202369d-kube-api-access-tbnp5\") pod \"service-ca-operator-777779d784-cpc86\" (UID: \"1cdcdd72-7bdf-4591-a4fe-9135d202369d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpc86" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019332 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1-proxy-tls\") pod \"machine-config-operator-74547568cd-x6kbg\" (UID: \"a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x6kbg" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019374 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db7b6892-79f6-40a8-ac67-becd46acea4b-metrics-tls\") pod \"ingress-operator-5b745b69d9-fr8gc\" (UID: \"db7b6892-79f6-40a8-ac67-becd46acea4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fr8gc" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019405 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3c8ca796-c289-480d-b50b-ca8180f0da30-node-bootstrap-token\") pod \"machine-config-server-6w94h\" (UID: \"3c8ca796-c289-480d-b50b-ca8180f0da30\") " pod="openshift-machine-config-operator/machine-config-server-6w94h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 
09:40:06.019455 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0374dfd2-5829-4d52-bf0d-5aaafde9b773-serving-cert\") pod \"console-operator-58897d9998-4v4rm\" (UID: \"0374dfd2-5829-4d52-bf0d-5aaafde9b773\") " pod="openshift-console-operator/console-operator-58897d9998-4v4rm" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019482 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpbm2\" (UniqueName: \"kubernetes.io/projected/3dc9e79c-6e87-45aa-8285-e9a9737c54f3-kube-api-access-cpbm2\") pod \"migrator-59844c95c7-l2tz9\" (UID: \"3dc9e79c-6e87-45aa-8285-e9a9737c54f3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l2tz9" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019504 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4462bce4-6cf5-4d65-890e-4d1c8b0523fa-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ftbbj\" (UID: \"4462bce4-6cf5-4d65-890e-4d1c8b0523fa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ftbbj" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019535 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/980e8489-6002-47dd-b6c4-1be37ee0bad9-bound-sa-token\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019556 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/feed9982-9c7d-445a-a105-02b3169809d1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vql8m\" (UID: \"feed9982-9c7d-445a-a105-02b3169809d1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vql8m" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019576 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/877cd408-2088-468f-bff8-58c539e077a9-csi-data-dir\") pod \"csi-hostpathplugin-gm965\" (UID: \"877cd408-2088-468f-bff8-58c539e077a9\") " pod="hostpath-provisioner/csi-hostpathplugin-gm965" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019593 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvmcr\" (UniqueName: \"kubernetes.io/projected/4462bce4-6cf5-4d65-890e-4d1c8b0523fa-kube-api-access-rvmcr\") pod \"control-plane-machine-set-operator-78cbb6b69f-ftbbj\" (UID: \"4462bce4-6cf5-4d65-890e-4d1c8b0523fa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ftbbj" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019614 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tppcc\" (UniqueName: \"kubernetes.io/projected/73a7f61a-69f4-4f78-8f1e-1351448b129a-kube-api-access-tppcc\") pod \"router-default-5444994796-xvz8h\" (UID: \"73a7f61a-69f4-4f78-8f1e-1351448b129a\") " pod="openshift-ingress/router-default-5444994796-xvz8h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019638 
4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f8cdd615-3c27-4c6e-bcd0-24c19d9db1b1-srv-cert\") pod \"olm-operator-6b444d44fb-5mb78\" (UID: \"f8cdd615-3c27-4c6e-bcd0-24c19d9db1b1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mb78" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019669 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c999k\" (UniqueName: \"kubernetes.io/projected/877cd408-2088-468f-bff8-58c539e077a9-kube-api-access-c999k\") pod \"csi-hostpathplugin-gm965\" (UID: \"877cd408-2088-468f-bff8-58c539e077a9\") " pod="hostpath-provisioner/csi-hostpathplugin-gm965" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019691 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/011674e5-30e1-4632-aee7-d0dbb06e6824-console-oauth-config\") pod \"console-f9d7485db-62rs2\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019710 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/877cd408-2088-468f-bff8-58c539e077a9-mountpoint-dir\") pod \"csi-hostpathplugin-gm965\" (UID: \"877cd408-2088-468f-bff8-58c539e077a9\") " pod="hostpath-provisioner/csi-hostpathplugin-gm965" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019732 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cdcdd72-7bdf-4591-a4fe-9135d202369d-serving-cert\") pod \"service-ca-operator-777779d784-cpc86\" (UID: \"1cdcdd72-7bdf-4591-a4fe-9135d202369d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpc86" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019750 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cdcdd72-7bdf-4591-a4fe-9135d202369d-config\") pod \"service-ca-operator-777779d784-cpc86\" (UID: \"1cdcdd72-7bdf-4591-a4fe-9135d202369d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpc86" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019771 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3c8ca796-c289-480d-b50b-ca8180f0da30-certs\") pod \"machine-config-server-6w94h\" (UID: \"3c8ca796-c289-480d-b50b-ca8180f0da30\") " pod="openshift-machine-config-operator/machine-config-server-6w94h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019805 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dklb5\" (UniqueName: \"kubernetes.io/projected/10127330-a343-4a95-99b1-ec121e70f79f-kube-api-access-dklb5\") pod \"kube-storage-version-migrator-operator-b67b599dd-zznwr\" (UID: \"10127330-a343-4a95-99b1-ec121e70f79f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zznwr" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019827 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/877cd408-2088-468f-bff8-58c539e077a9-socket-dir\") pod \"csi-hostpathplugin-gm965\" (UID: \"877cd408-2088-468f-bff8-58c539e077a9\") " pod="hostpath-provisioner/csi-hostpathplugin-gm965" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019865 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/980e8489-6002-47dd-b6c4-1be37ee0bad9-ca-trust-extracted\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019916 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlbq8\" (UniqueName: \"kubernetes.io/projected/a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1-kube-api-access-dlbq8\") pod \"machine-config-operator-74547568cd-x6kbg\" (UID: \"a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x6kbg" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019941 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/980e8489-6002-47dd-b6c4-1be37ee0bad9-installation-pull-secrets\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019975 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8975c94b-ba70-41a8-88de-e6d0f78b8a01-apiservice-cert\") pod \"packageserver-d55dfcdfc-twv7q\" (UID: \"8975c94b-ba70-41a8-88de-e6d0f78b8a01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.019993 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8975c94b-ba70-41a8-88de-e6d0f78b8a01-tmpfs\") pod \"packageserver-d55dfcdfc-twv7q\" (UID: \"8975c94b-ba70-41a8-88de-e6d0f78b8a01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020032 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f8cdd615-3c27-4c6e-bcd0-24c19d9db1b1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5mb78\" (UID: \"f8cdd615-3c27-4c6e-bcd0-24c19d9db1b1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mb78" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020051 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-console-config\") pod \"console-f9d7485db-62rs2\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020081 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-trusted-ca-bundle\") pod \"console-f9d7485db-62rs2\" 
(UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020100 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vln2\" (UniqueName: \"kubernetes.io/projected/31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe-kube-api-access-9vln2\") pod \"openshift-config-operator-7777fb866f-5xs8f\" (UID: \"31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xs8f" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020117 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hs8z\" (UniqueName: \"kubernetes.io/projected/3c8ca796-c289-480d-b50b-ca8180f0da30-kube-api-access-4hs8z\") pod \"machine-config-server-6w94h\" (UID: \"3c8ca796-c289-480d-b50b-ca8180f0da30\") " pod="openshift-machine-config-operator/machine-config-server-6w94h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020138 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjx7w\" (UniqueName: \"kubernetes.io/projected/966cac4a-795f-4f46-80c0-a788d0eda011-kube-api-access-jjx7w\") pod \"multus-admission-controller-857f4d67dd-2t8m7\" (UID: \"966cac4a-795f-4f46-80c0-a788d0eda011\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2t8m7" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020155 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d5954e6-eb2d-45f0-a646-0a576432f542-config-volume\") pod \"dns-default-m5krf\" (UID: \"6d5954e6-eb2d-45f0-a646-0a576432f542\") " pod="openshift-dns/dns-default-m5krf" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020170 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/73a7f61a-69f4-4f78-8f1e-1351448b129a-default-certificate\") pod \"router-default-5444994796-xvz8h\" (UID: \"73a7f61a-69f4-4f78-8f1e-1351448b129a\") " pod="openshift-ingress/router-default-5444994796-xvz8h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020189 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73a7f61a-69f4-4f78-8f1e-1351448b129a-service-ca-bundle\") pod \"router-default-5444994796-xvz8h\" (UID: \"73a7f61a-69f4-4f78-8f1e-1351448b129a\") " pod="openshift-ingress/router-default-5444994796-xvz8h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020207 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/cdc03f3a-9fcb-400d-baec-f95daa142550-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gfr46\" (UID: \"cdc03f3a-9fcb-400d-baec-f95daa142550\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfr46" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020222 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8975c94b-ba70-41a8-88de-e6d0f78b8a01-webhook-cert\") pod \"packageserver-d55dfcdfc-twv7q\" (UID: \"8975c94b-ba70-41a8-88de-e6d0f78b8a01\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020253 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a550587-a52e-4334-ad71-25c577afaaf4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hdthv\" (UID: \"2a550587-a52e-4334-ad71-25c577afaaf4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdthv" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020272 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/877cd408-2088-468f-bff8-58c539e077a9-plugins-dir\") pod \"csi-hostpathplugin-gm965\" (UID: \"877cd408-2088-468f-bff8-58c539e077a9\") " pod="hostpath-provisioner/csi-hostpathplugin-gm965" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020305 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cc54a59-b61b-4ab8-b372-ef560a0924b2-cert\") pod \"ingress-canary-d5tqb\" (UID: \"8cc54a59-b61b-4ab8-b372-ef560a0924b2\") " pod="openshift-ingress-canary/ingress-canary-d5tqb" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020326 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kpm7\" (UniqueName: \"kubernetes.io/projected/6d5954e6-eb2d-45f0-a646-0a576432f542-kube-api-access-2kpm7\") pod \"dns-default-m5krf\" (UID: \"6d5954e6-eb2d-45f0-a646-0a576432f542\") " pod="openshift-dns/dns-default-m5krf" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020348 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feed9982-9c7d-445a-a105-02b3169809d1-config\") pod \"kube-apiserver-operator-766d6c64bb-vql8m\" (UID: \"feed9982-9c7d-445a-a105-02b3169809d1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vql8m" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020366 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5519a044-7a20-4a2a-ab24-7b09bfdf59bd-config-volume\") pod \"collect-profiles-29318970-5b8cz\" (UID: \"5519a044-7a20-4a2a-ab24-7b09bfdf59bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020389 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6f97\" (UniqueName: \"kubernetes.io/projected/0374dfd2-5829-4d52-bf0d-5aaafde9b773-kube-api-access-r6f97\") pod \"console-operator-58897d9998-4v4rm\" (UID: \"0374dfd2-5829-4d52-bf0d-5aaafde9b773\") " pod="openshift-console-operator/console-operator-58897d9998-4v4rm" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020408 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db7b6892-79f6-40a8-ac67-becd46acea4b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fr8gc\" (UID: \"db7b6892-79f6-40a8-ac67-becd46acea4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fr8gc" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020452 4991 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ggw4s\" (UniqueName: \"kubernetes.io/projected/990a42a8-12fc-4a01-b6ec-a1f6b324cf05-kube-api-access-ggw4s\") pod \"package-server-manager-789f6589d5-2tsv6\" (UID: \"990a42a8-12fc-4a01-b6ec-a1f6b324cf05\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2tsv6" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020493 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nt4x\" (UniqueName: \"kubernetes.io/projected/fbde42bc-f184-4f94-9ba1-5f63159b1562-kube-api-access-9nt4x\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpqf9\" (UID: \"fbde42bc-f184-4f94-9ba1-5f63159b1562\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpqf9" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020511 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75478\" (UniqueName: \"kubernetes.io/projected/db7b6892-79f6-40a8-ac67-becd46acea4b-kube-api-access-75478\") pod \"ingress-operator-5b745b69d9-fr8gc\" (UID: \"db7b6892-79f6-40a8-ac67-becd46acea4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fr8gc" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020542 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs2hq\" (UniqueName: \"kubernetes.io/projected/5519a044-7a20-4a2a-ab24-7b09bfdf59bd-kube-api-access-rs2hq\") pod \"collect-profiles-29318970-5b8cz\" (UID: \"5519a044-7a20-4a2a-ab24-7b09bfdf59bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020568 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a550587-a52e-4334-ad71-25c577afaaf4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hdthv\" (UID: \"2a550587-a52e-4334-ad71-25c577afaaf4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdthv" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020584 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-764zn\" (UniqueName: \"kubernetes.io/projected/f8cdd615-3c27-4c6e-bcd0-24c19d9db1b1-kube-api-access-764zn\") pod \"olm-operator-6b444d44fb-5mb78\" (UID: \"f8cdd615-3c27-4c6e-bcd0-24c19d9db1b1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mb78" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020615 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/66ebaf9a-4d17-415b-8e97-87e85c5b287e-signing-cabundle\") pod \"service-ca-9c57cc56f-7fnch\" (UID: \"66ebaf9a-4d17-415b-8e97-87e85c5b287e\") " pod="openshift-service-ca/service-ca-9c57cc56f-7fnch" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020637 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzxrc\" (UniqueName: \"kubernetes.io/projected/8975c94b-ba70-41a8-88de-e6d0f78b8a01-kube-api-access-gzxrc\") pod \"packageserver-d55dfcdfc-twv7q\" (UID: \"8975c94b-ba70-41a8-88de-e6d0f78b8a01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 
09:40:06.020661 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0374dfd2-5829-4d52-bf0d-5aaafde9b773-config\") pod \"console-operator-58897d9998-4v4rm\" (UID: \"0374dfd2-5829-4d52-bf0d-5aaafde9b773\") " pod="openshift-console-operator/console-operator-58897d9998-4v4rm" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020703 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbde42bc-f184-4f94-9ba1-5f63159b1562-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpqf9\" (UID: \"fbde42bc-f184-4f94-9ba1-5f63159b1562\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpqf9" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020722 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/966cac4a-795f-4f46-80c0-a788d0eda011-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2t8m7\" (UID: \"966cac4a-795f-4f46-80c0-a788d0eda011\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2t8m7" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020740 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6d5954e6-eb2d-45f0-a646-0a576432f542-metrics-tls\") pod \"dns-default-m5krf\" (UID: \"6d5954e6-eb2d-45f0-a646-0a576432f542\") " pod="openshift-dns/dns-default-m5krf" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020760 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvx2r\" (UniqueName: \"kubernetes.io/projected/8cc54a59-b61b-4ab8-b372-ef560a0924b2-kube-api-access-qvx2r\") pod \"ingress-canary-d5tqb\" (UID: \"8cc54a59-b61b-4ab8-b372-ef560a0924b2\") " pod="openshift-ingress-canary/ingress-canary-d5tqb" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020779 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dc5x\" (UniqueName: \"kubernetes.io/projected/cdc03f3a-9fcb-400d-baec-f95daa142550-kube-api-access-6dc5x\") pod \"cluster-samples-operator-665b6dd947-gfr46\" (UID: \"cdc03f3a-9fcb-400d-baec-f95daa142550\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfr46" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020798 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/877cd408-2088-468f-bff8-58c539e077a9-registration-dir\") pod \"csi-hostpathplugin-gm965\" (UID: \"877cd408-2088-468f-bff8-58c539e077a9\") " pod="hostpath-provisioner/csi-hostpathplugin-gm965" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020822 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt6xs\" (UniqueName: \"kubernetes.io/projected/259d860c-d08f-4753-9e8b-f059fea942f5-kube-api-access-vt6xs\") pod \"route-controller-manager-6576b87f9c-q6v2d\" (UID: \"259d860c-d08f-4753-9e8b-f059fea942f5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020842 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"registry-tls\" (UniqueName: \"kubernetes.io/projected/980e8489-6002-47dd-b6c4-1be37ee0bad9-registry-tls\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020861 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbde42bc-f184-4f94-9ba1-5f63159b1562-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpqf9\" (UID: \"fbde42bc-f184-4f94-9ba1-5f63159b1562\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpqf9" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020891 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0374dfd2-5829-4d52-bf0d-5aaafde9b773-trusted-ca\") pod \"console-operator-58897d9998-4v4rm\" (UID: \"0374dfd2-5829-4d52-bf0d-5aaafde9b773\") " pod="openshift-console-operator/console-operator-58897d9998-4v4rm" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020909 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/73a7f61a-69f4-4f78-8f1e-1351448b129a-stats-auth\") pod \"router-default-5444994796-xvz8h\" (UID: \"73a7f61a-69f4-4f78-8f1e-1351448b129a\") " pod="openshift-ingress/router-default-5444994796-xvz8h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020925 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73a7f61a-69f4-4f78-8f1e-1351448b129a-metrics-certs\") pod \"router-default-5444994796-xvz8h\" (UID: \"73a7f61a-69f4-4f78-8f1e-1351448b129a\") " pod="openshift-ingress/router-default-5444994796-xvz8h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.020987 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5519a044-7a20-4a2a-ab24-7b09bfdf59bd-secret-volume\") pod \"collect-profiles-29318970-5b8cz\" (UID: \"5519a044-7a20-4a2a-ab24-7b09bfdf59bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz" Sep 29 09:40:06 crc kubenswrapper[4991]: E0929 09:40:06.023046 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:06.52303307 +0000 UTC m=+142.378961098 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.026837 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/990a42a8-12fc-4a01-b6ec-a1f6b324cf05-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2tsv6\" (UID: \"990a42a8-12fc-4a01-b6ec-a1f6b324cf05\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2tsv6" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.028073 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/66ebaf9a-4d17-415b-8e97-87e85c5b287e-signing-key\") pod \"service-ca-9c57cc56f-7fnch\" (UID: \"66ebaf9a-4d17-415b-8e97-87e85c5b287e\") " pod="openshift-service-ca/service-ca-9c57cc56f-7fnch" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.028288 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10127330-a343-4a95-99b1-ec121e70f79f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zznwr\" (UID: \"10127330-a343-4a95-99b1-ec121e70f79f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zznwr" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.030357 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/980e8489-6002-47dd-b6c4-1be37ee0bad9-trusted-ca\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.031928 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feed9982-9c7d-445a-a105-02b3169809d1-config\") pod \"kube-apiserver-operator-766d6c64bb-vql8m\" (UID: \"feed9982-9c7d-445a-a105-02b3169809d1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vql8m" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.031963 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cdcdd72-7bdf-4591-a4fe-9135d202369d-serving-cert\") pod \"service-ca-operator-777779d784-cpc86\" (UID: \"1cdcdd72-7bdf-4591-a4fe-9135d202369d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpc86" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.032645 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/66ebaf9a-4d17-415b-8e97-87e85c5b287e-signing-cabundle\") pod \"service-ca-9c57cc56f-7fnch\" (UID: \"66ebaf9a-4d17-415b-8e97-87e85c5b287e\") " pod="openshift-service-ca/service-ca-9c57cc56f-7fnch" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.034797 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/980e8489-6002-47dd-b6c4-1be37ee0bad9-ca-trust-extracted\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.035095 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a550587-a52e-4334-ad71-25c577afaaf4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hdthv\" (UID: \"2a550587-a52e-4334-ad71-25c577afaaf4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdthv" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.036140 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10127330-a343-4a95-99b1-ec121e70f79f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zznwr\" (UID: \"10127330-a343-4a95-99b1-ec121e70f79f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zznwr" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.036498 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cdcdd72-7bdf-4591-a4fe-9135d202369d-config\") pod \"service-ca-operator-777779d784-cpc86\" (UID: \"1cdcdd72-7bdf-4591-a4fe-9135d202369d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpc86" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.038240 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/980e8489-6002-47dd-b6c4-1be37ee0bad9-registry-certificates\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.039537 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/980e8489-6002-47dd-b6c4-1be37ee0bad9-registry-tls\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.039696 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/980e8489-6002-47dd-b6c4-1be37ee0bad9-installation-pull-secrets\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.040519 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt6xs\" (UniqueName: \"kubernetes.io/projected/259d860c-d08f-4753-9e8b-f059fea942f5-kube-api-access-vt6xs\") pod \"route-controller-manager-6576b87f9c-q6v2d\" (UID: \"259d860c-d08f-4753-9e8b-f059fea942f5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.042636 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a550587-a52e-4334-ad71-25c577afaaf4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hdthv\" (UID: 
\"2a550587-a52e-4334-ad71-25c577afaaf4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdthv" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.046791 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/966cac4a-795f-4f46-80c0-a788d0eda011-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2t8m7\" (UID: \"966cac4a-795f-4f46-80c0-a788d0eda011\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2t8m7" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.062502 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feed9982-9c7d-445a-a105-02b3169809d1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vql8m\" (UID: \"feed9982-9c7d-445a-a105-02b3169809d1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vql8m" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.087061 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9wn"] Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.088593 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5bvp\" (UniqueName: \"kubernetes.io/projected/66ebaf9a-4d17-415b-8e97-87e85c5b287e-kube-api-access-g5bvp\") pod \"service-ca-9c57cc56f-7fnch\" (UID: \"66ebaf9a-4d17-415b-8e97-87e85c5b287e\") " pod="openshift-service-ca/service-ca-9c57cc56f-7fnch" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.089399 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a550587-a52e-4334-ad71-25c577afaaf4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hdthv\" (UID: \"2a550587-a52e-4334-ad71-25c577afaaf4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdthv" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.092027 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-r4x2n" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.112358 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.123291 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.123362 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbnp5\" (UniqueName: \"kubernetes.io/projected/1cdcdd72-7bdf-4591-a4fe-9135d202369d-kube-api-access-tbnp5\") pod \"service-ca-operator-777779d784-cpc86\" (UID: \"1cdcdd72-7bdf-4591-a4fe-9135d202369d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpc86" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.124042 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb79g\" (UniqueName: \"kubernetes.io/projected/011674e5-30e1-4632-aee7-d0dbb06e6824-kube-api-access-rb79g\") pod \"console-f9d7485db-62rs2\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.124139 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db7b6892-79f6-40a8-ac67-becd46acea4b-trusted-ca\") pod \"ingress-operator-5b745b69d9-fr8gc\" (UID: \"db7b6892-79f6-40a8-ac67-becd46acea4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fr8gc" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.124168 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-service-ca\") pod \"console-f9d7485db-62rs2\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.124227 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-x6kbg\" (UID: \"a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x6kbg" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.124280 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1-proxy-tls\") pod \"machine-config-operator-74547568cd-x6kbg\" (UID: \"a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x6kbg" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.124300 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db7b6892-79f6-40a8-ac67-becd46acea4b-metrics-tls\") pod \"ingress-operator-5b745b69d9-fr8gc\" (UID: \"db7b6892-79f6-40a8-ac67-becd46acea4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fr8gc" Sep 29 09:40:06 crc kubenswrapper[4991]: E0929 09:40:06.124630 4991 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:06.62460659 +0000 UTC m=+142.480534618 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.125239 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3c8ca796-c289-480d-b50b-ca8180f0da30-node-bootstrap-token\") pod \"machine-config-server-6w94h\" (UID: \"3c8ca796-c289-480d-b50b-ca8180f0da30\") " pod="openshift-machine-config-operator/machine-config-server-6w94h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.125374 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0374dfd2-5829-4d52-bf0d-5aaafde9b773-serving-cert\") pod \"console-operator-58897d9998-4v4rm\" (UID: \"0374dfd2-5829-4d52-bf0d-5aaafde9b773\") " pod="openshift-console-operator/console-operator-58897d9998-4v4rm" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.125430 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4462bce4-6cf5-4d65-890e-4d1c8b0523fa-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ftbbj\" (UID: \"4462bce4-6cf5-4d65-890e-4d1c8b0523fa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ftbbj" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.125460 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/877cd408-2088-468f-bff8-58c539e077a9-csi-data-dir\") pod \"csi-hostpathplugin-gm965\" (UID: \"877cd408-2088-468f-bff8-58c539e077a9\") " pod="hostpath-provisioner/csi-hostpathplugin-gm965" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.125508 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvmcr\" (UniqueName: \"kubernetes.io/projected/4462bce4-6cf5-4d65-890e-4d1c8b0523fa-kube-api-access-rvmcr\") pod \"control-plane-machine-set-operator-78cbb6b69f-ftbbj\" (UID: \"4462bce4-6cf5-4d65-890e-4d1c8b0523fa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ftbbj" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.125549 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tppcc\" (UniqueName: \"kubernetes.io/projected/73a7f61a-69f4-4f78-8f1e-1351448b129a-kube-api-access-tppcc\") pod \"router-default-5444994796-xvz8h\" (UID: \"73a7f61a-69f4-4f78-8f1e-1351448b129a\") " pod="openshift-ingress/router-default-5444994796-xvz8h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.125590 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f8cdd615-3c27-4c6e-bcd0-24c19d9db1b1-srv-cert\") pod \"olm-operator-6b444d44fb-5mb78\" (UID: \"f8cdd615-3c27-4c6e-bcd0-24c19d9db1b1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mb78" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.125612 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c999k\" (UniqueName: \"kubernetes.io/projected/877cd408-2088-468f-bff8-58c539e077a9-kube-api-access-c999k\") pod \"csi-hostpathplugin-gm965\" (UID: \"877cd408-2088-468f-bff8-58c539e077a9\") " pod="hostpath-provisioner/csi-hostpathplugin-gm965" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.125633 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/011674e5-30e1-4632-aee7-d0dbb06e6824-console-oauth-config\") pod \"console-f9d7485db-62rs2\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.125672 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/877cd408-2088-468f-bff8-58c539e077a9-mountpoint-dir\") pod \"csi-hostpathplugin-gm965\" (UID: \"877cd408-2088-468f-bff8-58c539e077a9\") " pod="hostpath-provisioner/csi-hostpathplugin-gm965" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.126117 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-service-ca\") pod \"console-f9d7485db-62rs2\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.126143 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db7b6892-79f6-40a8-ac67-becd46acea4b-trusted-ca\") pod \"ingress-operator-5b745b69d9-fr8gc\" (UID: \"db7b6892-79f6-40a8-ac67-becd46acea4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fr8gc" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.129854 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f8cdd615-3c27-4c6e-bcd0-24c19d9db1b1-srv-cert\") pod \"olm-operator-6b444d44fb-5mb78\" (UID: \"f8cdd615-3c27-4c6e-bcd0-24c19d9db1b1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mb78" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.130168 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4462bce4-6cf5-4d65-890e-4d1c8b0523fa-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ftbbj\" (UID: \"4462bce4-6cf5-4d65-890e-4d1c8b0523fa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ftbbj" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.130667 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-x6kbg\" (UID: \"a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x6kbg" Sep 29 09:40:06 crc kubenswrapper[4991]: 
I0929 09:40:06.131408 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/877cd408-2088-468f-bff8-58c539e077a9-csi-data-dir\") pod \"csi-hostpathplugin-gm965\" (UID: \"877cd408-2088-468f-bff8-58c539e077a9\") " pod="hostpath-provisioner/csi-hostpathplugin-gm965" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.132674 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/877cd408-2088-468f-bff8-58c539e077a9-mountpoint-dir\") pod \"csi-hostpathplugin-gm965\" (UID: \"877cd408-2088-468f-bff8-58c539e077a9\") " pod="hostpath-provisioner/csi-hostpathplugin-gm965" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.125137 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpbm2\" (UniqueName: \"kubernetes.io/projected/3dc9e79c-6e87-45aa-8285-e9a9737c54f3-kube-api-access-cpbm2\") pod \"migrator-59844c95c7-l2tz9\" (UID: \"3dc9e79c-6e87-45aa-8285-e9a9737c54f3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l2tz9" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.133631 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/011674e5-30e1-4632-aee7-d0dbb06e6824-console-oauth-config\") pod \"console-f9d7485db-62rs2\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.129246 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7fnch" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.135516 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3c8ca796-c289-480d-b50b-ca8180f0da30-certs\") pod \"machine-config-server-6w94h\" (UID: \"3c8ca796-c289-480d-b50b-ca8180f0da30\") " pod="openshift-machine-config-operator/machine-config-server-6w94h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.137240 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/877cd408-2088-468f-bff8-58c539e077a9-socket-dir\") pod \"csi-hostpathplugin-gm965\" (UID: \"877cd408-2088-468f-bff8-58c539e077a9\") " pod="hostpath-provisioner/csi-hostpathplugin-gm965" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.137330 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlbq8\" (UniqueName: \"kubernetes.io/projected/a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1-kube-api-access-dlbq8\") pod \"machine-config-operator-74547568cd-x6kbg\" (UID: \"a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x6kbg" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.137382 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8975c94b-ba70-41a8-88de-e6d0f78b8a01-tmpfs\") pod \"packageserver-d55dfcdfc-twv7q\" (UID: \"8975c94b-ba70-41a8-88de-e6d0f78b8a01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.137405 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/8975c94b-ba70-41a8-88de-e6d0f78b8a01-apiservice-cert\") pod \"packageserver-d55dfcdfc-twv7q\" (UID: \"8975c94b-ba70-41a8-88de-e6d0f78b8a01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.137443 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-console-config\") pod \"console-f9d7485db-62rs2\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.137475 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f8cdd615-3c27-4c6e-bcd0-24c19d9db1b1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5mb78\" (UID: \"f8cdd615-3c27-4c6e-bcd0-24c19d9db1b1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mb78" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.137522 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vln2\" (UniqueName: \"kubernetes.io/projected/31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe-kube-api-access-9vln2\") pod \"openshift-config-operator-7777fb866f-5xs8f\" (UID: \"31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xs8f" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.137548 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-trusted-ca-bundle\") pod \"console-f9d7485db-62rs2\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.137570 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hs8z\" (UniqueName: \"kubernetes.io/projected/3c8ca796-c289-480d-b50b-ca8180f0da30-kube-api-access-4hs8z\") pod \"machine-config-server-6w94h\" (UID: \"3c8ca796-c289-480d-b50b-ca8180f0da30\") " pod="openshift-machine-config-operator/machine-config-server-6w94h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.137614 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/73a7f61a-69f4-4f78-8f1e-1351448b129a-default-certificate\") pod \"router-default-5444994796-xvz8h\" (UID: \"73a7f61a-69f4-4f78-8f1e-1351448b129a\") " pod="openshift-ingress/router-default-5444994796-xvz8h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.137651 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d5954e6-eb2d-45f0-a646-0a576432f542-config-volume\") pod \"dns-default-m5krf\" (UID: \"6d5954e6-eb2d-45f0-a646-0a576432f542\") " pod="openshift-dns/dns-default-m5krf" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.137693 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8975c94b-ba70-41a8-88de-e6d0f78b8a01-webhook-cert\") pod \"packageserver-d55dfcdfc-twv7q\" (UID: \"8975c94b-ba70-41a8-88de-e6d0f78b8a01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q" Sep 29 09:40:06 crc kubenswrapper[4991]: 
I0929 09:40:06.137733 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73a7f61a-69f4-4f78-8f1e-1351448b129a-service-ca-bundle\") pod \"router-default-5444994796-xvz8h\" (UID: \"73a7f61a-69f4-4f78-8f1e-1351448b129a\") " pod="openshift-ingress/router-default-5444994796-xvz8h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.138522 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/cdc03f3a-9fcb-400d-baec-f95daa142550-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gfr46\" (UID: \"cdc03f3a-9fcb-400d-baec-f95daa142550\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfr46" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.138599 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/877cd408-2088-468f-bff8-58c539e077a9-plugins-dir\") pod \"csi-hostpathplugin-gm965\" (UID: \"877cd408-2088-468f-bff8-58c539e077a9\") " pod="hostpath-provisioner/csi-hostpathplugin-gm965" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.138831 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cc54a59-b61b-4ab8-b372-ef560a0924b2-cert\") pod \"ingress-canary-d5tqb\" (UID: \"8cc54a59-b61b-4ab8-b372-ef560a0924b2\") " pod="openshift-ingress-canary/ingress-canary-d5tqb" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.138864 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kpm7\" (UniqueName: \"kubernetes.io/projected/6d5954e6-eb2d-45f0-a646-0a576432f542-kube-api-access-2kpm7\") pod \"dns-default-m5krf\" (UID: \"6d5954e6-eb2d-45f0-a646-0a576432f542\") " pod="openshift-dns/dns-default-m5krf" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.139147 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5519a044-7a20-4a2a-ab24-7b09bfdf59bd-config-volume\") pod \"collect-profiles-29318970-5b8cz\" (UID: \"5519a044-7a20-4a2a-ab24-7b09bfdf59bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.139182 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6f97\" (UniqueName: \"kubernetes.io/projected/0374dfd2-5829-4d52-bf0d-5aaafde9b773-kube-api-access-r6f97\") pod \"console-operator-58897d9998-4v4rm\" (UID: \"0374dfd2-5829-4d52-bf0d-5aaafde9b773\") " pod="openshift-console-operator/console-operator-58897d9998-4v4rm" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.139463 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8975c94b-ba70-41a8-88de-e6d0f78b8a01-tmpfs\") pod \"packageserver-d55dfcdfc-twv7q\" (UID: \"8975c94b-ba70-41a8-88de-e6d0f78b8a01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.139591 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db7b6892-79f6-40a8-ac67-becd46acea4b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fr8gc\" (UID: \"db7b6892-79f6-40a8-ac67-becd46acea4b\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fr8gc" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.139807 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/877cd408-2088-468f-bff8-58c539e077a9-socket-dir\") pod \"csi-hostpathplugin-gm965\" (UID: \"877cd408-2088-468f-bff8-58c539e077a9\") " pod="hostpath-provisioner/csi-hostpathplugin-gm965" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.139913 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75478\" (UniqueName: \"kubernetes.io/projected/db7b6892-79f6-40a8-ac67-becd46acea4b-kube-api-access-75478\") pod \"ingress-operator-5b745b69d9-fr8gc\" (UID: \"db7b6892-79f6-40a8-ac67-becd46acea4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fr8gc" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.140217 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nt4x\" (UniqueName: \"kubernetes.io/projected/fbde42bc-f184-4f94-9ba1-5f63159b1562-kube-api-access-9nt4x\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpqf9\" (UID: \"fbde42bc-f184-4f94-9ba1-5f63159b1562\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpqf9" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.140246 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs2hq\" (UniqueName: \"kubernetes.io/projected/5519a044-7a20-4a2a-ab24-7b09bfdf59bd-kube-api-access-rs2hq\") pod \"collect-profiles-29318970-5b8cz\" (UID: \"5519a044-7a20-4a2a-ab24-7b09bfdf59bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.140634 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-764zn\" (UniqueName: \"kubernetes.io/projected/f8cdd615-3c27-4c6e-bcd0-24c19d9db1b1-kube-api-access-764zn\") pod \"olm-operator-6b444d44fb-5mb78\" (UID: \"f8cdd615-3c27-4c6e-bcd0-24c19d9db1b1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mb78" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.140737 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3c8ca796-c289-480d-b50b-ca8180f0da30-certs\") pod \"machine-config-server-6w94h\" (UID: \"3c8ca796-c289-480d-b50b-ca8180f0da30\") " pod="openshift-machine-config-operator/machine-config-server-6w94h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.141434 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-console-config\") pod \"console-f9d7485db-62rs2\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.141499 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/877cd408-2088-468f-bff8-58c539e077a9-plugins-dir\") pod \"csi-hostpathplugin-gm965\" (UID: \"877cd408-2088-468f-bff8-58c539e077a9\") " pod="hostpath-provisioner/csi-hostpathplugin-gm965" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.142019 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzxrc\" (UniqueName: 
\"kubernetes.io/projected/8975c94b-ba70-41a8-88de-e6d0f78b8a01-kube-api-access-gzxrc\") pod \"packageserver-d55dfcdfc-twv7q\" (UID: \"8975c94b-ba70-41a8-88de-e6d0f78b8a01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.142553 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/980e8489-6002-47dd-b6c4-1be37ee0bad9-bound-sa-token\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.142240 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0374dfd2-5829-4d52-bf0d-5aaafde9b773-serving-cert\") pod \"console-operator-58897d9998-4v4rm\" (UID: \"0374dfd2-5829-4d52-bf0d-5aaafde9b773\") " pod="openshift-console-operator/console-operator-58897d9998-4v4rm" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.142037 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d5954e6-eb2d-45f0-a646-0a576432f542-config-volume\") pod \"dns-default-m5krf\" (UID: \"6d5954e6-eb2d-45f0-a646-0a576432f542\") " pod="openshift-dns/dns-default-m5krf" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.142360 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0374dfd2-5829-4d52-bf0d-5aaafde9b773-config\") pod \"console-operator-58897d9998-4v4rm\" (UID: \"0374dfd2-5829-4d52-bf0d-5aaafde9b773\") " pod="openshift-console-operator/console-operator-58897d9998-4v4rm" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.142807 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6d5954e6-eb2d-45f0-a646-0a576432f542-metrics-tls\") pod \"dns-default-m5krf\" (UID: \"6d5954e6-eb2d-45f0-a646-0a576432f542\") " pod="openshift-dns/dns-default-m5krf" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.142838 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbde42bc-f184-4f94-9ba1-5f63159b1562-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpqf9\" (UID: \"fbde42bc-f184-4f94-9ba1-5f63159b1562\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpqf9" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.143338 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-trusted-ca-bundle\") pod \"console-f9d7485db-62rs2\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.143613 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0374dfd2-5829-4d52-bf0d-5aaafde9b773-config\") pod \"console-operator-58897d9998-4v4rm\" (UID: \"0374dfd2-5829-4d52-bf0d-5aaafde9b773\") " pod="openshift-console-operator/console-operator-58897d9998-4v4rm" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.143613 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/5519a044-7a20-4a2a-ab24-7b09bfdf59bd-config-volume\") pod \"collect-profiles-29318970-5b8cz\" (UID: \"5519a044-7a20-4a2a-ab24-7b09bfdf59bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.143663 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dc5x\" (UniqueName: \"kubernetes.io/projected/cdc03f3a-9fcb-400d-baec-f95daa142550-kube-api-access-6dc5x\") pod \"cluster-samples-operator-665b6dd947-gfr46\" (UID: \"cdc03f3a-9fcb-400d-baec-f95daa142550\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfr46" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.143688 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/877cd408-2088-468f-bff8-58c539e077a9-registration-dir\") pod \"csi-hostpathplugin-gm965\" (UID: \"877cd408-2088-468f-bff8-58c539e077a9\") " pod="hostpath-provisioner/csi-hostpathplugin-gm965" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.143883 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73a7f61a-69f4-4f78-8f1e-1351448b129a-service-ca-bundle\") pod \"router-default-5444994796-xvz8h\" (UID: \"73a7f61a-69f4-4f78-8f1e-1351448b129a\") " pod="openshift-ingress/router-default-5444994796-xvz8h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.143924 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/877cd408-2088-468f-bff8-58c539e077a9-registration-dir\") pod \"csi-hostpathplugin-gm965\" (UID: \"877cd408-2088-468f-bff8-58c539e077a9\") " pod="hostpath-provisioner/csi-hostpathplugin-gm965" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.144339 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/73a7f61a-69f4-4f78-8f1e-1351448b129a-default-certificate\") pod \"router-default-5444994796-xvz8h\" (UID: \"73a7f61a-69f4-4f78-8f1e-1351448b129a\") " pod="openshift-ingress/router-default-5444994796-xvz8h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.144505 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8975c94b-ba70-41a8-88de-e6d0f78b8a01-webhook-cert\") pod \"packageserver-d55dfcdfc-twv7q\" (UID: \"8975c94b-ba70-41a8-88de-e6d0f78b8a01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.144661 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvx2r\" (UniqueName: \"kubernetes.io/projected/8cc54a59-b61b-4ab8-b372-ef560a0924b2-kube-api-access-qvx2r\") pod \"ingress-canary-d5tqb\" (UID: \"8cc54a59-b61b-4ab8-b372-ef560a0924b2\") " pod="openshift-ingress-canary/ingress-canary-d5tqb" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.144710 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbde42bc-f184-4f94-9ba1-5f63159b1562-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpqf9\" (UID: \"fbde42bc-f184-4f94-9ba1-5f63159b1562\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpqf9" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.144733 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73a7f61a-69f4-4f78-8f1e-1351448b129a-metrics-certs\") pod \"router-default-5444994796-xvz8h\" (UID: \"73a7f61a-69f4-4f78-8f1e-1351448b129a\") " pod="openshift-ingress/router-default-5444994796-xvz8h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.144757 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0374dfd2-5829-4d52-bf0d-5aaafde9b773-trusted-ca\") pod \"console-operator-58897d9998-4v4rm\" (UID: \"0374dfd2-5829-4d52-bf0d-5aaafde9b773\") " pod="openshift-console-operator/console-operator-58897d9998-4v4rm" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.144774 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/73a7f61a-69f4-4f78-8f1e-1351448b129a-stats-auth\") pod \"router-default-5444994796-xvz8h\" (UID: \"73a7f61a-69f4-4f78-8f1e-1351448b129a\") " pod="openshift-ingress/router-default-5444994796-xvz8h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.144798 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5519a044-7a20-4a2a-ab24-7b09bfdf59bd-secret-volume\") pod \"collect-profiles-29318970-5b8cz\" (UID: \"5519a044-7a20-4a2a-ab24-7b09bfdf59bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.144817 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe-serving-cert\") pod \"openshift-config-operator-7777fb866f-5xs8f\" (UID: \"31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xs8f" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.144854 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/011674e5-30e1-4632-aee7-d0dbb06e6824-console-serving-cert\") pod \"console-f9d7485db-62rs2\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.144880 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5xs8f\" (UID: \"31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xs8f" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.144897 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-oauth-serving-cert\") pod \"console-f9d7485db-62rs2\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.144919 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1-images\") pod \"machine-config-operator-74547568cd-x6kbg\" (UID: \"a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x6kbg" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.145447 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1-images\") pod \"machine-config-operator-74547568cd-x6kbg\" (UID: \"a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x6kbg" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.147041 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0374dfd2-5829-4d52-bf0d-5aaafde9b773-trusted-ca\") pod \"console-operator-58897d9998-4v4rm\" (UID: \"0374dfd2-5829-4d52-bf0d-5aaafde9b773\") " pod="openshift-console-operator/console-operator-58897d9998-4v4rm" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.151441 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5xs8f\" (UID: \"31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xs8f" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.152141 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-oauth-serving-cert\") pod \"console-f9d7485db-62rs2\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.152310 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/73a7f61a-69f4-4f78-8f1e-1351448b129a-stats-auth\") pod \"router-default-5444994796-xvz8h\" (UID: \"73a7f61a-69f4-4f78-8f1e-1351448b129a\") " pod="openshift-ingress/router-default-5444994796-xvz8h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.153003 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbde42bc-f184-4f94-9ba1-5f63159b1562-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpqf9\" (UID: \"fbde42bc-f184-4f94-9ba1-5f63159b1562\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpqf9" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.159588 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8975c94b-ba70-41a8-88de-e6d0f78b8a01-apiservice-cert\") pod \"packageserver-d55dfcdfc-twv7q\" (UID: \"8975c94b-ba70-41a8-88de-e6d0f78b8a01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.159809 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f8cdd615-3c27-4c6e-bcd0-24c19d9db1b1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5mb78\" (UID: \"f8cdd615-3c27-4c6e-bcd0-24c19d9db1b1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mb78" 
Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.159670 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cc54a59-b61b-4ab8-b372-ef560a0924b2-cert\") pod \"ingress-canary-d5tqb\" (UID: \"8cc54a59-b61b-4ab8-b372-ef560a0924b2\") " pod="openshift-ingress-canary/ingress-canary-d5tqb" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.160300 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73a7f61a-69f4-4f78-8f1e-1351448b129a-metrics-certs\") pod \"router-default-5444994796-xvz8h\" (UID: \"73a7f61a-69f4-4f78-8f1e-1351448b129a\") " pod="openshift-ingress/router-default-5444994796-xvz8h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.160505 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe-serving-cert\") pod \"openshift-config-operator-7777fb866f-5xs8f\" (UID: \"31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xs8f" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.160741 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/011674e5-30e1-4632-aee7-d0dbb06e6824-console-serving-cert\") pod \"console-f9d7485db-62rs2\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.161065 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6d5954e6-eb2d-45f0-a646-0a576432f542-metrics-tls\") pod \"dns-default-m5krf\" (UID: \"6d5954e6-eb2d-45f0-a646-0a576432f542\") " pod="openshift-dns/dns-default-m5krf" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.161401 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbde42bc-f184-4f94-9ba1-5f63159b1562-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpqf9\" (UID: \"fbde42bc-f184-4f94-9ba1-5f63159b1562\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpqf9" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.165891 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5519a044-7a20-4a2a-ab24-7b09bfdf59bd-secret-volume\") pod \"collect-profiles-29318970-5b8cz\" (UID: \"5519a044-7a20-4a2a-ab24-7b09bfdf59bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.170477 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/feed9982-9c7d-445a-a105-02b3169809d1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vql8m\" (UID: \"feed9982-9c7d-445a-a105-02b3169809d1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vql8m" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.171107 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/cdc03f3a-9fcb-400d-baec-f95daa142550-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gfr46\" (UID: 
\"cdc03f3a-9fcb-400d-baec-f95daa142550\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfr46" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.172326 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3c8ca796-c289-480d-b50b-ca8180f0da30-node-bootstrap-token\") pod \"machine-config-server-6w94h\" (UID: \"3c8ca796-c289-480d-b50b-ca8180f0da30\") " pod="openshift-machine-config-operator/machine-config-server-6w94h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.174070 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db7b6892-79f6-40a8-ac67-becd46acea4b-metrics-tls\") pod \"ingress-operator-5b745b69d9-fr8gc\" (UID: \"db7b6892-79f6-40a8-ac67-becd46acea4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fr8gc" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.175587 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1-proxy-tls\") pod \"machine-config-operator-74547568cd-x6kbg\" (UID: \"a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x6kbg" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.180342 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vql8m" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.186192 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjx7w\" (UniqueName: \"kubernetes.io/projected/966cac4a-795f-4f46-80c0-a788d0eda011-kube-api-access-jjx7w\") pod \"multus-admission-controller-857f4d67dd-2t8m7\" (UID: \"966cac4a-795f-4f46-80c0-a788d0eda011\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2t8m7" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.203911 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7wzsh"] Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.206684 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggw4s\" (UniqueName: \"kubernetes.io/projected/990a42a8-12fc-4a01-b6ec-a1f6b324cf05-kube-api-access-ggw4s\") pod \"package-server-manager-789f6589d5-2tsv6\" (UID: \"990a42a8-12fc-4a01-b6ec-a1f6b324cf05\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2tsv6" Sep 29 09:40:06 crc kubenswrapper[4991]: W0929 09:40:06.232520 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod385e20fa_2a7a_4ab3_97e6_b59a0a58cbe3.slice/crio-95ddec8013ed22f97a4391ffe3e812c073e2eabdb482391328c441225718f742 WatchSource:0}: Error finding container 95ddec8013ed22f97a4391ffe3e812c073e2eabdb482391328c441225718f742: Status 404 returned error can't find the container with id 95ddec8013ed22f97a4391ffe3e812c073e2eabdb482391328c441225718f742 Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.237258 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dklb5\" (UniqueName: \"kubernetes.io/projected/10127330-a343-4a95-99b1-ec121e70f79f-kube-api-access-dklb5\") pod \"kube-storage-version-migrator-operator-b67b599dd-zznwr\" (UID: 
\"10127330-a343-4a95-99b1-ec121e70f79f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zznwr" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.246878 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.249509 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pklk2\" (UniqueName: \"kubernetes.io/projected/980e8489-6002-47dd-b6c4-1be37ee0bad9-kube-api-access-pklk2\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.252048 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l2tz9" Sep 29 09:40:06 crc kubenswrapper[4991]: E0929 09:40:06.251940 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:06.749368716 +0000 UTC m=+142.605296924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.270408 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wcmnr"] Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.299098 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb79g\" (UniqueName: \"kubernetes.io/projected/011674e5-30e1-4632-aee7-d0dbb06e6824-kube-api-access-rb79g\") pod \"console-f9d7485db-62rs2\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.305193 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvmcr\" (UniqueName: \"kubernetes.io/projected/4462bce4-6cf5-4d65-890e-4d1c8b0523fa-kube-api-access-rvmcr\") pod \"control-plane-machine-set-operator-78cbb6b69f-ftbbj\" (UID: \"4462bce4-6cf5-4d65-890e-4d1c8b0523fa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ftbbj" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.322268 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tppcc\" (UniqueName: \"kubernetes.io/projected/73a7f61a-69f4-4f78-8f1e-1351448b129a-kube-api-access-tppcc\") pod \"router-default-5444994796-xvz8h\" (UID: \"73a7f61a-69f4-4f78-8f1e-1351448b129a\") " pod="openshift-ingress/router-default-5444994796-xvz8h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.341589 4991 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c999k\" (UniqueName: \"kubernetes.io/projected/877cd408-2088-468f-bff8-58c539e077a9-kube-api-access-c999k\") pod \"csi-hostpathplugin-gm965\" (UID: \"877cd408-2088-468f-bff8-58c539e077a9\") " pod="hostpath-provisioner/csi-hostpathplugin-gm965" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.351288 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:06 crc kubenswrapper[4991]: E0929 09:40:06.351720 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:06.851699345 +0000 UTC m=+142.707627373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.354688 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdthv" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.358323 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlbq8\" (UniqueName: \"kubernetes.io/projected/a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1-kube-api-access-dlbq8\") pod \"machine-config-operator-74547568cd-x6kbg\" (UID: \"a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x6kbg" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.380145 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hs8z\" (UniqueName: \"kubernetes.io/projected/3c8ca796-c289-480d-b50b-ca8180f0da30-kube-api-access-4hs8z\") pod \"machine-config-server-6w94h\" (UID: \"3c8ca796-c289-480d-b50b-ca8180f0da30\") " pod="openshift-machine-config-operator/machine-config-server-6w94h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.395130 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zznwr" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.404422 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2t8m7" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.405512 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75478\" (UniqueName: \"kubernetes.io/projected/db7b6892-79f6-40a8-ac67-becd46acea4b-kube-api-access-75478\") pod \"ingress-operator-5b745b69d9-fr8gc\" (UID: \"db7b6892-79f6-40a8-ac67-becd46acea4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fr8gc" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.413817 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpc86" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.414996 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vfsmj"] Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.415365 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2tsv6" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.451684 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vln2\" (UniqueName: \"kubernetes.io/projected/31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe-kube-api-access-9vln2\") pod \"openshift-config-operator-7777fb866f-5xs8f\" (UID: \"31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xs8f" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.452719 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:06 crc kubenswrapper[4991]: E0929 09:40:06.453146 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:06.953129021 +0000 UTC m=+142.809057049 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.453736 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-764zn\" (UniqueName: \"kubernetes.io/projected/f8cdd615-3c27-4c6e-bcd0-24c19d9db1b1-kube-api-access-764zn\") pod \"olm-operator-6b444d44fb-5mb78\" (UID: \"f8cdd615-3c27-4c6e-bcd0-24c19d9db1b1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mb78" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.472131 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6f97\" (UniqueName: \"kubernetes.io/projected/0374dfd2-5829-4d52-bf0d-5aaafde9b773-kube-api-access-r6f97\") pod \"console-operator-58897d9998-4v4rm\" (UID: \"0374dfd2-5829-4d52-bf0d-5aaafde9b773\") " pod="openshift-console-operator/console-operator-58897d9998-4v4rm" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.501880 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x6kbg" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.505390 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nt4x\" (UniqueName: \"kubernetes.io/projected/fbde42bc-f184-4f94-9ba1-5f63159b1562-kube-api-access-9nt4x\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpqf9\" (UID: \"fbde42bc-f184-4f94-9ba1-5f63159b1562\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpqf9" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.514336 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs2hq\" (UniqueName: \"kubernetes.io/projected/5519a044-7a20-4a2a-ab24-7b09bfdf59bd-kube-api-access-rs2hq\") pod \"collect-profiles-29318970-5b8cz\" (UID: \"5519a044-7a20-4a2a-ab24-7b09bfdf59bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.525599 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mb78" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.531428 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xs8f" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.532342 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db7b6892-79f6-40a8-ac67-becd46acea4b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fr8gc\" (UID: \"db7b6892-79f6-40a8-ac67-becd46acea4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fr8gc" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.540301 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kpm7\" (UniqueName: \"kubernetes.io/projected/6d5954e6-eb2d-45f0-a646-0a576432f542-kube-api-access-2kpm7\") pod \"dns-default-m5krf\" (UID: \"6d5954e6-eb2d-45f0-a646-0a576432f542\") " pod="openshift-dns/dns-default-m5krf" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.540357 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fr8gc" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.550036 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hz2qk"] Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.550526 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xvz8h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.554211 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:06 crc kubenswrapper[4991]: E0929 09:40:06.554645 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:07.054627758 +0000 UTC m=+142.910555786 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.559699 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ftbbj" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.568033 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4v4rm" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.573709 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.573827 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzxrc\" (UniqueName: \"kubernetes.io/projected/8975c94b-ba70-41a8-88de-e6d0f78b8a01-kube-api-access-gzxrc\") pod \"packageserver-d55dfcdfc-twv7q\" (UID: \"8975c94b-ba70-41a8-88de-e6d0f78b8a01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.582543 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpqf9" Sep 29 09:40:06 crc kubenswrapper[4991]: W0929 09:40:06.590309 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc06a05d9_0a6c_4e0a_88e8_023e829f245a.slice/crio-0f7493e4696184f1d2b34e2c02f1fcdb5faec0529c6cfae4c7eb52ff9f6b5074 WatchSource:0}: Error finding container 0f7493e4696184f1d2b34e2c02f1fcdb5faec0529c6cfae4c7eb52ff9f6b5074: Status 404 returned error can't find the container with id 0f7493e4696184f1d2b34e2c02f1fcdb5faec0529c6cfae4c7eb52ff9f6b5074 Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.591099 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dc5x\" (UniqueName: \"kubernetes.io/projected/cdc03f3a-9fcb-400d-baec-f95daa142550-kube-api-access-6dc5x\") pod \"cluster-samples-operator-665b6dd947-gfr46\" (UID: \"cdc03f3a-9fcb-400d-baec-f95daa142550\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfr46" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.600764 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-m5krf" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.601202 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d2gfk"] Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.610279 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6w94h" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.610732 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvx2r\" (UniqueName: \"kubernetes.io/projected/8cc54a59-b61b-4ab8-b372-ef560a0924b2-kube-api-access-qvx2r\") pod \"ingress-canary-d5tqb\" (UID: \"8cc54a59-b61b-4ab8-b372-ef560a0924b2\") " pod="openshift-ingress-canary/ingress-canary-d5tqb" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.650233 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gm965" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.655811 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-r4x2n"] Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.655862 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:06 crc kubenswrapper[4991]: E0929 09:40:06.656259 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:07.156242818 +0000 UTC m=+143.012170846 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.762603 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:06 crc kubenswrapper[4991]: E0929 09:40:06.763191 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:07.263167953 +0000 UTC m=+143.119095981 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:06 crc kubenswrapper[4991]: W0929 09:40:06.792731 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34f20ab2_1d2b_4a17_923b_ad9151c86dcf.slice/crio-53cf469071b8ff6affc8538326e6f1b621699270c8a735e60f8d4f122e6649c0 WatchSource:0}: Error finding container 53cf469071b8ff6affc8538326e6f1b621699270c8a735e60f8d4f122e6649c0: Status 404 returned error can't find the container with id 53cf469071b8ff6affc8538326e6f1b621699270c8a735e60f8d4f122e6649c0 Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.797435 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.806345 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.815336 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfr46" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.815720 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jqqrh" event={"ID":"d949b064-ca4d-454e-a0d9-2aa9dd40d4e1","Type":"ContainerStarted","Data":"8e42170591d4e5ba43b2c144d2b6cdd5ac1a6d57e6431afc96a0551115b68d15"} Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.815753 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jqqrh" event={"ID":"d949b064-ca4d-454e-a0d9-2aa9dd40d4e1","Type":"ContainerStarted","Data":"6b17baa7a92f22e4ec32770b6cfbc8b6eb3373e2a3044af24aa7b586a6991815"} Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.817562 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jqqrh" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.818735 4991 patch_prober.go:28] interesting pod/downloads-7954f5f757-jqqrh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.818821 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jqqrh" podUID="d949b064-ca4d-454e-a0d9-2aa9dd40d4e1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.829411 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9wn" event={"ID":"385e20fa-2a7a-4ab3-97e6-b59a0a58cbe3","Type":"ContainerStarted","Data":"95ddec8013ed22f97a4391ffe3e812c073e2eabdb482391328c441225718f742"} Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.841641 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7fnch"] Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.858031 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" event={"ID":"c06a05d9-0a6c-4e0a-88e8-023e829f245a","Type":"ContainerStarted","Data":"0f7493e4696184f1d2b34e2c02f1fcdb5faec0529c6cfae4c7eb52ff9f6b5074"} Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.864067 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" event={"ID":"24730e1e-1b87-4a46-8f01-6cd71ee44f3e","Type":"ContainerStarted","Data":"3d05e15667383479a40aa0ec87d70bbfb8d2cf42133b2838a7c99c769ce2fb4d"} Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.871247 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:06 crc kubenswrapper[4991]: E0929 09:40:06.871670 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:07.371652788 +0000 UTC m=+143.227580816 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.894844 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-d5tqb" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.896679 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7wzsh" event={"ID":"f73e1250-00a9-4975-9b10-21b456ad2ed0","Type":"ContainerStarted","Data":"ceff440392eab874d08e72b8ec7463794d09cdb2d4e61b57c5bd7c642e6d32c8"} Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.916818 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d"] Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.919393 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vfsmj" event={"ID":"875831c4-1354-40bb-9b36-f6f2a6b6d79c","Type":"ContainerStarted","Data":"e6a8da38e17526b2dba4360a1438f9bf073cf5bd696952c59c9f65878b998be5"} Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.941934 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" event={"ID":"e34824e5-4b59-4bae-b803-39679d6eab2e","Type":"ContainerStarted","Data":"b7de1652b7f6e5b9609498008265f806ebe274bb87b56260255190fe92c7effb"} Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.942019 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.942051 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" event={"ID":"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370","Type":"ContainerStarted","Data":"5c6cd8b308ceaad4dcd8bb5783f34e12e11c8ec6649d0b930ef0158063c5f5c9"} Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.942065 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" event={"ID":"00e00a41-260b-45b5-a54d-c18ded602aa6","Type":"ContainerStarted","Data":"f292fba7b9dfbeadaee37803acb95a823c678a961175a39b150d9ceceacbb1d9"} Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.942075 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" 
event={"ID":"00e00a41-260b-45b5-a54d-c18ded602aa6","Type":"ContainerStarted","Data":"796f903293b43b4ab720878567e63b3d37e4bcf7f014995899e0b8e6064111f2"} Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.942953 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wcmnr" event={"ID":"c975ee4e-229f-417c-ae32-d56eae89c4c1","Type":"ContainerStarted","Data":"f1fc7ce2b96cc29cbb45b5955834e32f500c8a6cf54ad2130766da82831ef479"} Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.944180 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsdb4" event={"ID":"a0c1005c-2a71-47c1-8f5a-75fff673d6ce","Type":"ContainerStarted","Data":"1a8e8f8fa52e498182a97b6ce8c5988b20ff5b01d718c6748d87a91dcb4c6961"} Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.944201 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsdb4" event={"ID":"a0c1005c-2a71-47c1-8f5a-75fff673d6ce","Type":"ContainerStarted","Data":"7b4ca55a09b7b1f2a6aa30a7cd0e9e764825bccc503ea41a56930bdcc111b98d"} Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.944553 4991 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-dmrh5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.944602 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" podUID="00e00a41-260b-45b5-a54d-c18ded602aa6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.950713 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kv7nf" event={"ID":"91e5a020-6f2d-4996-8af6-8f5541ce12b1","Type":"ContainerStarted","Data":"7c418c0e727e0e2995e6d4c9b4692434bb5a649c510dfe62e6a7dab7c71b79d6"} Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.951128 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kv7nf" event={"ID":"91e5a020-6f2d-4996-8af6-8f5541ce12b1","Type":"ContainerStarted","Data":"81fc1855339b9c07b1f80bd4064749047f96fb11d928f4a2b37e62abd66aa834"} Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.972598 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:06 crc kubenswrapper[4991]: E0929 09:40:06.972962 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:07.472882019 +0000 UTC m=+143.328810047 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:06 crc kubenswrapper[4991]: I0929 09:40:06.973190 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:06 crc kubenswrapper[4991]: E0929 09:40:06.974507 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:07.474496099 +0000 UTC m=+143.330424117 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:07 crc kubenswrapper[4991]: W0929 09:40:07.008145 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73a7f61a_69f4_4f78_8f1e_1351448b129a.slice/crio-1fa282c172686581158d4187c42ad691fdd0746fa36101b7fb19b5c90508f9f7 WatchSource:0}: Error finding container 1fa282c172686581158d4187c42ad691fdd0746fa36101b7fb19b5c90508f9f7: Status 404 returned error can't find the container with id 1fa282c172686581158d4187c42ad691fdd0746fa36101b7fb19b5c90508f9f7 Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.010126 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vql8m"] Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.023233 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" Sep 29 09:40:07 crc kubenswrapper[4991]: W0929 09:40:07.026557 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66ebaf9a_4d17_415b_8e97_87e85c5b287e.slice/crio-3245b474399a12c6b92293c11fb24bbc99e642443ef004db6dc10d2b559cfee8 WatchSource:0}: Error finding container 3245b474399a12c6b92293c11fb24bbc99e642443ef004db6dc10d2b559cfee8: Status 404 returned error can't find the container with id 3245b474399a12c6b92293c11fb24bbc99e642443ef004db6dc10d2b559cfee8 Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.081486 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:07 crc kubenswrapper[4991]: E0929 09:40:07.097531 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:07.59745285 +0000 UTC m=+143.453380878 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.099426 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:07 crc kubenswrapper[4991]: E0929 09:40:07.111049 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:07.611022673 +0000 UTC m=+143.466950691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.145086 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l2tz9"] Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.165449 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2tsv6"] Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.201607 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:07 crc kubenswrapper[4991]: E0929 09:40:07.202063 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:07.702033536 +0000 UTC m=+143.557961564 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.271362 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdthv"] Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.303128 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:07 crc kubenswrapper[4991]: E0929 09:40:07.303564 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:07.803548644 +0000 UTC m=+143.659476672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.341321 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" podStartSLOduration=122.341300089 podStartE2EDuration="2m2.341300089s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:07.339211496 +0000 UTC m=+143.195139524" watchObservedRunningTime="2025-09-29 09:40:07.341300089 +0000 UTC m=+143.197228127" Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.377259 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cpc86"] Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.411165 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:07 crc kubenswrapper[4991]: E0929 09:40:07.411282 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:07.911258989 +0000 UTC m=+143.767187017 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.411708 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:07 crc kubenswrapper[4991]: E0929 09:40:07.412311 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:07.912292695 +0000 UTC m=+143.768220723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.414430 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2t8m7"] Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.439916 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" podStartSLOduration=123.439895063 podStartE2EDuration="2m3.439895063s" podCreationTimestamp="2025-09-29 09:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:07.433635825 +0000 UTC m=+143.289563853" watchObservedRunningTime="2025-09-29 09:40:07.439895063 +0000 UTC m=+143.295823091" Sep 29 09:40:07 crc kubenswrapper[4991]: W0929 09:40:07.511315 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dc9e79c_6e87_45aa_8285_e9a9737c54f3.slice/crio-b9bd3521d264a7d4695ef4706df07387d7e2cd9d43027142b3d82b7d4343d6fc WatchSource:0}: Error finding container b9bd3521d264a7d4695ef4706df07387d7e2cd9d43027142b3d82b7d4343d6fc: Status 404 returned error can't find the container with id b9bd3521d264a7d4695ef4706df07387d7e2cd9d43027142b3d82b7d4343d6fc Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.512779 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:07 crc kubenswrapper[4991]: E0929 09:40:07.514200 4991 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:08.014147762 +0000 UTC m=+143.870075790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.514378 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:07 crc kubenswrapper[4991]: E0929 09:40:07.514810 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:08.014801538 +0000 UTC m=+143.870729556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:07 crc kubenswrapper[4991]: W0929 09:40:07.571700 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a550587_a52e_4334_ad71_25c577afaaf4.slice/crio-8cfbac1b67703baed1254b4fd2755cf99bf4e7974245ffa6a2856545ca202738 WatchSource:0}: Error finding container 8cfbac1b67703baed1254b4fd2755cf99bf4e7974245ffa6a2856545ca202738: Status 404 returned error can't find the container with id 8cfbac1b67703baed1254b4fd2755cf99bf4e7974245ffa6a2856545ca202738 Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.573383 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zznwr"] Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.618449 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:07 crc kubenswrapper[4991]: E0929 09:40:07.618877 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-09-29 09:40:08.118854491 +0000 UTC m=+143.974782519 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.628788 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" podStartSLOduration=122.628755091 podStartE2EDuration="2m2.628755091s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:07.623524529 +0000 UTC m=+143.479452557" watchObservedRunningTime="2025-09-29 09:40:07.628755091 +0000 UTC m=+143.484683119" Sep 29 09:40:07 crc kubenswrapper[4991]: W0929 09:40:07.673752 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod990a42a8_12fc_4a01_b6ec_a1f6b324cf05.slice/crio-286ed8c06f77a7c3954171fc7ffd296fc7802f64090345650ebce5b42a03020e WatchSource:0}: Error finding container 286ed8c06f77a7c3954171fc7ffd296fc7802f64090345650ebce5b42a03020e: Status 404 returned error can't find the container with id 286ed8c06f77a7c3954171fc7ffd296fc7802f64090345650ebce5b42a03020e Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.692070 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-x6kbg"] Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.706873 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mb78"] Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.713973 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gm965"] Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.719984 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:07 crc kubenswrapper[4991]: E0929 09:40:07.720653 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:08.220634605 +0000 UTC m=+144.076562633 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.803845 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2k9w4" podStartSLOduration=123.80381782 podStartE2EDuration="2m3.80381782s" podCreationTimestamp="2025-09-29 09:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:07.801640445 +0000 UTC m=+143.657568483" watchObservedRunningTime="2025-09-29 09:40:07.80381782 +0000 UTC m=+143.659745848" Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.821791 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:07 crc kubenswrapper[4991]: E0929 09:40:07.822572 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:08.322526443 +0000 UTC m=+144.178454471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.909552 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-78v2g" podStartSLOduration=123.909532394 podStartE2EDuration="2m3.909532394s" podCreationTimestamp="2025-09-29 09:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:07.843562235 +0000 UTC m=+143.699490263" watchObservedRunningTime="2025-09-29 09:40:07.909532394 +0000 UTC m=+143.765460412" Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.924283 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:07 crc kubenswrapper[4991]: E0929 09:40:07.924774 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:08.424750589 +0000 UTC m=+144.280678617 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.952101 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 09:40:07 crc kubenswrapper[4991]: I0929 09:40:07.952158 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.004284 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mb78" event={"ID":"f8cdd615-3c27-4c6e-bcd0-24c19d9db1b1","Type":"ContainerStarted","Data":"0a643627b7492c51975b3932e352177207afe97c1974c9e56b01ba1222724d91"} Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.016774 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdthv" event={"ID":"2a550587-a52e-4334-ad71-25c577afaaf4","Type":"ContainerStarted","Data":"8cfbac1b67703baed1254b4fd2755cf99bf4e7974245ffa6a2856545ca202738"} Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.018030 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2tsv6" event={"ID":"990a42a8-12fc-4a01-b6ec-a1f6b324cf05","Type":"ContainerStarted","Data":"286ed8c06f77a7c3954171fc7ffd296fc7802f64090345650ebce5b42a03020e"} Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.026011 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vfsmj" event={"ID":"875831c4-1354-40bb-9b36-f6f2a6b6d79c","Type":"ContainerStarted","Data":"6d3971b0fc4247d8b5c20805601d703a7cbbd21430d5d2cc3a92d2a51623bbe5"} Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.026707 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:08 crc kubenswrapper[4991]: E0929 09:40:08.029482 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:08.529449488 +0000 UTC m=+144.385377506 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.035376 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6w94h" event={"ID":"3c8ca796-c289-480d-b50b-ca8180f0da30","Type":"ContainerStarted","Data":"663947016acea0a79b6084aea828e6c169f7b631414fbe54be76044d7635f762"} Sep 29 09:40:08 crc kubenswrapper[4991]: W0929 09:40:08.043881 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod877cd408_2088_468f_bff8_58c539e077a9.slice/crio-1b7df482bc4637302c71d2d637936d68bbc402d6116c270c6bb48c49e6ded183 WatchSource:0}: Error finding container 1b7df482bc4637302c71d2d637936d68bbc402d6116c270c6bb48c49e6ded183: Status 404 returned error can't find the container with id 1b7df482bc4637302c71d2d637936d68bbc402d6116c270c6bb48c49e6ded183 Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.068837 4991 generic.go:334] "Generic (PLEG): container finished" podID="7c72b3f6-dc79-4c2c-a0e8-44c41ff60370" containerID="938265afaee1898230577eb3a5bac437562b0afd38a4f6eb3d0705771916a87f" exitCode=0 Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.068991 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" event={"ID":"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370","Type":"ContainerDied","Data":"938265afaee1898230577eb3a5bac437562b0afd38a4f6eb3d0705771916a87f"} Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.091102 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kv7nf" event={"ID":"91e5a020-6f2d-4996-8af6-8f5541ce12b1","Type":"ContainerStarted","Data":"bb33f8fc00b1ee5e9fd4833f0383c759b2c927cafadca66cdf4e9fe6611cad69"} Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.099869 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-jqqrh" podStartSLOduration=123.099847189 podStartE2EDuration="2m3.099847189s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:08.098488054 +0000 UTC m=+143.954416082" watchObservedRunningTime="2025-09-29 09:40:08.099847189 +0000 UTC m=+143.955775217" Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.131880 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:08 crc kubenswrapper[4991]: E0929 09:40:08.132334 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-09-29 09:40:08.63231909 +0000 UTC m=+144.488247118 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.150305 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bsdb4" podStartSLOduration=123.150278735 podStartE2EDuration="2m3.150278735s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:08.142293693 +0000 UTC m=+143.998221741" watchObservedRunningTime="2025-09-29 09:40:08.150278735 +0000 UTC m=+144.006206763" Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.174797 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xvz8h" event={"ID":"73a7f61a-69f4-4f78-8f1e-1351448b129a","Type":"ContainerStarted","Data":"1fa282c172686581158d4187c42ad691fdd0746fa36101b7fb19b5c90508f9f7"} Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.194577 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-r4x2n" event={"ID":"34f20ab2-1d2b-4a17-923b-ad9151c86dcf","Type":"ContainerStarted","Data":"53cf469071b8ff6affc8538326e6f1b621699270c8a735e60f8d4f122e6649c0"} Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.200282 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fr8gc"] Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.234059 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:08 crc kubenswrapper[4991]: E0929 09:40:08.236041 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:08.736010444 +0000 UTC m=+144.591938472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.253189 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4v4rm"] Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.263233 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vql8m" event={"ID":"feed9982-9c7d-445a-a105-02b3169809d1","Type":"ContainerStarted","Data":"cf48699699c6f679ce7e7090645432ed77fcfe75bf0d931df07cd5bfdc8ac23d"} Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.264589 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9wn" event={"ID":"385e20fa-2a7a-4ab3-97e6-b59a0a58cbe3","Type":"ContainerStarted","Data":"4063468e8ff0276e662129afc45ced923992c54e235880d23338522cb3c35079"} Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.272086 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2t8m7" event={"ID":"966cac4a-795f-4f46-80c0-a788d0eda011","Type":"ContainerStarted","Data":"fe6f743ba17d9964c1a5595329533472968311d41a272d09b912d94725eb0583"} Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.275753 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7fnch" event={"ID":"66ebaf9a-4d17-415b-8e97-87e85c5b287e","Type":"ContainerStarted","Data":"3245b474399a12c6b92293c11fb24bbc99e642443ef004db6dc10d2b559cfee8"} Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.279275 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" event={"ID":"259d860c-d08f-4753-9e8b-f059fea942f5","Type":"ContainerStarted","Data":"8ea788aaa0b3d480d665da9101a54e8da321132758dfc0ce0096ae87736a9f70"} Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.280693 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.285856 4991 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-q6v2d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.285923 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" podUID="259d860c-d08f-4753-9e8b-f059fea942f5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.286548 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-62rs2"] Sep 29 
09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.312839 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7wzsh" event={"ID":"f73e1250-00a9-4975-9b10-21b456ad2ed0","Type":"ContainerStarted","Data":"ecd77ed80ca26e3f2d0d8c5c9e8c53f848e0cc6c3a24f5ff7f911cfbe34ce48d"} Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.312909 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7wzsh" Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.316629 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l2tz9" event={"ID":"3dc9e79c-6e87-45aa-8285-e9a9737c54f3","Type":"ContainerStarted","Data":"b9bd3521d264a7d4695ef4706df07387d7e2cd9d43027142b3d82b7d4343d6fc"} Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.325001 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-d5tqb"] Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.328488 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wcmnr" event={"ID":"c975ee4e-229f-417c-ae32-d56eae89c4c1","Type":"ContainerStarted","Data":"98206bcd80156edbe12d1b3fe575b7258e57e4099aa09ee4fc4b2aad5920a233"} Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.339496 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:08 crc kubenswrapper[4991]: E0929 09:40:08.340039 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:08.840026575 +0000 UTC m=+144.695954603 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.343083 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpc86" event={"ID":"1cdcdd72-7bdf-4591-a4fe-9135d202369d","Type":"ContainerStarted","Data":"ad7f6b442e50070e54166d91722f38fd5454f487f03c2fbf919be996c907bf8d"} Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.380846 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7wzsh" Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.381888 4991 patch_prober.go:28] interesting pod/downloads-7954f5f757-jqqrh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.381935 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jqqrh" podUID="d949b064-ca4d-454e-a0d9-2aa9dd40d4e1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.394412 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5xs8f"] Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.407479 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz"] Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.423477 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.441332 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q"] Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.444260 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:08 crc kubenswrapper[4991]: E0929 09:40:08.465770 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:08.965685654 +0000 UTC m=+144.821613682 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.469067 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-m5krf"] Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.471210 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfr46"] Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.491149 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ftbbj"] Sep 29 09:40:08 crc kubenswrapper[4991]: W0929 09:40:08.516299 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0374dfd2_5829_4d52_bf0d_5aaafde9b773.slice/crio-6a3715caab294d5d9ed1d90995b2cc88bc1382261ad961817b26401844580e33 WatchSource:0}: Error finding container 6a3715caab294d5d9ed1d90995b2cc88bc1382261ad961817b26401844580e33: Status 404 returned error can't find the container with id 6a3715caab294d5d9ed1d90995b2cc88bc1382261ad961817b26401844580e33 Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.537593 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpqf9"] Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.551520 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xvz8h" Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.554807 4991 patch_prober.go:28] interesting pod/router-default-5444994796-xvz8h container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.554885 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xvz8h" podUID="73a7f61a-69f4-4f78-8f1e-1351448b129a" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.566509 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:08 crc kubenswrapper[4991]: E0929 09:40:08.566934 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:09.066918175 +0000 UTC m=+144.922846203 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:08 crc kubenswrapper[4991]: W0929 09:40:08.608375 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d5954e6_eb2d_45f0_a646_0a576432f542.slice/crio-e169ebd0e34d5b794f71a66b94e3f1fba14b84739467fe1184bdbcaa5082354f WatchSource:0}: Error finding container e169ebd0e34d5b794f71a66b94e3f1fba14b84739467fe1184bdbcaa5082354f: Status 404 returned error can't find the container with id e169ebd0e34d5b794f71a66b94e3f1fba14b84739467fe1184bdbcaa5082354f Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.628065 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" podStartSLOduration=123.628032851 podStartE2EDuration="2m3.628032851s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:08.62641437 +0000 UTC m=+144.482342408" watchObservedRunningTime="2025-09-29 09:40:08.628032851 +0000 UTC m=+144.483960879" Sep 29 09:40:08 crc kubenswrapper[4991]: W0929 09:40:08.653146 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbde42bc_f184_4f94_9ba1_5f63159b1562.slice/crio-99f2ceb18cbd97b5d8000d692697546535d7a5d6c90d99d0141687d1df6e8c40 WatchSource:0}: Error finding container 99f2ceb18cbd97b5d8000d692697546535d7a5d6c90d99d0141687d1df6e8c40: Status 404 returned error can't find the container with id 99f2ceb18cbd97b5d8000d692697546535d7a5d6c90d99d0141687d1df6e8c40 Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.667071 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:08 crc kubenswrapper[4991]: E0929 09:40:08.667462 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:09.167415057 +0000 UTC m=+145.023343085 (durationBeforeRetry 500ms). 
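The "Failed to process watch event ... can't find the container with id ..." warnings above look like a startup race rather than a real fault: the container's cgroup appears in the watch stream before the runtime can describe the container, so the first lookup 404s and the container is picked up on a later housekeeping pass. A small sketch of tolerating that race, with hypothetical names (lookupContainer, errNotFound):

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("can't find the container with id")

// lookupContainer stands in for querying the runtime about a container
// that just appeared in the cgroup watch stream. During pod startup this
// can race: the cgroup exists before the runtime can report the container.
func lookupContainer(id string, known map[string]bool) error {
	if !known[id] {
		return fmt.Errorf("%w %s", errNotFound, id)
	}
	return nil
}

func main() {
	known := map[string]bool{} // runtime hasn't reported the container yet
	id := "6a3715caab294d5d9ed1d90995b2cc88bc1382261ad961817b26401844580e33"

	// The watcher logs the miss and moves on; the container shows up on a
	// later pass once the runtime knows it (its ContainerStarted PLEG
	// event appears below).
	if err := lookupContainer(id, known); err != nil {
		fmt.Println("Failed to process watch event:", err)
	}
}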
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.667641 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:08 crc kubenswrapper[4991]: E0929 09:40:08.668303 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:09.16829319 +0000 UTC m=+145.024221208 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.681357 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kv7nf" podStartSLOduration=124.681329739 podStartE2EDuration="2m4.681329739s" podCreationTimestamp="2025-09-29 09:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:08.679189195 +0000 UTC m=+144.535117223" watchObservedRunningTime="2025-09-29 09:40:08.681329739 +0000 UTC m=+144.537257767" Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.764492 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7wzsh" podStartSLOduration=123.764460072 podStartE2EDuration="2m3.764460072s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:08.762066772 +0000 UTC m=+144.617994800" watchObservedRunningTime="2025-09-29 09:40:08.764460072 +0000 UTC m=+144.620388110" Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.770616 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:08 crc kubenswrapper[4991]: E0929 09:40:08.771259 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:09.271238034 +0000 UTC m=+145.127166062 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.824223 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx9wn" podStartSLOduration=123.824193784 podStartE2EDuration="2m3.824193784s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:08.823521397 +0000 UTC m=+144.679449425" watchObservedRunningTime="2025-09-29 09:40:08.824193784 +0000 UTC m=+144.680121812" Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.824417 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xvz8h" podStartSLOduration=123.824413709 podStartE2EDuration="2m3.824413709s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:08.798238237 +0000 UTC m=+144.654166475" watchObservedRunningTime="2025-09-29 09:40:08.824413709 +0000 UTC m=+144.680341737" Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.872306 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:08 crc kubenswrapper[4991]: E0929 09:40:08.872794 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:09.372769992 +0000 UTC m=+145.228698020 (durationBeforeRetry 500ms). 
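The pod_startup_latency_tracker lines are simple arithmetic over the timestamps they print: for kube-controller-manager-operator above, podStartSLOduration=123.824193784s is exactly watchObservedRunningTime (09:40:08.824193784) minus podCreationTimestamp (09:38:05), i.e. the reported 2m3.824193784s, and the zeroed firstStartedPulling/lastFinishedPulling values ("0001-01-01 00:00:00") indicate no image pull was observed, so no pull window is involved. A quick check of that subtraction in Go, using the values copied from the line above:

package main

import (
	"fmt"
	"time"
)

func mustParse(layout, v string) time.Time {
	t, err := time.Parse(layout, v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the kube-controller-manager-operator line.
	created := mustParse("2006-01-02 15:04:05 -0700 MST",
		"2025-09-29 09:38:05 +0000 UTC")
	observed := mustParse("2006-01-02 15:04:05.000000000 -0700 MST",
		"2025-09-29 09:40:08.824193784 +0000 UTC")

	// Prints 2m3.824193784s, the reported podStartSLOduration.
	fmt.Println(observed.Sub(created))
}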
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:08 crc kubenswrapper[4991]: I0929 09:40:08.987804 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:08 crc kubenswrapper[4991]: E0929 09:40:08.988757 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:09.488734686 +0000 UTC m=+145.344662714 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.090060 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:09 crc kubenswrapper[4991]: E0929 09:40:09.090553 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:09.590534141 +0000 UTC m=+145.446462169 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.190730 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:09 crc kubenswrapper[4991]: E0929 09:40:09.191313 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:09.69129009 +0000 UTC m=+145.547218118 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.293846 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:09 crc kubenswrapper[4991]: E0929 09:40:09.294523 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:09.794497981 +0000 UTC m=+145.650426009 (durationBeforeRetry 500ms). 
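Each failed volume operation is parked with a "No retries permitted until ..." deadline 500ms out, while the volume reconciler keeps waking on its own short sync period (the reconciler_common lines above land roughly every 100ms); most wakeups are skipped by the deadline, so the same error repeats about twice a second until the driver registers. A sketch of that retry gate under those assumed periods (retryGate is illustrative; the real backoff can grow on repeated failures, though it stays at 500ms throughout this log):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryGate parks an operation key until a deadline after a failure,
// mirroring the "No retries permitted until ..." messages above.
type retryGate struct {
	notBefore map[string]time.Time
}

func (g *retryGate) mayRetry(key string, now time.Time) bool {
	return now.After(g.notBefore[key])
}

func (g *retryGate) recordFailure(key string, now time.Time, wait time.Duration) {
	g.notBefore[key] = now.Add(wait)
}

func main() {
	g := &retryGate{notBefore: map[string]time.Time{}}
	key := "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"
	errNotRegistered := errors.New("driver not registered yet")

	// The sync loop wakes far more often than the 500ms retry window, so
	// most wakeups are no-ops and the error repeats roughly every 500ms.
	for i := 0; i < 8; i++ {
		now := time.Now()
		if g.mayRetry(key, now) {
			fmt.Printf("%s attempt -> %v\n", now.Format("15:04:05.000"), errNotRegistered)
			g.recordFailure(key, now, 500*time.Millisecond)
		}
		time.Sleep(100 * time.Millisecond)
	}
}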
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.370974 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz" event={"ID":"5519a044-7a20-4a2a-ab24-7b09bfdf59bd","Type":"ContainerStarted","Data":"f15e7652e1760a553a30e6ec4bebd12fb707aff012d5e864e17af8beecb4f8ce"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.375040 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4v4rm" event={"ID":"0374dfd2-5829-4d52-bf0d-5aaafde9b773","Type":"ContainerStarted","Data":"6a3715caab294d5d9ed1d90995b2cc88bc1382261ad961817b26401844580e33"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.377557 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" event={"ID":"e34824e5-4b59-4bae-b803-39679d6eab2e","Type":"ContainerStarted","Data":"c98c018085ea3f2ffc97f7f9ff274702e3d66d1b7250060143699b62ec2957af"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.395882 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:09 crc kubenswrapper[4991]: E0929 09:40:09.396132 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:09.896097792 +0000 UTC m=+145.752025810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.396218 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:09 crc kubenswrapper[4991]: E0929 09:40:09.396890 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:09.896883402 +0000 UTC m=+145.752811430 (durationBeforeRetry 500ms). 
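The "SyncLoop (PLEG): event for pod" lines come from the Pod Lifecycle Event Generator: each event pairs a pod UID with a type such as ContainerStarted and the runtime's container (or sandbox) ID, and the sync loop re-syncs the pod in response. A minimal sketch of that event shape and dispatch, with field names mirroring the log's JSON rather than kubelet internals:

package main

import "fmt"

// podLifecycleEvent mirrors the payload printed in the log:
// {"ID": <pod UID>, "Type": "ContainerStarted", "Data": <container ID>}.
type podLifecycleEvent struct {
	ID   string // pod UID
	Type string // e.g. ContainerStarted, ContainerDied
	Data string // container or sandbox ID from the runtime
}

func handle(ev podLifecycleEvent) {
	switch ev.Type {
	case "ContainerStarted":
		fmt.Printf("re-sync pod %s: container %s is running\n", ev.ID, ev.Data)
	case "ContainerDied":
		fmt.Printf("re-sync pod %s: container %s exited\n", ev.ID, ev.Data)
	default:
		fmt.Printf("ignore %s for pod %s\n", ev.Type, ev.ID)
	}
}

func main() {
	// Values copied from the collect-profiles event above.
	handle(podLifecycleEvent{
		ID:   "5519a044-7a20-4a2a-ab24-7b09bfdf59bd",
		Type: "ContainerStarted",
		Data: "f15e7652e1760a553a30e6ec4bebd12fb707aff012d5e864e17af8beecb4f8ce",
	})
}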
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.400108 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" event={"ID":"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370","Type":"ContainerStarted","Data":"6e738aebc57e6311fc0936ca5ae82f5db27af46d9a040ee5f8f421d569908938"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.409318 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-62rs2" event={"ID":"011674e5-30e1-4632-aee7-d0dbb06e6824","Type":"ContainerStarted","Data":"d659fb45b37c8e147e481f1da0f937520f3514c071d130486b2c751ea2fbcaf1"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.410122 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-d2gfk" podStartSLOduration=124.410097476 podStartE2EDuration="2m4.410097476s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:09.409711996 +0000 UTC m=+145.265640024" watchObservedRunningTime="2025-09-29 09:40:09.410097476 +0000 UTC m=+145.266025504" Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.416745 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.416831 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.420294 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-d5tqb" event={"ID":"8cc54a59-b61b-4ab8-b372-ef560a0924b2","Type":"ContainerStarted","Data":"d4c343f8dfd1e99efc117b8dd39225ac646fe0fd48d89a7f67a3b1706f7bbca4"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.429875 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.432522 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wcmnr" event={"ID":"c975ee4e-229f-417c-ae32-d56eae89c4c1","Type":"ContainerStarted","Data":"530bcb9f3489887784798cd3468397afa64d8d506060227f7a7bff99ec763e68"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.443263 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" event={"ID":"c06a05d9-0a6c-4e0a-88e8-023e829f245a","Type":"ContainerStarted","Data":"e219100c44ef70245302ef41cf1acbe18226535f9c2686e1312b56dc43aa4d6c"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.444297 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.447881 4991 patch_prober.go:28] interesting 
pod/marketplace-operator-79b997595-hz2qk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.447927 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" podUID="c06a05d9-0a6c-4e0a-88e8-023e829f245a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.452296 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpqf9" event={"ID":"fbde42bc-f184-4f94-9ba1-5f63159b1562","Type":"ContainerStarted","Data":"99f2ceb18cbd97b5d8000d692697546535d7a5d6c90d99d0141687d1df6e8c40"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.454713 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vfsmj" event={"ID":"875831c4-1354-40bb-9b36-f6f2a6b6d79c","Type":"ContainerStarted","Data":"c37087c724486f7f6e58d58b13ae8376e4b4b4704f212173b0b76b4d06be2dd8"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.456088 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gm965" event={"ID":"877cd408-2088-468f-bff8-58c539e077a9","Type":"ContainerStarted","Data":"1b7df482bc4637302c71d2d637936d68bbc402d6116c270c6bb48c49e6ded183"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.468296 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xvz8h" event={"ID":"73a7f61a-69f4-4f78-8f1e-1351448b129a","Type":"ContainerStarted","Data":"fc1b53a23d6609ec73aa150364d77ca63c141826b91994fac1213d5b8ecbbe99"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.475639 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l2tz9" event={"ID":"3dc9e79c-6e87-45aa-8285-e9a9737c54f3","Type":"ContainerStarted","Data":"6b948030132d11a0b5fb7907891e7b27e20caf5d19693c6258d4482348dbeae8"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.497831 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:09 crc kubenswrapper[4991]: E0929 09:40:09.499600 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:09.999565989 +0000 UTC m=+145.855494017 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.504242 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-r4x2n" event={"ID":"34f20ab2-1d2b-4a17-923b-ad9151c86dcf","Type":"ContainerStarted","Data":"8449796554bb5c547ed25122d7c9dda231905302dcaed55bca94c0bfdb1b9f12"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.553152 4991 patch_prober.go:28] interesting pod/router-default-5444994796-xvz8h container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.553492 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xvz8h" podUID="73a7f61a-69f4-4f78-8f1e-1351448b129a" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.589056 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xs8f" event={"ID":"31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe","Type":"ContainerStarted","Data":"db42ff740bbe729b16d453b9a646c19becc406acc8a75fce41d5df4d03e629df"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.602069 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:09 crc kubenswrapper[4991]: E0929 09:40:09.603986 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:10.103840897 +0000 UTC m=+145.959768925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.624900 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfr46" event={"ID":"cdc03f3a-9fcb-400d-baec-f95daa142550","Type":"ContainerStarted","Data":"db6bcf16d5eb32fabf41e852fd9f3265b5e7528437aea4af677e8d1f3096b1a6"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.660353 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" event={"ID":"259d860c-d08f-4753-9e8b-f059fea942f5","Type":"ContainerStarted","Data":"71cea7e136b315a0dc5ed00046b2fa0d482b54a71a8a00d022ec5bc110e7ae20"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.668284 4991 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-q6v2d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.668557 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" podUID="259d860c-d08f-4753-9e8b-f059fea942f5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.673685 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m5krf" event={"ID":"6d5954e6-eb2d-45f0-a646-0a576432f542","Type":"ContainerStarted","Data":"e169ebd0e34d5b794f71a66b94e3f1fba14b84739467fe1184bdbcaa5082354f"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.683449 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6w94h" event={"ID":"3c8ca796-c289-480d-b50b-ca8180f0da30","Type":"ContainerStarted","Data":"32dd914c60bf2ccfff9d344e780fffe94487216036c7bdf1fa154422da6af9a7"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.696737 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2t8m7" event={"ID":"966cac4a-795f-4f46-80c0-a788d0eda011","Type":"ContainerStarted","Data":"3ef5425c3e55e4ece06c73f1079bef5f12344565f54c29038bb974d3d7cfd8c5"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.708866 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:09 crc kubenswrapper[4991]: E0929 09:40:09.709433 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:10.209406988 +0000 UTC m=+146.065335016 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.717319 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ftbbj" event={"ID":"4462bce4-6cf5-4d65-890e-4d1c8b0523fa","Type":"ContainerStarted","Data":"e817a05311e8e12f94a643e615e8c529c80d8494f9ee11dc489e1265a5af9fce"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.728716 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" podStartSLOduration=124.728683096 podStartE2EDuration="2m4.728683096s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:09.499940859 +0000 UTC m=+145.355868897" watchObservedRunningTime="2025-09-29 09:40:09.728683096 +0000 UTC m=+145.584611124" Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.729131 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-6w94h" podStartSLOduration=6.729125087 podStartE2EDuration="6.729125087s" podCreationTimestamp="2025-09-29 09:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:09.728603874 +0000 UTC m=+145.584531922" watchObservedRunningTime="2025-09-29 09:40:09.729125087 +0000 UTC m=+145.585053125" Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.729337 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x6kbg" event={"ID":"a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1","Type":"ContainerStarted","Data":"8c1c8d604e3b4f5334b18703fbb2fba3b046d3be63b4d8522a8f07d6a0b9c6d6"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.777888 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7fnch" event={"ID":"66ebaf9a-4d17-415b-8e97-87e85c5b287e","Type":"ContainerStarted","Data":"39e5d93d40ce8b2a5e9070a6dc69ec294499d5f5e8cf82d6d033a3895691940a"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.806184 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpc86" event={"ID":"1cdcdd72-7bdf-4591-a4fe-9135d202369d","Type":"ContainerStarted","Data":"5fd3c73d096a8d2935f6e07eed87ad15d8ba41cd68a790274603e4a9b57a6cf6"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.809050 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-7fnch" podStartSLOduration=124.809025178 podStartE2EDuration="2m4.809025178s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:09.805423627 +0000 UTC m=+145.661351645" watchObservedRunningTime="2025-09-29 09:40:09.809025178 +0000 UTC m=+145.664953196" Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.812273 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:09 crc kubenswrapper[4991]: E0929 09:40:09.813827 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:10.313806789 +0000 UTC m=+146.169734817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.841965 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpc86" podStartSLOduration=124.84192881 podStartE2EDuration="2m4.84192881s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:09.841362586 +0000 UTC m=+145.697290614" watchObservedRunningTime="2025-09-29 09:40:09.84192881 +0000 UTC m=+145.697856838" Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.846842 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdthv" event={"ID":"2a550587-a52e-4334-ad71-25c577afaaf4","Type":"ContainerStarted","Data":"7e02472c65cfe7a045aea41ff1b98f8cee44b7b50e46783d1438c30b4eb47acb"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.888161 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hdthv" podStartSLOduration=124.888139499 podStartE2EDuration="2m4.888139499s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:09.887857062 +0000 UTC m=+145.743785090" watchObservedRunningTime="2025-09-29 09:40:09.888139499 +0000 UTC m=+145.744067527" Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.889158 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2tsv6" event={"ID":"990a42a8-12fc-4a01-b6ec-a1f6b324cf05","Type":"ContainerStarted","Data":"24a67a8fba99db8c6f2899b9af4c2fce6328a5371f339e8c9c8fc0b44f8e5cee"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.917906 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:09 crc kubenswrapper[4991]: E0929 09:40:09.918077 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:10.418042646 +0000 UTC m=+146.273970674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.918297 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:09 crc kubenswrapper[4991]: E0929 09:40:09.921868 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:10.42175187 +0000 UTC m=+146.277679898 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.927275 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fr8gc" event={"ID":"db7b6892-79f6-40a8-ac67-becd46acea4b","Type":"ContainerStarted","Data":"a24bb71169e713ac468defbfbc7eef1202e3ba551327772765506c59f5624bb4"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.960890 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q" event={"ID":"8975c94b-ba70-41a8-88de-e6d0f78b8a01","Type":"ContainerStarted","Data":"f393d45e84332fef984a9825402b1b5e545ef2d4f6e14202a076a8c27a00ced3"} Sep 29 09:40:09 crc kubenswrapper[4991]: I0929 09:40:09.996590 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zznwr" event={"ID":"10127330-a343-4a95-99b1-ec121e70f79f","Type":"ContainerStarted","Data":"3cabe3acc2322ae7099e8200b27585f2f931ca23fe53adce54ae1830ef1081e0"} Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.000090 4991 patch_prober.go:28] interesting pod/downloads-7954f5f757-jqqrh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.000161 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jqqrh" podUID="d949b064-ca4d-454e-a0d9-2aa9dd40d4e1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.020367 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:10 crc kubenswrapper[4991]: E0929 09:40:10.020806 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:10.520786265 +0000 UTC m=+146.376714293 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.024238 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6zph" Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.096728 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zznwr" podStartSLOduration=125.096692515 podStartE2EDuration="2m5.096692515s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:10.028748597 +0000 UTC m=+145.884676625" watchObservedRunningTime="2025-09-29 09:40:10.096692515 +0000 UTC m=+145.952620543" Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.112849 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v7dpv"] Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.124319 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:10 crc kubenswrapper[4991]: E0929 09:40:10.137759 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:10.637729643 +0000 UTC m=+146.493657671 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.147873 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v7dpv" Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.167252 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.178002 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v7dpv"] Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.226964 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:10 crc kubenswrapper[4991]: E0929 09:40:10.227355 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:10.72733953 +0000 UTC m=+146.583267548 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.293292 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dt2jf"] Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.295155 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dt2jf" Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.300307 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.318729 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dt2jf"] Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.338915 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ce7dbc-9efd-42fc-9539-85b6a48aeec4-utilities\") pod \"certified-operators-v7dpv\" (UID: \"26ce7dbc-9efd-42fc-9539-85b6a48aeec4\") " pod="openshift-marketplace/certified-operators-v7dpv" Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.339006 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.339051 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ce7dbc-9efd-42fc-9539-85b6a48aeec4-catalog-content\") pod \"certified-operators-v7dpv\" (UID: \"26ce7dbc-9efd-42fc-9539-85b6a48aeec4\") " pod="openshift-marketplace/certified-operators-v7dpv" Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.339096 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9chx\" (UniqueName: \"kubernetes.io/projected/26ce7dbc-9efd-42fc-9539-85b6a48aeec4-kube-api-access-h9chx\") pod \"certified-operators-v7dpv\" (UID: \"26ce7dbc-9efd-42fc-9539-85b6a48aeec4\") " pod="openshift-marketplace/certified-operators-v7dpv" Sep 29 09:40:10 crc kubenswrapper[4991]: E0929 09:40:10.339458 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:10.839444566 +0000 UTC m=+146.695372584 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.440044 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.440305 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab79a2c-0c2e-4112-b5b5-1a421b6a0703-catalog-content\") pod \"community-operators-dt2jf\" (UID: \"bab79a2c-0c2e-4112-b5b5-1a421b6a0703\") " pod="openshift-marketplace/community-operators-dt2jf"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.440365 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ce7dbc-9efd-42fc-9539-85b6a48aeec4-utilities\") pod \"certified-operators-v7dpv\" (UID: \"26ce7dbc-9efd-42fc-9539-85b6a48aeec4\") " pod="openshift-marketplace/certified-operators-v7dpv"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.440442 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ce7dbc-9efd-42fc-9539-85b6a48aeec4-catalog-content\") pod \"certified-operators-v7dpv\" (UID: \"26ce7dbc-9efd-42fc-9539-85b6a48aeec4\") " pod="openshift-marketplace/certified-operators-v7dpv"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.440499 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9chx\" (UniqueName: \"kubernetes.io/projected/26ce7dbc-9efd-42fc-9539-85b6a48aeec4-kube-api-access-h9chx\") pod \"certified-operators-v7dpv\" (UID: \"26ce7dbc-9efd-42fc-9539-85b6a48aeec4\") " pod="openshift-marketplace/certified-operators-v7dpv"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.440525 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmg6n\" (UniqueName: \"kubernetes.io/projected/bab79a2c-0c2e-4112-b5b5-1a421b6a0703-kube-api-access-zmg6n\") pod \"community-operators-dt2jf\" (UID: \"bab79a2c-0c2e-4112-b5b5-1a421b6a0703\") " pod="openshift-marketplace/community-operators-dt2jf"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.440565 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab79a2c-0c2e-4112-b5b5-1a421b6a0703-utilities\") pod \"community-operators-dt2jf\" (UID: \"bab79a2c-0c2e-4112-b5b5-1a421b6a0703\") " pod="openshift-marketplace/community-operators-dt2jf"
Sep 29 09:40:10 crc kubenswrapper[4991]: E0929 09:40:10.440701 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:10.940679667 +0000 UTC m=+146.796607695 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.441222 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ce7dbc-9efd-42fc-9539-85b6a48aeec4-utilities\") pod \"certified-operators-v7dpv\" (UID: \"26ce7dbc-9efd-42fc-9539-85b6a48aeec4\") " pod="openshift-marketplace/certified-operators-v7dpv"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.441500 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ce7dbc-9efd-42fc-9539-85b6a48aeec4-catalog-content\") pod \"certified-operators-v7dpv\" (UID: \"26ce7dbc-9efd-42fc-9539-85b6a48aeec4\") " pod="openshift-marketplace/certified-operators-v7dpv"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.454795 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hfcp2"]
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.456089 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hfcp2"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.481861 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hfcp2"]
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.542446 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab79a2c-0c2e-4112-b5b5-1a421b6a0703-utilities\") pod \"community-operators-dt2jf\" (UID: \"bab79a2c-0c2e-4112-b5b5-1a421b6a0703\") " pod="openshift-marketplace/community-operators-dt2jf"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.542865 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxw79\" (UniqueName: \"kubernetes.io/projected/78d83a54-a82d-4b17-9d4b-7bc0c2850a9d-kube-api-access-vxw79\") pod \"certified-operators-hfcp2\" (UID: \"78d83a54-a82d-4b17-9d4b-7bc0c2850a9d\") " pod="openshift-marketplace/certified-operators-hfcp2"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.542892 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab79a2c-0c2e-4112-b5b5-1a421b6a0703-catalog-content\") pod \"community-operators-dt2jf\" (UID: \"bab79a2c-0c2e-4112-b5b5-1a421b6a0703\") " pod="openshift-marketplace/community-operators-dt2jf"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.542910 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78d83a54-a82d-4b17-9d4b-7bc0c2850a9d-catalog-content\") pod \"certified-operators-hfcp2\" (UID: \"78d83a54-a82d-4b17-9d4b-7bc0c2850a9d\") " pod="openshift-marketplace/certified-operators-hfcp2"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.542982 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.543022 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78d83a54-a82d-4b17-9d4b-7bc0c2850a9d-utilities\") pod \"certified-operators-hfcp2\" (UID: \"78d83a54-a82d-4b17-9d4b-7bc0c2850a9d\") " pod="openshift-marketplace/certified-operators-hfcp2"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.543071 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmg6n\" (UniqueName: \"kubernetes.io/projected/bab79a2c-0c2e-4112-b5b5-1a421b6a0703-kube-api-access-zmg6n\") pod \"community-operators-dt2jf\" (UID: \"bab79a2c-0c2e-4112-b5b5-1a421b6a0703\") " pod="openshift-marketplace/community-operators-dt2jf"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.543734 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab79a2c-0c2e-4112-b5b5-1a421b6a0703-utilities\") pod \"community-operators-dt2jf\" (UID: \"bab79a2c-0c2e-4112-b5b5-1a421b6a0703\") " pod="openshift-marketplace/community-operators-dt2jf"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.543996 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab79a2c-0c2e-4112-b5b5-1a421b6a0703-catalog-content\") pod \"community-operators-dt2jf\" (UID: \"bab79a2c-0c2e-4112-b5b5-1a421b6a0703\") " pod="openshift-marketplace/community-operators-dt2jf"
Sep 29 09:40:10 crc kubenswrapper[4991]: E0929 09:40:10.544348 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:11.044331199 +0000 UTC m=+146.900259227 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.547022 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9chx\" (UniqueName: \"kubernetes.io/projected/26ce7dbc-9efd-42fc-9539-85b6a48aeec4-kube-api-access-h9chx\") pod \"certified-operators-v7dpv\" (UID: \"26ce7dbc-9efd-42fc-9539-85b6a48aeec4\") " pod="openshift-marketplace/certified-operators-v7dpv"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.563631 4991 patch_prober.go:28] interesting pod/router-default-5444994796-xvz8h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Sep 29 09:40:10 crc kubenswrapper[4991]: [-]has-synced failed: reason withheld
Sep 29 09:40:10 crc kubenswrapper[4991]: [+]process-running ok
Sep 29 09:40:10 crc kubenswrapper[4991]: healthz check failed
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.563715 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xvz8h" podUID="73a7f61a-69f4-4f78-8f1e-1351448b129a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.566862 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v7dpv"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.640122 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmg6n\" (UniqueName: \"kubernetes.io/projected/bab79a2c-0c2e-4112-b5b5-1a421b6a0703-kube-api-access-zmg6n\") pod \"community-operators-dt2jf\" (UID: \"bab79a2c-0c2e-4112-b5b5-1a421b6a0703\") " pod="openshift-marketplace/community-operators-dt2jf"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.644314 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.644654 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxw79\" (UniqueName: \"kubernetes.io/projected/78d83a54-a82d-4b17-9d4b-7bc0c2850a9d-kube-api-access-vxw79\") pod \"certified-operators-hfcp2\" (UID: \"78d83a54-a82d-4b17-9d4b-7bc0c2850a9d\") " pod="openshift-marketplace/certified-operators-hfcp2"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.644694 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78d83a54-a82d-4b17-9d4b-7bc0c2850a9d-catalog-content\") pod \"certified-operators-hfcp2\" (UID: \"78d83a54-a82d-4b17-9d4b-7bc0c2850a9d\") " pod="openshift-marketplace/certified-operators-hfcp2"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.644772 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78d83a54-a82d-4b17-9d4b-7bc0c2850a9d-utilities\") pod \"certified-operators-hfcp2\" (UID: \"78d83a54-a82d-4b17-9d4b-7bc0c2850a9d\") " pod="openshift-marketplace/certified-operators-hfcp2"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.645379 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78d83a54-a82d-4b17-9d4b-7bc0c2850a9d-utilities\") pod \"certified-operators-hfcp2\" (UID: \"78d83a54-a82d-4b17-9d4b-7bc0c2850a9d\") " pod="openshift-marketplace/certified-operators-hfcp2"
Sep 29 09:40:10 crc kubenswrapper[4991]: E0929 09:40:10.645472 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:11.145450557 +0000 UTC m=+147.001378585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.646087 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78d83a54-a82d-4b17-9d4b-7bc0c2850a9d-catalog-content\") pod \"certified-operators-hfcp2\" (UID: \"78d83a54-a82d-4b17-9d4b-7bc0c2850a9d\") " pod="openshift-marketplace/certified-operators-hfcp2"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.668929 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dt2jf"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.732295 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t8fnv"]
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.733391 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t8fnv"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.749744 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:10 crc kubenswrapper[4991]: E0929 09:40:10.750261 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:11.250244578 +0000 UTC m=+147.106172606 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.768649 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxw79\" (UniqueName: \"kubernetes.io/projected/78d83a54-a82d-4b17-9d4b-7bc0c2850a9d-kube-api-access-vxw79\") pod \"certified-operators-hfcp2\" (UID: \"78d83a54-a82d-4b17-9d4b-7bc0c2850a9d\") " pod="openshift-marketplace/certified-operators-hfcp2"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.770913 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t8fnv"]
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.807627 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hfcp2"
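Every CSI failure in this stretch carries the same root cause: "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers". Before kubelet can issue MountDevice or TearDownAt it looks the driver up in its in-memory table of node plugins, which is populated only when the driver's node service registers over the kubelet plugin-registration socket (typically under /var/lib/kubelet/plugins_registry); here the hostpath provisioner has evidently not re-registered after the restart, so both the mount for image-registry-697d97f7c8-6g68t and the teardown for the departed pod UID 8f668bae-612b-4b75-9490-919e737c6a3b keep failing. Below is a minimal Go sketch of that lookup shape; the names driverRegistry and client are hypothetical, the real logic lives in kubelet's kubernetes.io/csi volume plugin.

```go
package main

import (
	"fmt"
	"sync"
)

// driverRegistry is a hypothetical stand-in for kubelet's in-memory
// map of registered CSI node plugins (driver name -> plugin socket).
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string
}

// client mimics the lookup newCsiDriverClient performs: if the driver
// never registered (or has not yet re-registered after a kubelet
// restart), every MountDevice/TearDownAt attempt fails up front.
func (r *driverRegistry) client(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	endpoint, ok := r.drivers[name]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return endpoint, nil
}

func main() {
	reg := &driverRegistry{drivers: map[string]string{}} // empty: node plugin not registered yet
	_, err := reg.client("kubevirt.io.hostpath-provisioner")
	fmt.Println(err) // same failure string repeated throughout this log
}
```

The errors are requeued rather than fatal: once the provisioner's node plugin registers, the same reconciler retries succeed.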
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.853130 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:10 crc kubenswrapper[4991]: E0929 09:40:10.853357 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:11.353335796 +0000 UTC m=+147.209263824 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.853578 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.853621 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37413ae-c89d-40f3-80a9-74ae2723f9bd-catalog-content\") pod \"community-operators-t8fnv\" (UID: \"f37413ae-c89d-40f3-80a9-74ae2723f9bd\") " pod="openshift-marketplace/community-operators-t8fnv"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.853664 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37413ae-c89d-40f3-80a9-74ae2723f9bd-utilities\") pod \"community-operators-t8fnv\" (UID: \"f37413ae-c89d-40f3-80a9-74ae2723f9bd\") " pod="openshift-marketplace/community-operators-t8fnv"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.853700 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggnsz\" (UniqueName: \"kubernetes.io/projected/f37413ae-c89d-40f3-80a9-74ae2723f9bd-kube-api-access-ggnsz\") pod \"community-operators-t8fnv\" (UID: \"f37413ae-c89d-40f3-80a9-74ae2723f9bd\") " pod="openshift-marketplace/community-operators-t8fnv"
Sep 29 09:40:10 crc kubenswrapper[4991]: E0929 09:40:10.854074 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:11.354064495 +0000 UTC m=+147.209992523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.957708 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.958051 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37413ae-c89d-40f3-80a9-74ae2723f9bd-utilities\") pod \"community-operators-t8fnv\" (UID: \"f37413ae-c89d-40f3-80a9-74ae2723f9bd\") " pod="openshift-marketplace/community-operators-t8fnv"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.958097 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggnsz\" (UniqueName: \"kubernetes.io/projected/f37413ae-c89d-40f3-80a9-74ae2723f9bd-kube-api-access-ggnsz\") pod \"community-operators-t8fnv\" (UID: \"f37413ae-c89d-40f3-80a9-74ae2723f9bd\") " pod="openshift-marketplace/community-operators-t8fnv"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.958225 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37413ae-c89d-40f3-80a9-74ae2723f9bd-catalog-content\") pod \"community-operators-t8fnv\" (UID: \"f37413ae-c89d-40f3-80a9-74ae2723f9bd\") " pod="openshift-marketplace/community-operators-t8fnv"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.958690 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37413ae-c89d-40f3-80a9-74ae2723f9bd-catalog-content\") pod \"community-operators-t8fnv\" (UID: \"f37413ae-c89d-40f3-80a9-74ae2723f9bd\") " pod="openshift-marketplace/community-operators-t8fnv"
Sep 29 09:40:10 crc kubenswrapper[4991]: I0929 09:40:10.959007 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37413ae-c89d-40f3-80a9-74ae2723f9bd-utilities\") pod \"community-operators-t8fnv\" (UID: \"f37413ae-c89d-40f3-80a9-74ae2723f9bd\") " pod="openshift-marketplace/community-operators-t8fnv"
Sep 29 09:40:10 crc kubenswrapper[4991]: E0929 09:40:10.959049 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:11.45901441 +0000 UTC m=+147.314942438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.033670 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggnsz\" (UniqueName: \"kubernetes.io/projected/f37413ae-c89d-40f3-80a9-74ae2723f9bd-kube-api-access-ggnsz\") pod \"community-operators-t8fnv\" (UID: \"f37413ae-c89d-40f3-80a9-74ae2723f9bd\") " pod="openshift-marketplace/community-operators-t8fnv"
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.060444 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:11 crc kubenswrapper[4991]: E0929 09:40:11.060763 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:11.560751733 +0000 UTC m=+147.416679761 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.081449 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfr46" event={"ID":"cdc03f3a-9fcb-400d-baec-f95daa142550","Type":"ContainerStarted","Data":"465a9ed5d51be1ad8a8a738ab26fe94c966674adea446a681ec68afbb6ed3da0"}
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.081518 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfr46" event={"ID":"cdc03f3a-9fcb-400d-baec-f95daa142550","Type":"ContainerStarted","Data":"d152c54bb4c5406dc05a100db76d2ed0f8f3efaba20209d2bd9c019c40adce35"}
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.112723 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t8fnv"
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.138437 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4v4rm" event={"ID":"0374dfd2-5829-4d52-bf0d-5aaafde9b773","Type":"ContainerStarted","Data":"430df0ee796e926fc62d58646e9cec7e145222476ef738bacd9e25411113b98b"}
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.140428 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-4v4rm"
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.143218 4991 patch_prober.go:28] interesting pod/console-operator-58897d9998-4v4rm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/readyz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.143315 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4v4rm" podUID="0374dfd2-5829-4d52-bf0d-5aaafde9b773" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/readyz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.161812 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:11 crc kubenswrapper[4991]: E0929 09:40:11.162064 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:11.662025105 +0000 UTC m=+147.517953134 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.162408 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:11 crc kubenswrapper[4991]: E0929 09:40:11.162982 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:11.662941009 +0000 UTC m=+147.518869037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.170355 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zznwr" event={"ID":"10127330-a343-4a95-99b1-ec121e70f79f","Type":"ContainerStarted","Data":"e7b0b9e8c1271fe07377ef2fbce934a9e4399c2aa76d2eb1418a929727d06424"}
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.175912 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-4v4rm" podStartSLOduration=126.175888816 podStartE2EDuration="2m6.175888816s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:11.175172648 +0000 UTC m=+147.031100676" watchObservedRunningTime="2025-09-29 09:40:11.175888816 +0000 UTC m=+147.031816854"
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.176327 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfr46" podStartSLOduration=126.176317437 podStartE2EDuration="2m6.176317437s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:11.130782365 +0000 UTC m=+146.986710393" watchObservedRunningTime="2025-09-29 09:40:11.176317437 +0000 UTC m=+147.032245475"
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.192398 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpqf9" event={"ID":"fbde42bc-f184-4f94-9ba1-5f63159b1562","Type":"ContainerStarted","Data":"77dfd887cfa6822f31c1aa9ca23599329f7772512343426b3d60ac57b734fb5d"}
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.237057 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-d5tqb" event={"ID":"8cc54a59-b61b-4ab8-b372-ef560a0924b2","Type":"ContainerStarted","Data":"910ef7741e64289337ea1d2e41b2c9b9b3102d76920f33299c07800f7b411d43"}
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.249512 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpqf9" podStartSLOduration=126.249491768 podStartE2EDuration="2m6.249491768s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:11.24285166 +0000 UTC m=+147.098779708" watchObservedRunningTime="2025-09-29 09:40:11.249491768 +0000 UTC m=+147.105419796"
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.293733 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:11 crc kubenswrapper[4991]: E0929 09:40:11.295881 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:11.795851841 +0000 UTC m=+147.651779869 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.323472 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x6kbg" event={"ID":"a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1","Type":"ContainerStarted","Data":"a45d9eb64807fdaaa076a6b8353bec9e9e00da823034b898504cbacc14af5cf4"}
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.323553 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x6kbg" event={"ID":"a5afe5c5-c08a-49c5-9c89-ffe29e5ec0a1","Type":"ContainerStarted","Data":"090e23137b181010b41624a7636d664be637b267c17ce1e6d1f37a68b390f327"}
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.331966 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-d5tqb" podStartSLOduration=8.331923294 podStartE2EDuration="8.331923294s" podCreationTimestamp="2025-09-29 09:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:11.325193883 +0000 UTC m=+147.181121911" watchObservedRunningTime="2025-09-29 09:40:11.331923294 +0000 UTC m=+147.187851322"
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.399114 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.402727 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l2tz9" event={"ID":"3dc9e79c-6e87-45aa-8285-e9a9737c54f3","Type":"ContainerStarted","Data":"ceef7ff812c47cea6a62cf485c44c38b608ae256a924ed44cc7acb1a4333c71e"}
Sep 29 09:40:11 crc kubenswrapper[4991]: E0929 09:40:11.417043 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:11.900732024 +0000 UTC m=+147.756660052 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.421985 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x6kbg" podStartSLOduration=126.421963361 podStartE2EDuration="2m6.421963361s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:11.421015988 +0000 UTC m=+147.276944016" watchObservedRunningTime="2025-09-29 09:40:11.421963361 +0000 UTC m=+147.277891389"
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.440984 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2t8m7" event={"ID":"966cac4a-795f-4f46-80c0-a788d0eda011","Type":"ContainerStarted","Data":"4556b43f96f30fcd7737ba55a58f27b0faabaea85c27b52054a42effcf46b4b2"}
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.459974 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mb78" event={"ID":"f8cdd615-3c27-4c6e-bcd0-24c19d9db1b1","Type":"ContainerStarted","Data":"344c23dde508827062e328c51e006250e788b949259f4460ab04bb6315500eda"}
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.461226 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mb78"
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.473255 4991 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5mb78 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.473328 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mb78" podUID="f8cdd615-3c27-4c6e-bcd0-24c19d9db1b1" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.474556 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l2tz9" podStartSLOduration=126.474532211 podStartE2EDuration="2m6.474532211s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:11.473932246 +0000 UTC m=+147.329860274" watchObservedRunningTime="2025-09-29 09:40:11.474532211 +0000 UTC m=+147.330460239"
Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.485057 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vql8m" event={"ID":"feed9982-9c7d-445a-a105-02b3169809d1","Type":"ContainerStarted","Data":"a50e45156c7191bbe36da73955a4152c24a69f7e08f36dfa05bc377976b3b5a7"}
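Each failed volume operation above is parked by nestedpendingoperations with a "No retries permitted until" deadline; in this log durationBeforeRetry stays at 500ms, so the reconciler re-attempts the same volume roughly twice per second until the driver shows up. Below is a toy sketch of that per-operation gate, assuming the fixed 500ms delay observed here; the names retryGate, markFailed, and mayRetry are hypothetical.

```go
package main

import (
	"fmt"
	"time"
)

// retryGate models the "No retries permitted until <t>" bookkeeping
// kept per {volumeName, podName} operation key.
type retryGate struct {
	delay time.Duration // durationBeforeRetry; fixed at 500ms in this log
	until time.Time     // no retries permitted before this instant
}

// markFailed records a failure and pushes the retry deadline out.
func (g *retryGate) markFailed(now time.Time) { g.until = now.Add(g.delay) }

// mayRetry reports whether the deadline has passed.
func (g *retryGate) mayRetry(now time.Time) bool { return !now.Before(g.until) }

func main() {
	g := retryGate{delay: 500 * time.Millisecond}
	now := time.Now()
	g.markFailed(now)
	fmt.Println(g.mayRetry(now))                             // false: operation is parked
	fmt.Println(g.mayRetry(now.Add(501 * time.Millisecond))) // true: reconciler may requeue it
}
```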
event={"ID":"feed9982-9c7d-445a-a105-02b3169809d1","Type":"ContainerStarted","Data":"a50e45156c7191bbe36da73955a4152c24a69f7e08f36dfa05bc377976b3b5a7"} Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.500023 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:11 crc kubenswrapper[4991]: E0929 09:40:11.500194 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:12.00015779 +0000 UTC m=+147.856085818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.500493 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.502514 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fr8gc" event={"ID":"db7b6892-79f6-40a8-ac67-becd46acea4b","Type":"ContainerStarted","Data":"301b141ac4115930a6c4dfa9a4cc45bf31f0943e5233c88b82a23ba544f2e708"} Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.502576 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fr8gc" event={"ID":"db7b6892-79f6-40a8-ac67-becd46acea4b","Type":"ContainerStarted","Data":"64c203b23b83cf692c07e2191c094a280f9c456ebce86b4aca91f1cb668f248d"} Sep 29 09:40:11 crc kubenswrapper[4991]: E0929 09:40:11.504613 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:12.004596162 +0000 UTC m=+147.860524190 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.533246 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-r4x2n" event={"ID":"34f20ab2-1d2b-4a17-923b-ad9151c86dcf","Type":"ContainerStarted","Data":"b58c5b8acd0a5a1152598ae0303841a366472771fd6b62302796ea5df00139e0"} Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.537706 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v7dpv"] Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.562021 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mb78" podStartSLOduration=126.561996094 podStartE2EDuration="2m6.561996094s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:11.560539837 +0000 UTC m=+147.416467865" watchObservedRunningTime="2025-09-29 09:40:11.561996094 +0000 UTC m=+147.417924122" Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.562787 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m5krf" event={"ID":"6d5954e6-eb2d-45f0-a646-0a576432f542","Type":"ContainerStarted","Data":"7d822b6c7ce3f4b884ea4329e9dc1389fb05cd319a2f08e113302094375f55f7"} Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.563316 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-m5krf" Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.589872 4991 patch_prober.go:28] interesting pod/router-default-5444994796-xvz8h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 29 09:40:11 crc kubenswrapper[4991]: [-]has-synced failed: reason withheld Sep 29 09:40:11 crc kubenswrapper[4991]: [+]process-running ok Sep 29 09:40:11 crc kubenswrapper[4991]: healthz check failed Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.589983 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xvz8h" podUID="73a7f61a-69f4-4f78-8f1e-1351448b129a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.606118 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:11 crc kubenswrapper[4991]: E0929 09:40:11.606603 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-09-29 09:40:12.106577712 +0000 UTC m=+147.962505740 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.606762 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:11 crc kubenswrapper[4991]: E0929 09:40:11.621262 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:12.121232543 +0000 UTC m=+147.977160571 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.622247 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz" event={"ID":"5519a044-7a20-4a2a-ab24-7b09bfdf59bd","Type":"ContainerStarted","Data":"8f85b790dcf39eefb4516bebb8a8baa60dba324fc18fcb1a8dc673ea316c4c02"} Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.632641 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-2t8m7" podStartSLOduration=126.632619551 podStartE2EDuration="2m6.632619551s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:11.630047786 +0000 UTC m=+147.485975834" watchObservedRunningTime="2025-09-29 09:40:11.632619551 +0000 UTC m=+147.488547579" Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.650770 4991 generic.go:334] "Generic (PLEG): container finished" podID="31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe" containerID="541fc133e722905d31d40881bd9d5fe314e24d9a8668f0f707b3bcb79f5ef5cc" exitCode=0 Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.650941 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xs8f" event={"ID":"31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe","Type":"ContainerDied","Data":"541fc133e722905d31d40881bd9d5fe314e24d9a8668f0f707b3bcb79f5ef5cc"} Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.696883 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2tsv6" event={"ID":"990a42a8-12fc-4a01-b6ec-a1f6b324cf05","Type":"ContainerStarted","Data":"982d1f74c03ca9f10f272f69a48542e9ede30794ce8bae0ce475c97bdb3fe985"} Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.697770 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2tsv6" Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.708563 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:11 crc kubenswrapper[4991]: E0929 09:40:11.710330 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:12.210310966 +0000 UTC m=+148.066238984 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.714750 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q" event={"ID":"8975c94b-ba70-41a8-88de-e6d0f78b8a01","Type":"ContainerStarted","Data":"0857f6de21499d9d949d906a756ac3d71c2e53fa595b315bab344f13f9ceb20c"} Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.716014 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q" Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.721273 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-r4x2n" podStartSLOduration=126.721252003 podStartE2EDuration="2m6.721252003s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:11.720766801 +0000 UTC m=+147.576694829" watchObservedRunningTime="2025-09-29 09:40:11.721252003 +0000 UTC m=+147.577180031" Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.722906 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fr8gc" podStartSLOduration=126.722900395 podStartE2EDuration="2m6.722900395s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:11.685988881 +0000 UTC m=+147.541916909" watchObservedRunningTime="2025-09-29 09:40:11.722900395 +0000 UTC m=+147.578828423" Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.746843 4991 patch_prober.go:28] interesting 
pod/packageserver-d55dfcdfc-twv7q container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.746913 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q" podUID="8975c94b-ba70-41a8-88de-e6d0f78b8a01" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.767575 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hfcp2"] Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.768917 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ftbbj" event={"ID":"4462bce4-6cf5-4d65-890e-4d1c8b0523fa","Type":"ContainerStarted","Data":"cecc70f82fa67b2364960ced348f97f82b1d2c49928eb24da3c9dc4f209d36d4"} Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.788163 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" event={"ID":"7c72b3f6-dc79-4c2c-a0e8-44c41ff60370","Type":"ContainerStarted","Data":"1b6525d154dc7fa6a5cc1b3aa22dbd8bfd85c58255761a451912ff860c64b824"} Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.816913 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:11 crc kubenswrapper[4991]: E0929 09:40:11.817654 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:12.317641101 +0000 UTC m=+148.173569129 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.828256 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-62rs2" event={"ID":"011674e5-30e1-4632-aee7-d0dbb06e6824","Type":"ContainerStarted","Data":"caf3208614b11e32b9e2684fb840ea624c4bb80f584eb20f0a2cce8a90513f29"} Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.829452 4991 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hz2qk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.829499 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" podUID="c06a05d9-0a6c-4e0a-88e8-023e829f245a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.830146 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-m5krf" podStartSLOduration=8.830135788 podStartE2EDuration="8.830135788s" podCreationTimestamp="2025-09-29 09:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:11.828331292 +0000 UTC m=+147.684259320" watchObservedRunningTime="2025-09-29 09:40:11.830135788 +0000 UTC m=+147.686063816" Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.859942 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dt2jf"] Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.861809 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.877252 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vql8m" podStartSLOduration=126.877236049 podStartE2EDuration="2m6.877236049s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:11.875258199 +0000 UTC m=+147.731186227" watchObservedRunningTime="2025-09-29 09:40:11.877236049 +0000 UTC m=+147.733164077" Sep 29 09:40:11 crc kubenswrapper[4991]: I0929 09:40:11.917861 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:11 crc kubenswrapper[4991]: E0929 09:40:11.925555 4991 
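The probe failures interleaved here fall into two shapes: "connect: connection refused" (console-operator, olm-operator, packageserver, marketplace-operator), where the probe dialed before the container's listener was up, and the router's startup probe, which reaches the listener but gets an HTTP 500 whose body itemizes sub-checks ([-]backend-http, [-]has-synced, [+]process-running). Below is a small sketch of the HTTP probe semantics kubelet applies, where a dial error or a status outside 200-399 counts as failure; probeOnce is a hypothetical name, and the endpoint is the marketplace-operator healthz URL from the line above.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce approximates kubelet's HTTP GET probe: a dial error
// (e.g. "connect: connection refused" while the process is still
// starting) or a status outside 200-399 is reported as a failure.
func probeOnce(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeOnce("http://10.217.0.20:8080/healthz"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```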
Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.021261 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:12 crc kubenswrapper[4991]: E0929 09:40:12.021763 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:12.521741505 +0000 UTC m=+148.377669533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.019987 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz" podStartSLOduration=128.019929409 podStartE2EDuration="2m8.019929409s" podCreationTimestamp="2025-09-29 09:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:11.909330981 +0000 UTC m=+147.765259009" watchObservedRunningTime="2025-09-29 09:40:12.019929409 +0000 UTC m=+147.875857447"
Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.066087 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" podStartSLOduration=128.066067326 podStartE2EDuration="2m8.066067326s" podCreationTimestamp="2025-09-29 09:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:12.012892081 +0000 UTC m=+147.868820129" watchObservedRunningTime="2025-09-29 09:40:12.066067326 +0000 UTC m=+147.921995354"
Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.107823 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2tsv6" podStartSLOduration=127.099513812 podStartE2EDuration="2m7.099513812s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:12.081375983 +0000 UTC m=+147.937304011" watchObservedRunningTime="2025-09-29 09:40:12.099513812 +0000 UTC m=+147.955441840"
Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.120054 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t8fnv"]
Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.124800 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:12 crc kubenswrapper[4991]: E0929 09:40:12.125513 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:12.625484889 +0000 UTC m=+148.481412917 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:12 crc kubenswrapper[4991]: W0929 09:40:12.138569 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf37413ae_c89d_40f3_80a9_74ae2723f9bd.slice/crio-83043ce02e716e28c7591c040dd50e40df77c4699d218e08ee71d2b5b5557090 WatchSource:0}: Error finding container 83043ce02e716e28c7591c040dd50e40df77c4699d218e08ee71d2b5b5557090: Status 404 returned error can't find the container with id 83043ce02e716e28c7591c040dd50e40df77c4699d218e08ee71d2b5b5557090
Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.187649 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-62rs2" podStartSLOduration=127.187624791 podStartE2EDuration="2m7.187624791s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:12.186940184 +0000 UTC m=+148.042868232" watchObservedRunningTime="2025-09-29 09:40:12.187624791 +0000 UTC m=+148.043552819"
Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.227866 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:12 crc kubenswrapper[4991]: E0929 09:40:12.228406 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:12.728389683 +0000 UTC m=+148.584317711 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.290520 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wlq28"]
Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.291623 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlq28"
Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.297445 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.331662 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:12 crc kubenswrapper[4991]: E0929 09:40:12.332205 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:12.832183768 +0000 UTC m=+148.688111796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.349257 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q" podStartSLOduration=127.34923643 podStartE2EDuration="2m7.34923643s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:12.341093314 +0000 UTC m=+148.197021342" watchObservedRunningTime="2025-09-29 09:40:12.34923643 +0000 UTC m=+148.205164458"
Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.351865 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlq28"]
Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.406714 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vfsmj" podStartSLOduration=127.406693943 podStartE2EDuration="2m7.406693943s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:12.40380534 +0000 UTC m=+148.259733368" watchObservedRunningTime="2025-09-29 09:40:12.406693943 +0000 UTC m=+148.262621971"
Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.433285 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7vs2\" (UniqueName: \"kubernetes.io/projected/97f6a988-a989-4dce-ad77-0ef0a45cc2af-kube-api-access-l7vs2\") pod \"redhat-marketplace-wlq28\" (UID: \"97f6a988-a989-4dce-ad77-0ef0a45cc2af\") " pod="openshift-marketplace/redhat-marketplace-wlq28"
Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.433383 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97f6a988-a989-4dce-ad77-0ef0a45cc2af-catalog-content\") pod \"redhat-marketplace-wlq28\" (UID: \"97f6a988-a989-4dce-ad77-0ef0a45cc2af\") " pod="openshift-marketplace/redhat-marketplace-wlq28"
Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.433424 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.433478 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97f6a988-a989-4dce-ad77-0ef0a45cc2af-utilities\") pod \"redhat-marketplace-wlq28\" (UID: \"97f6a988-a989-4dce-ad77-0ef0a45cc2af\") " pod="openshift-marketplace/redhat-marketplace-wlq28"
Sep 29 09:40:12 crc
kubenswrapper[4991]: E0929 09:40:12.433873 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:12.933860131 +0000 UTC m=+148.789788159 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.540078 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.540372 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97f6a988-a989-4dce-ad77-0ef0a45cc2af-catalog-content\") pod \"redhat-marketplace-wlq28\" (UID: \"97f6a988-a989-4dce-ad77-0ef0a45cc2af\") " pod="openshift-marketplace/redhat-marketplace-wlq28" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.540433 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97f6a988-a989-4dce-ad77-0ef0a45cc2af-utilities\") pod \"redhat-marketplace-wlq28\" (UID: \"97f6a988-a989-4dce-ad77-0ef0a45cc2af\") " pod="openshift-marketplace/redhat-marketplace-wlq28" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.540484 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7vs2\" (UniqueName: \"kubernetes.io/projected/97f6a988-a989-4dce-ad77-0ef0a45cc2af-kube-api-access-l7vs2\") pod \"redhat-marketplace-wlq28\" (UID: \"97f6a988-a989-4dce-ad77-0ef0a45cc2af\") " pod="openshift-marketplace/redhat-marketplace-wlq28" Sep 29 09:40:12 crc kubenswrapper[4991]: E0929 09:40:12.540865 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:13.040850637 +0000 UTC m=+148.896778665 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.541253 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97f6a988-a989-4dce-ad77-0ef0a45cc2af-catalog-content\") pod \"redhat-marketplace-wlq28\" (UID: \"97f6a988-a989-4dce-ad77-0ef0a45cc2af\") " pod="openshift-marketplace/redhat-marketplace-wlq28" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.541508 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97f6a988-a989-4dce-ad77-0ef0a45cc2af-utilities\") pod \"redhat-marketplace-wlq28\" (UID: \"97f6a988-a989-4dce-ad77-0ef0a45cc2af\") " pod="openshift-marketplace/redhat-marketplace-wlq28" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.583139 4991 patch_prober.go:28] interesting pod/router-default-5444994796-xvz8h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 29 09:40:12 crc kubenswrapper[4991]: [-]has-synced failed: reason withheld Sep 29 09:40:12 crc kubenswrapper[4991]: [+]process-running ok Sep 29 09:40:12 crc kubenswrapper[4991]: healthz check failed Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.583519 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xvz8h" podUID="73a7f61a-69f4-4f78-8f1e-1351448b129a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.599830 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7vs2\" (UniqueName: \"kubernetes.io/projected/97f6a988-a989-4dce-ad77-0ef0a45cc2af-kube-api-access-l7vs2\") pod \"redhat-marketplace-wlq28\" (UID: \"97f6a988-a989-4dce-ad77-0ef0a45cc2af\") " pod="openshift-marketplace/redhat-marketplace-wlq28" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.634794 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ftbbj" podStartSLOduration=127.634771753 podStartE2EDuration="2m7.634771753s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:12.607505574 +0000 UTC m=+148.463433602" watchObservedRunningTime="2025-09-29 09:40:12.634771753 +0000 UTC m=+148.490699781" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.635034 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-wcmnr" podStartSLOduration=127.63502567 podStartE2EDuration="2m7.63502567s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:12.48323829 +0000 UTC m=+148.339166318" 
watchObservedRunningTime="2025-09-29 09:40:12.63502567 +0000 UTC m=+148.490953708" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.635574 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlq28" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.643044 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:12 crc kubenswrapper[4991]: E0929 09:40:12.643591 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:13.143574156 +0000 UTC m=+148.999502184 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.694543 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bg24w"] Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.695545 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bg24w" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.729833 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bg24w"] Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.755655 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:12 crc kubenswrapper[4991]: E0929 09:40:12.756167 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:13.256145984 +0000 UTC m=+149.112074012 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.815845 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.830967 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.832541 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.835288 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.835524 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.860455 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.862801 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5c835c8-7524-481c-a79c-46a8c860df8a-utilities\") pod \"redhat-marketplace-bg24w\" (UID: \"c5c835c8-7524-481c-a79c-46a8c860df8a\") " pod="openshift-marketplace/redhat-marketplace-bg24w" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.862850 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.862878 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rvh8\" (UniqueName: \"kubernetes.io/projected/c5c835c8-7524-481c-a79c-46a8c860df8a-kube-api-access-6rvh8\") pod \"redhat-marketplace-bg24w\" (UID: \"c5c835c8-7524-481c-a79c-46a8c860df8a\") " pod="openshift-marketplace/redhat-marketplace-bg24w" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.862906 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.863010 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.863053 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.863103 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5c835c8-7524-481c-a79c-46a8c860df8a-catalog-content\") pod \"redhat-marketplace-bg24w\" (UID: \"c5c835c8-7524-481c-a79c-46a8c860df8a\") " pod="openshift-marketplace/redhat-marketplace-bg24w" Sep 29 09:40:12 crc kubenswrapper[4991]: E0929 09:40:12.863508 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:13.36349198 +0000 UTC m=+149.219419998 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.868429 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.872525 4991 generic.go:334] "Generic (PLEG): container finished" podID="bab79a2c-0c2e-4112-b5b5-1a421b6a0703" containerID="1f75fefb829e51a370d3f184a7a79fab09b7b28f803199563cea6cdef223cd54" exitCode=0 Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.872917 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dt2jf" event={"ID":"bab79a2c-0c2e-4112-b5b5-1a421b6a0703","Type":"ContainerDied","Data":"1f75fefb829e51a370d3f184a7a79fab09b7b28f803199563cea6cdef223cd54"} Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.873017 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dt2jf" event={"ID":"bab79a2c-0c2e-4112-b5b5-1a421b6a0703","Type":"ContainerStarted","Data":"09612dbfeabde5cc4f9d61d9928e08d08a2ed9ce314686762deb484dbc4320b0"} Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.880049 4991 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.880631 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.881874 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.891208 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.960821 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.974404 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.974523 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 29 09:40:12 crc kubenswrapper[4991]: I0929 09:40:12.999111 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m5krf" event={"ID":"6d5954e6-eb2d-45f0-a646-0a576432f542","Type":"ContainerStarted","Data":"43b4b6e27a2d634051fd001e3c63cc02e4b73b70626c846455bc5dc4fbab48a2"} Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.002416 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.003114 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5c835c8-7524-481c-a79c-46a8c860df8a-catalog-content\") pod \"redhat-marketplace-bg24w\" (UID: \"c5c835c8-7524-481c-a79c-46a8c860df8a\") " pod="openshift-marketplace/redhat-marketplace-bg24w" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.003208 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5c835c8-7524-481c-a79c-46a8c860df8a-utilities\") pod \"redhat-marketplace-bg24w\" (UID: \"c5c835c8-7524-481c-a79c-46a8c860df8a\") " pod="openshift-marketplace/redhat-marketplace-bg24w" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.003272 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8523793-bd2d-46d0-9f39-98a5f1800cf4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a8523793-bd2d-46d0-9f39-98a5f1800cf4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.003321 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rvh8\" (UniqueName: \"kubernetes.io/projected/c5c835c8-7524-481c-a79c-46a8c860df8a-kube-api-access-6rvh8\") pod \"redhat-marketplace-bg24w\" (UID: \"c5c835c8-7524-481c-a79c-46a8c860df8a\") " pod="openshift-marketplace/redhat-marketplace-bg24w" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.003429 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8523793-bd2d-46d0-9f39-98a5f1800cf4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a8523793-bd2d-46d0-9f39-98a5f1800cf4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 29 09:40:13 crc kubenswrapper[4991]: E0929 09:40:13.012116 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:13.512084259 +0000 UTC m=+149.368012287 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.013284 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5c835c8-7524-481c-a79c-46a8c860df8a-catalog-content\") pod \"redhat-marketplace-bg24w\" (UID: \"c5c835c8-7524-481c-a79c-46a8c860df8a\") " pod="openshift-marketplace/redhat-marketplace-bg24w" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.025542 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5c835c8-7524-481c-a79c-46a8c860df8a-utilities\") pod \"redhat-marketplace-bg24w\" (UID: \"c5c835c8-7524-481c-a79c-46a8c860df8a\") " pod="openshift-marketplace/redhat-marketplace-bg24w" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.054586 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xs8f" event={"ID":"31f0c6f8-be28-4cd5-8f2a-e88dc6533bbe","Type":"ContainerStarted","Data":"a928446286ac1722c4c559570738c72ae56c1133d1ca87d9723247d40622dfec"} Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.055983 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xs8f" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.081681 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rvh8\" (UniqueName: \"kubernetes.io/projected/c5c835c8-7524-481c-a79c-46a8c860df8a-kube-api-access-6rvh8\") pod \"redhat-marketplace-bg24w\" (UID: \"c5c835c8-7524-481c-a79c-46a8c860df8a\") " pod="openshift-marketplace/redhat-marketplace-bg24w" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.090966 4991 generic.go:334] "Generic (PLEG): container finished" podID="26ce7dbc-9efd-42fc-9539-85b6a48aeec4" containerID="33943ac7278c697a5dce13255b55a1e8e1dfafd906f7f42c6f338d3d3d43994f" exitCode=0 Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.091061 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7dpv" event={"ID":"26ce7dbc-9efd-42fc-9539-85b6a48aeec4","Type":"ContainerDied","Data":"33943ac7278c697a5dce13255b55a1e8e1dfafd906f7f42c6f338d3d3d43994f"} Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.091101 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7dpv" event={"ID":"26ce7dbc-9efd-42fc-9539-85b6a48aeec4","Type":"ContainerStarted","Data":"c622db4ec46d903a958408d6ac6dbdf54f2e4b588bc96311955c1a4b3f452827"} Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.118404 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bg24w" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.123510 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xs8f" podStartSLOduration=128.123474867 podStartE2EDuration="2m8.123474867s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:13.117930856 +0000 UTC m=+148.973858884" watchObservedRunningTime="2025-09-29 09:40:13.123474867 +0000 UTC m=+148.979402895" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.125245 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8523793-bd2d-46d0-9f39-98a5f1800cf4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a8523793-bd2d-46d0-9f39-98a5f1800cf4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.125279 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.125318 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8523793-bd2d-46d0-9f39-98a5f1800cf4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a8523793-bd2d-46d0-9f39-98a5f1800cf4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.127147 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8523793-bd2d-46d0-9f39-98a5f1800cf4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a8523793-bd2d-46d0-9f39-98a5f1800cf4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 29 09:40:13 crc kubenswrapper[4991]: E0929 09:40:13.127343 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:13.627324624 +0000 UTC m=+149.483252652 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.173702 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8523793-bd2d-46d0-9f39-98a5f1800cf4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a8523793-bd2d-46d0-9f39-98a5f1800cf4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.176406 4991 generic.go:334] "Generic (PLEG): container finished" podID="78d83a54-a82d-4b17-9d4b-7bc0c2850a9d" containerID="e06d1899d8ad0e3e0fbe11658b5e2975ae4d71bf3383f9b24331cb5ce737bd43" exitCode=0 Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.176568 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hfcp2" event={"ID":"78d83a54-a82d-4b17-9d4b-7bc0c2850a9d","Type":"ContainerDied","Data":"e06d1899d8ad0e3e0fbe11658b5e2975ae4d71bf3383f9b24331cb5ce737bd43"} Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.176611 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hfcp2" event={"ID":"78d83a54-a82d-4b17-9d4b-7bc0c2850a9d","Type":"ContainerStarted","Data":"78d8a00a62bf4f53d0757b5ad183cdb73c262e4d91f4114539560da60d3eea5f"} Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.217719 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gm965" event={"ID":"877cd408-2088-468f-bff8-58c539e077a9","Type":"ContainerStarted","Data":"4d73c0d796b2753f4e524018da0e3cc8de51ef6122ee96cd0c3c637d2b7d57da"} Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.220496 4991 generic.go:334] "Generic (PLEG): container finished" podID="f37413ae-c89d-40f3-80a9-74ae2723f9bd" containerID="be8a1f5553c0a6444dbfe7ea76f3acf6da090514c7972306b0f19e86d1130cec" exitCode=0 Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.224516 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t8fnv" event={"ID":"f37413ae-c89d-40f3-80a9-74ae2723f9bd","Type":"ContainerDied","Data":"be8a1f5553c0a6444dbfe7ea76f3acf6da090514c7972306b0f19e86d1130cec"} Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.224586 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t8fnv" event={"ID":"f37413ae-c89d-40f3-80a9-74ae2723f9bd","Type":"ContainerStarted","Data":"83043ce02e716e28c7591c040dd50e40df77c4699d218e08ee71d2b5b5557090"} Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.227401 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:13 crc kubenswrapper[4991]: E0929 09:40:13.231108 4991 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:13.731082419 +0000 UTC m=+149.587010447 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.247798 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.271380 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5mb78" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.279175 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m77cd"] Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.280701 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m77cd" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.291584 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.306942 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m77cd"] Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.309312 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlq28"] Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.330207 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58705f67-478a-4a9e-a5e6-3fd3b008def7-catalog-content\") pod \"redhat-operators-m77cd\" (UID: \"58705f67-478a-4a9e-a5e6-3fd3b008def7\") " pod="openshift-marketplace/redhat-operators-m77cd" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.330277 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2xlj\" (UniqueName: \"kubernetes.io/projected/58705f67-478a-4a9e-a5e6-3fd3b008def7-kube-api-access-n2xlj\") pod \"redhat-operators-m77cd\" (UID: \"58705f67-478a-4a9e-a5e6-3fd3b008def7\") " pod="openshift-marketplace/redhat-operators-m77cd" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.330658 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58705f67-478a-4a9e-a5e6-3fd3b008def7-utilities\") pod \"redhat-operators-m77cd\" (UID: \"58705f67-478a-4a9e-a5e6-3fd3b008def7\") " pod="openshift-marketplace/redhat-operators-m77cd" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.331070 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.351061 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-twv7q" Sep 29 09:40:13 crc kubenswrapper[4991]: E0929 09:40:13.352743 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:13.852726596 +0000 UTC m=+149.708654624 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.431995 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.432320 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58705f67-478a-4a9e-a5e6-3fd3b008def7-catalog-content\") pod \"redhat-operators-m77cd\" (UID: \"58705f67-478a-4a9e-a5e6-3fd3b008def7\") " pod="openshift-marketplace/redhat-operators-m77cd" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.432348 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2xlj\" (UniqueName: \"kubernetes.io/projected/58705f67-478a-4a9e-a5e6-3fd3b008def7-kube-api-access-n2xlj\") pod \"redhat-operators-m77cd\" (UID: \"58705f67-478a-4a9e-a5e6-3fd3b008def7\") " pod="openshift-marketplace/redhat-operators-m77cd" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.432391 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58705f67-478a-4a9e-a5e6-3fd3b008def7-utilities\") pod \"redhat-operators-m77cd\" (UID: \"58705f67-478a-4a9e-a5e6-3fd3b008def7\") " pod="openshift-marketplace/redhat-operators-m77cd" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.432857 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58705f67-478a-4a9e-a5e6-3fd3b008def7-utilities\") pod \"redhat-operators-m77cd\" (UID: \"58705f67-478a-4a9e-a5e6-3fd3b008def7\") " pod="openshift-marketplace/redhat-operators-m77cd" Sep 29 09:40:13 crc kubenswrapper[4991]: E0929 09:40:13.432934 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-09-29 09:40:13.932913545 +0000 UTC m=+149.788841573 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.433160 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58705f67-478a-4a9e-a5e6-3fd3b008def7-catalog-content\") pod \"redhat-operators-m77cd\" (UID: \"58705f67-478a-4a9e-a5e6-3fd3b008def7\") " pod="openshift-marketplace/redhat-operators-m77cd" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.478927 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.492687 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2xlj\" (UniqueName: \"kubernetes.io/projected/58705f67-478a-4a9e-a5e6-3fd3b008def7-kube-api-access-n2xlj\") pod \"redhat-operators-m77cd\" (UID: \"58705f67-478a-4a9e-a5e6-3fd3b008def7\") " pod="openshift-marketplace/redhat-operators-m77cd" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.534738 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:13 crc kubenswrapper[4991]: E0929 09:40:13.535194 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:14.035178342 +0000 UTC m=+149.891106370 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.583330 4991 patch_prober.go:28] interesting pod/router-default-5444994796-xvz8h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 29 09:40:13 crc kubenswrapper[4991]: [-]has-synced failed: reason withheld Sep 29 09:40:13 crc kubenswrapper[4991]: [+]process-running ok Sep 29 09:40:13 crc kubenswrapper[4991]: healthz check failed Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.583746 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xvz8h" podUID="73a7f61a-69f4-4f78-8f1e-1351448b129a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.640718 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:13 crc kubenswrapper[4991]: E0929 09:40:13.641315 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:14.141294377 +0000 UTC m=+149.997222405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.647577 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m77cd" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.683658 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tsd8p"] Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.686111 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tsd8p" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.695373 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tsd8p"] Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.745725 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4211580a-2aa7-4e67-9302-b707342904f1-utilities\") pod \"redhat-operators-tsd8p\" (UID: \"4211580a-2aa7-4e67-9302-b707342904f1\") " pod="openshift-marketplace/redhat-operators-tsd8p" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.745778 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4211580a-2aa7-4e67-9302-b707342904f1-catalog-content\") pod \"redhat-operators-tsd8p\" (UID: \"4211580a-2aa7-4e67-9302-b707342904f1\") " pod="openshift-marketplace/redhat-operators-tsd8p" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.745809 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.745879 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rptnz\" (UniqueName: \"kubernetes.io/projected/4211580a-2aa7-4e67-9302-b707342904f1-kube-api-access-rptnz\") pod \"redhat-operators-tsd8p\" (UID: \"4211580a-2aa7-4e67-9302-b707342904f1\") " pod="openshift-marketplace/redhat-operators-tsd8p" Sep 29 09:40:13 crc kubenswrapper[4991]: E0929 09:40:13.746324 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:14.246305272 +0000 UTC m=+150.102233300 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.852473 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.853156 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4211580a-2aa7-4e67-9302-b707342904f1-utilities\") pod \"redhat-operators-tsd8p\" (UID: \"4211580a-2aa7-4e67-9302-b707342904f1\") " pod="openshift-marketplace/redhat-operators-tsd8p" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.853193 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4211580a-2aa7-4e67-9302-b707342904f1-catalog-content\") pod \"redhat-operators-tsd8p\" (UID: \"4211580a-2aa7-4e67-9302-b707342904f1\") " pod="openshift-marketplace/redhat-operators-tsd8p" Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.853269 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rptnz\" (UniqueName: \"kubernetes.io/projected/4211580a-2aa7-4e67-9302-b707342904f1-kube-api-access-rptnz\") pod \"redhat-operators-tsd8p\" (UID: \"4211580a-2aa7-4e67-9302-b707342904f1\") " pod="openshift-marketplace/redhat-operators-tsd8p" Sep 29 09:40:13 crc kubenswrapper[4991]: E0929 09:40:13.853829 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:14.353806612 +0000 UTC m=+150.209734640 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.854337 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4211580a-2aa7-4e67-9302-b707342904f1-utilities\") pod \"redhat-operators-tsd8p\" (UID: \"4211580a-2aa7-4e67-9302-b707342904f1\") " pod="openshift-marketplace/redhat-operators-tsd8p"
Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.854559 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4211580a-2aa7-4e67-9302-b707342904f1-catalog-content\") pod \"redhat-operators-tsd8p\" (UID: \"4211580a-2aa7-4e67-9302-b707342904f1\") " pod="openshift-marketplace/redhat-operators-tsd8p"
Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.895522 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rptnz\" (UniqueName: \"kubernetes.io/projected/4211580a-2aa7-4e67-9302-b707342904f1-kube-api-access-rptnz\") pod \"redhat-operators-tsd8p\" (UID: \"4211580a-2aa7-4e67-9302-b707342904f1\") " pod="openshift-marketplace/redhat-operators-tsd8p"
Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.956460 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:13 crc kubenswrapper[4991]: E0929 09:40:13.956871 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:14.456857239 +0000 UTC m=+150.312785267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:13 crc kubenswrapper[4991]: I0929 09:40:13.976874 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tsd8p"
Sep 29 09:40:14 crc kubenswrapper[4991]: I0929 09:40:14.066070 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:14 crc kubenswrapper[4991]: E0929 09:40:14.067305 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:14.567287432 +0000 UTC m=+150.423215461 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:14 crc kubenswrapper[4991]: I0929 09:40:14.095031 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-4v4rm"
Sep 29 09:40:14 crc kubenswrapper[4991]: I0929 09:40:14.186200 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:14 crc kubenswrapper[4991]: E0929 09:40:14.186721 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:14.686702863 +0000 UTC m=+150.542630891 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:14 crc kubenswrapper[4991]: I0929 09:40:14.234490 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bg24w"]
Sep 29 09:40:14 crc kubenswrapper[4991]: W0929 09:40:14.279692 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5c835c8_7524_481c_a79c_46a8c860df8a.slice/crio-8bb2b66fca5f0829973ee5506a5d102df5e5106c1a31daeb17ccfa5e02c92313 WatchSource:0}: Error finding container 8bb2b66fca5f0829973ee5506a5d102df5e5106c1a31daeb17ccfa5e02c92313: Status 404 returned error can't find the container with id 8bb2b66fca5f0829973ee5506a5d102df5e5106c1a31daeb17ccfa5e02c92313
Sep 29 09:40:14 crc kubenswrapper[4991]: I0929 09:40:14.298380 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:14 crc kubenswrapper[4991]: E0929 09:40:14.299065 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:14.799042765 +0000 UTC m=+150.654970793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:14 crc kubenswrapper[4991]: I0929 09:40:14.370650 4991 generic.go:334] "Generic (PLEG): container finished" podID="97f6a988-a989-4dce-ad77-0ef0a45cc2af" containerID="81435a2b96ee1a417535a4a9221e43cb186b75a9903476bee7d48d60467ad2cd" exitCode=0
Sep 29 09:40:14 crc kubenswrapper[4991]: I0929 09:40:14.372046 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlq28" event={"ID":"97f6a988-a989-4dce-ad77-0ef0a45cc2af","Type":"ContainerDied","Data":"81435a2b96ee1a417535a4a9221e43cb186b75a9903476bee7d48d60467ad2cd"}
Sep 29 09:40:14 crc kubenswrapper[4991]: I0929 09:40:14.372095 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlq28" event={"ID":"97f6a988-a989-4dce-ad77-0ef0a45cc2af","Type":"ContainerStarted","Data":"228d977c929f3e557ef715cea139a385954e51f890bae0a8047188d81c507484"}
Sep 29 09:40:14 crc kubenswrapper[4991]: W0929 09:40:14.372765 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-b1c2ebd013493c1d3cd8c13302899f1149caf6bef09a450216188f711d63c523 WatchSource:0}: Error finding container b1c2ebd013493c1d3cd8c13302899f1149caf6bef09a450216188f711d63c523: Status 404 returned error can't find the container with id b1c2ebd013493c1d3cd8c13302899f1149caf6bef09a450216188f711d63c523
Sep 29 09:40:14 crc kubenswrapper[4991]: I0929 09:40:14.400366 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:14 crc kubenswrapper[4991]: E0929 09:40:14.400737 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:14.900724038 +0000 UTC m=+150.756652066 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:14 crc kubenswrapper[4991]: W0929 09:40:14.483195 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-e0b5f82b427bff3f836a5933dc6bc7abf2536abf82d28ac4da965d726b531c1f WatchSource:0}: Error finding container e0b5f82b427bff3f836a5933dc6bc7abf2536abf82d28ac4da965d726b531c1f: Status 404 returned error can't find the container with id e0b5f82b427bff3f836a5933dc6bc7abf2536abf82d28ac4da965d726b531c1f
Sep 29 09:40:14 crc kubenswrapper[4991]: I0929 09:40:14.502275 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:14 crc kubenswrapper[4991]: E0929 09:40:14.502411 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:15.00239178 +0000 UTC m=+150.858319808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:14 crc kubenswrapper[4991]: I0929 09:40:14.502531 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:14 crc kubenswrapper[4991]: E0929 09:40:14.504015 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:15.004006561 +0000 UTC m=+150.859934589 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:14 crc kubenswrapper[4991]: I0929 09:40:14.537750 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Sep 29 09:40:14 crc kubenswrapper[4991]: I0929 09:40:14.555966 4991 patch_prober.go:28] interesting pod/router-default-5444994796-xvz8h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Sep 29 09:40:14 crc kubenswrapper[4991]: [-]has-synced failed: reason withheld
Sep 29 09:40:14 crc kubenswrapper[4991]: [+]process-running ok
Sep 29 09:40:14 crc kubenswrapper[4991]: healthz check failed
Sep 29 09:40:14 crc kubenswrapper[4991]: I0929 09:40:14.556027 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xvz8h" podUID="73a7f61a-69f4-4f78-8f1e-1351448b129a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Sep 29 09:40:14 crc kubenswrapper[4991]: I0929 09:40:14.573080 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m77cd"]
Sep 29 09:40:14 crc kubenswrapper[4991]: I0929 09:40:14.603848 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:14 crc kubenswrapper[4991]: E0929 09:40:14.604912 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:15.104884433 +0000 UTC m=+150.960812461 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:14 crc kubenswrapper[4991]: I0929 09:40:14.708380 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:14 crc kubenswrapper[4991]: E0929 09:40:14.708843 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:15.208826402 +0000 UTC m=+151.064754430 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:14 crc kubenswrapper[4991]: I0929 09:40:14.789052 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tsd8p"]
Sep 29 09:40:14 crc kubenswrapper[4991]: I0929 09:40:14.819459 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:14 crc kubenswrapper[4991]: E0929 09:40:14.819829 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:15.31981241 +0000 UTC m=+151.175740438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:14 crc kubenswrapper[4991]: I0929 09:40:14.922263 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:14 crc kubenswrapper[4991]: E0929 09:40:14.923033 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:15.422916148 +0000 UTC m=+151.278844166 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.030644 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:15 crc kubenswrapper[4991]: E0929 09:40:15.031212 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:15.531189988 +0000 UTC m=+151.387118016 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.132501 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:15 crc kubenswrapper[4991]: E0929 09:40:15.133182 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:15.633165717 +0000 UTC m=+151.489093735 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.233632 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:15 crc kubenswrapper[4991]: E0929 09:40:15.233929 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:15.733893386 +0000 UTC m=+151.589821414 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.234304 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:15 crc kubenswrapper[4991]: E0929 09:40:15.234767 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:15.734749267 +0000 UTC m=+151.590677295 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.339635 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:15 crc kubenswrapper[4991]: E0929 09:40:15.340025 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:15.8399938 +0000 UTC m=+151.695921818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.340475 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:15 crc kubenswrapper[4991]: E0929 09:40:15.340894 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:15.840877792 +0000 UTC m=+151.696805820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.382155 4991 generic.go:334] "Generic (PLEG): container finished" podID="c5c835c8-7524-481c-a79c-46a8c860df8a" containerID="a928eb012227e03c8febc4a3ff4dd7fb6f5e30e4c7d5b430dcd56e7cd478752b" exitCode=0
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.382337 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bg24w" event={"ID":"c5c835c8-7524-481c-a79c-46a8c860df8a","Type":"ContainerDied","Data":"a928eb012227e03c8febc4a3ff4dd7fb6f5e30e4c7d5b430dcd56e7cd478752b"}
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.382373 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bg24w" event={"ID":"c5c835c8-7524-481c-a79c-46a8c860df8a","Type":"ContainerStarted","Data":"8bb2b66fca5f0829973ee5506a5d102df5e5106c1a31daeb17ccfa5e02c92313"}
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.406409 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"aac9102e00d4882860fb622930d730937bfebb540576040853447e9578ffa743"}
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.406493 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"77bfd041d3b756e69fa06869d95ac84d821b9ac00e7cd90d3322c1ad559e0166"}
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.409317 4991 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.415335 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gm965" event={"ID":"877cd408-2088-468f-bff8-58c539e077a9","Type":"ContainerStarted","Data":"9a180305024aeb4acd97ce952d233a0e5e625fc895f5e8d348ddc06e2349eea2"}
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.420525 4991 generic.go:334] "Generic (PLEG): container finished" podID="58705f67-478a-4a9e-a5e6-3fd3b008def7" containerID="4d288e72838836d83fdbb2b70f0d07e1eacaf064623c4053ced4549873802353" exitCode=0
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.420821 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m77cd" event={"ID":"58705f67-478a-4a9e-a5e6-3fd3b008def7","Type":"ContainerDied","Data":"4d288e72838836d83fdbb2b70f0d07e1eacaf064623c4053ced4549873802353"}
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.420984 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m77cd" event={"ID":"58705f67-478a-4a9e-a5e6-3fd3b008def7","Type":"ContainerStarted","Data":"f590d1c843dcaac7e3047dd98d16be82cb72b3e040b1f699887849e1b953ce9f"}
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.432584 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1f7565c251627ffceefc27cd43e2344466ca5190323411b8cb5deacde4891b2a"}
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.432634 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e0b5f82b427bff3f836a5933dc6bc7abf2536abf82d28ac4da965d726b531c1f"}
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.436912 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a8523793-bd2d-46d0-9f39-98a5f1800cf4","Type":"ContainerStarted","Data":"67a7b67a576cf1c9606bdaf382dfab6cd53fa1df802a246fa6e85babfb07f212"}
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.446756 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:15 crc kubenswrapper[4991]: E0929 09:40:15.447346 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:15.947324115 +0000 UTC m=+151.803252143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.487315 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"69e8d1b2fe6143bc1a63d07b9d5e812e345aaca3c58f81b1bb0d1220498a6df9"}
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.487392 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b1c2ebd013493c1d3cd8c13302899f1149caf6bef09a450216188f711d63c523"}
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.487598 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.505567 4991 generic.go:334] "Generic (PLEG): container finished" podID="4211580a-2aa7-4e67-9302-b707342904f1" containerID="da7221b993a6b0d15a3e375a4050a70517283944b51135be2b48904ad2bf9347" exitCode=0
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.505910 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsd8p" event={"ID":"4211580a-2aa7-4e67-9302-b707342904f1","Type":"ContainerDied","Data":"da7221b993a6b0d15a3e375a4050a70517283944b51135be2b48904ad2bf9347"}
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.505985 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsd8p" event={"ID":"4211580a-2aa7-4e67-9302-b707342904f1","Type":"ContainerStarted","Data":"4f10f3ae1913223d47299c2b85275a4ddbe0c13ff28441dff0324f1d6ecd2f2b"}
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.520817 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xs8f"
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.549677 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:15 crc kubenswrapper[4991]: E0929 09:40:15.551141 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:16.051110651 +0000 UTC m=+151.907038739 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.557744 4991 patch_prober.go:28] interesting pod/router-default-5444994796-xvz8h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Sep 29 09:40:15 crc kubenswrapper[4991]: [-]has-synced failed: reason withheld
Sep 29 09:40:15 crc kubenswrapper[4991]: [+]process-running ok
Sep 29 09:40:15 crc kubenswrapper[4991]: healthz check failed
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.557801 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xvz8h" podUID="73a7f61a-69f4-4f78-8f1e-1351448b129a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.623897 4991 patch_prober.go:28] interesting pod/downloads-7954f5f757-jqqrh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.623996 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jqqrh" podUID="d949b064-ca4d-454e-a0d9-2aa9dd40d4e1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.624277 4991 patch_prober.go:28] interesting pod/downloads-7954f5f757-jqqrh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.624368 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jqqrh" podUID="d949b064-ca4d-454e-a0d9-2aa9dd40d4e1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.663374 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-tqxb6"
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.663553 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-tqxb6"
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.664149 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:15 crc kubenswrapper[4991]: E0929 09:40:15.664382 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:16.164365326 +0000 UTC m=+152.020293354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.664505 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:15 crc kubenswrapper[4991]: E0929 09:40:15.668886 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:16.168859409 +0000 UTC m=+152.024787617 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.673839 4991 patch_prober.go:28] interesting pod/apiserver-76f77b778f-tqxb6 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Sep 29 09:40:15 crc kubenswrapper[4991]: [+]log ok
Sep 29 09:40:15 crc kubenswrapper[4991]: [+]etcd ok
Sep 29 09:40:15 crc kubenswrapper[4991]: [+]poststarthook/start-apiserver-admission-initializer ok
Sep 29 09:40:15 crc kubenswrapper[4991]: [+]poststarthook/generic-apiserver-start-informers ok
Sep 29 09:40:15 crc kubenswrapper[4991]: [+]poststarthook/max-in-flight-filter ok
Sep 29 09:40:15 crc kubenswrapper[4991]: [+]poststarthook/storage-object-count-tracker-hook ok
Sep 29 09:40:15 crc kubenswrapper[4991]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Sep 29 09:40:15 crc kubenswrapper[4991]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Sep 29 09:40:15 crc kubenswrapper[4991]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Sep 29 09:40:15 crc kubenswrapper[4991]: [+]poststarthook/project.openshift.io-projectcache ok
Sep 29 09:40:15 crc kubenswrapper[4991]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Sep 29 09:40:15 crc kubenswrapper[4991]: [+]poststarthook/openshift.io-startinformers ok
Sep 29 09:40:15 crc kubenswrapper[4991]: [+]poststarthook/openshift.io-restmapperupdater ok
Sep 29 09:40:15 crc kubenswrapper[4991]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Sep 29 09:40:15 crc kubenswrapper[4991]: livez check failed
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.673914 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" podUID="7c72b3f6-dc79-4c2c-a0e8-44c41ff60370" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.766933 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:15 crc kubenswrapper[4991]: E0929 09:40:15.767881 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:16.267185597 +0000 UTC m=+152.123113625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.770136 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:15 crc kubenswrapper[4991]: E0929 09:40:15.771285 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:16.271228969 +0000 UTC m=+152.127157177 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.872873 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:15 crc kubenswrapper[4991]: E0929 09:40:15.873419 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:16.373394124 +0000 UTC m=+152.229322152 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:15 crc kubenswrapper[4991]: I0929 09:40:15.975580 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:15 crc kubenswrapper[4991]: E0929 09:40:15.976556 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:16.476535673 +0000 UTC m=+152.332463701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.077427 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:16 crc kubenswrapper[4991]: E0929 09:40:16.077825 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-29 09:40:16.577803115 +0000 UTC m=+152.433731143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.077938 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:16 crc kubenswrapper[4991]: E0929 09:40:16.078262 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-29 09:40:16.578253946 +0000 UTC m=+152.434181974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6g68t" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.123167 4991 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-09-29T09:40:15.409350974Z","Handler":null,"Name":""}
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.139682 4991 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.139749 4991 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.179048 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.188257 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.280598 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.290931 4991 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.291036 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.376872 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6g68t\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.551494 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xvz8h"
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.556451 4991 patch_prober.go:28] interesting pod/router-default-5444994796-xvz8h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Sep 29 09:40:16 crc kubenswrapper[4991]: [-]has-synced failed: reason withheld
Sep 29 09:40:16 crc kubenswrapper[4991]: [+]process-running ok
Sep 29 09:40:16 crc kubenswrapper[4991]: healthz check failed
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.556499 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xvz8h" podUID="73a7f61a-69f4-4f78-8f1e-1351448b129a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.563478 4991 generic.go:334] "Generic (PLEG): container finished" podID="5519a044-7a20-4a2a-ab24-7b09bfdf59bd" containerID="8f85b790dcf39eefb4516bebb8a8baa60dba324fc18fcb1a8dc673ea316c4c02" exitCode=0
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.563532 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz" event={"ID":"5519a044-7a20-4a2a-ab24-7b09bfdf59bd","Type":"ContainerDied","Data":"8f85b790dcf39eefb4516bebb8a8baa60dba324fc18fcb1a8dc673ea316c4c02"}
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.567004 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gm965" event={"ID":"877cd408-2088-468f-bff8-58c539e077a9","Type":"ContainerStarted","Data":"f8c595967b364e33c73f4cba9486900259932dd657d88eece366592d8dfb4057"}
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.577600 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-62rs2"
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.577661 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-62rs2"
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.581027 4991 patch_prober.go:28] interesting pod/console-f9d7485db-62rs2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.38:8443/health\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body=
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.581143 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-62rs2" podUID="011674e5-30e1-4632-aee7-d0dbb06e6824" containerName="console" probeResult="failure" output="Get \"https://10.217.0.38:8443/health\": dial tcp 10.217.0.38:8443: connect: connection refused"
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.583020 4991 generic.go:334] "Generic (PLEG): container finished" podID="a8523793-bd2d-46d0-9f39-98a5f1800cf4" containerID="eef59a8307d825abbf6564504e001c512e025a81eda3129ead76ac2c99f32862" exitCode=0
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.583288 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a8523793-bd2d-46d0-9f39-98a5f1800cf4","Type":"ContainerDied","Data":"eef59a8307d825abbf6564504e001c512e025a81eda3129ead76ac2c99f32862"}
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.645412 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:16 crc kubenswrapper[4991]: I0929 09:40:16.948550 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Sep 29 09:40:17 crc kubenswrapper[4991]: I0929 09:40:17.317651 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6g68t"]
Sep 29 09:40:17 crc kubenswrapper[4991]: W0929 09:40:17.454468 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod980e8489_6002_47dd_b6c4_1be37ee0bad9.slice/crio-1f4403c4f96055b203c15740afa0202c71581ffd2c8e7c42f68116678cb9e810 WatchSource:0}: Error finding container 1f4403c4f96055b203c15740afa0202c71581ffd2c8e7c42f68116678cb9e810: Status 404 returned error can't find the container with id 1f4403c4f96055b203c15740afa0202c71581ffd2c8e7c42f68116678cb9e810
Sep 29 09:40:17 crc kubenswrapper[4991]: I0929 09:40:17.555469 4991 patch_prober.go:28] interesting pod/router-default-5444994796-xvz8h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Sep 29 09:40:17 crc kubenswrapper[4991]: [-]has-synced failed: reason withheld
Sep 29 09:40:17 crc kubenswrapper[4991]: [+]process-running ok
Sep 29 09:40:17 crc kubenswrapper[4991]: healthz check failed
Sep 29 09:40:17 crc kubenswrapper[4991]: I0929 09:40:17.555542 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xvz8h" podUID="73a7f61a-69f4-4f78-8f1e-1351448b129a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Sep 29 09:40:17 crc kubenswrapper[4991]: I0929 09:40:17.625396 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gm965" event={"ID":"877cd408-2088-468f-bff8-58c539e077a9","Type":"ContainerStarted","Data":"7ec1118d2c02021d1cb9731c80d52cca6c78f29f6651c037b58dcf817f93ab64"}
Sep 29 09:40:17 crc kubenswrapper[4991]: I0929 09:40:17.642365 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" event={"ID":"980e8489-6002-47dd-b6c4-1be37ee0bad9","Type":"ContainerStarted","Data":"1f4403c4f96055b203c15740afa0202c71581ffd2c8e7c42f68116678cb9e810"}
Sep 29 09:40:17 crc kubenswrapper[4991]: I0929 09:40:17.656482 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-gm965" podStartSLOduration=14.656457321 podStartE2EDuration="14.656457321s" podCreationTimestamp="2025-09-29 09:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:17.654787479 +0000 UTC m=+153.510715507" watchObservedRunningTime="2025-09-29 09:40:17.656457321 +0000 UTC m=+153.512385349"
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.035873 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.132546 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8523793-bd2d-46d0-9f39-98a5f1800cf4-kubelet-dir\") pod \"a8523793-bd2d-46d0-9f39-98a5f1800cf4\" (UID: \"a8523793-bd2d-46d0-9f39-98a5f1800cf4\") "
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.132703 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8523793-bd2d-46d0-9f39-98a5f1800cf4-kube-api-access\") pod \"a8523793-bd2d-46d0-9f39-98a5f1800cf4\" (UID: \"a8523793-bd2d-46d0-9f39-98a5f1800cf4\") "
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.132888 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8523793-bd2d-46d0-9f39-98a5f1800cf4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a8523793-bd2d-46d0-9f39-98a5f1800cf4" (UID: "a8523793-bd2d-46d0-9f39-98a5f1800cf4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.133031 4991 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8523793-bd2d-46d0-9f39-98a5f1800cf4-kubelet-dir\") on node \"crc\" DevicePath \"\""
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.158442 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8523793-bd2d-46d0-9f39-98a5f1800cf4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a8523793-bd2d-46d0-9f39-98a5f1800cf4" (UID: "a8523793-bd2d-46d0-9f39-98a5f1800cf4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.234469 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8523793-bd2d-46d0-9f39-98a5f1800cf4-kube-api-access\") on node \"crc\" DevicePath \"\""
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.253165 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz"
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.336293 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5519a044-7a20-4a2a-ab24-7b09bfdf59bd-secret-volume\") pod \"5519a044-7a20-4a2a-ab24-7b09bfdf59bd\" (UID: \"5519a044-7a20-4a2a-ab24-7b09bfdf59bd\") "
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.336974 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5519a044-7a20-4a2a-ab24-7b09bfdf59bd-config-volume\") pod \"5519a044-7a20-4a2a-ab24-7b09bfdf59bd\" (UID: \"5519a044-7a20-4a2a-ab24-7b09bfdf59bd\") "
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.337079 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs2hq\" (UniqueName: \"kubernetes.io/projected/5519a044-7a20-4a2a-ab24-7b09bfdf59bd-kube-api-access-rs2hq\") pod \"5519a044-7a20-4a2a-ab24-7b09bfdf59bd\" (UID: \"5519a044-7a20-4a2a-ab24-7b09bfdf59bd\") "
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.340277 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5519a044-7a20-4a2a-ab24-7b09bfdf59bd-config-volume" (OuterVolumeSpecName: "config-volume") pod "5519a044-7a20-4a2a-ab24-7b09bfdf59bd" (UID: "5519a044-7a20-4a2a-ab24-7b09bfdf59bd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.347789 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5519a044-7a20-4a2a-ab24-7b09bfdf59bd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5519a044-7a20-4a2a-ab24-7b09bfdf59bd" (UID: "5519a044-7a20-4a2a-ab24-7b09bfdf59bd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.352159 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5519a044-7a20-4a2a-ab24-7b09bfdf59bd-kube-api-access-rs2hq" (OuterVolumeSpecName: "kube-api-access-rs2hq") pod "5519a044-7a20-4a2a-ab24-7b09bfdf59bd" (UID: "5519a044-7a20-4a2a-ab24-7b09bfdf59bd"). InnerVolumeSpecName "kube-api-access-rs2hq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.439826 4991 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5519a044-7a20-4a2a-ab24-7b09bfdf59bd-config-volume\") on node \"crc\" DevicePath \"\""
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.439875 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs2hq\" (UniqueName: \"kubernetes.io/projected/5519a044-7a20-4a2a-ab24-7b09bfdf59bd-kube-api-access-rs2hq\") on node \"crc\" DevicePath \"\""
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.439888 4991 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5519a044-7a20-4a2a-ab24-7b09bfdf59bd-secret-volume\") on node \"crc\" DevicePath \"\""
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.555487 4991 patch_prober.go:28] interesting pod/router-default-5444994796-xvz8h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Sep 29 09:40:18 crc kubenswrapper[4991]: [-]has-synced failed: reason withheld
Sep 29 09:40:18 crc kubenswrapper[4991]: [+]process-running ok
Sep 29 09:40:18 crc kubenswrapper[4991]: healthz check failed
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.555564 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xvz8h" podUID="73a7f61a-69f4-4f78-8f1e-1351448b129a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.662852 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" event={"ID":"980e8489-6002-47dd-b6c4-1be37ee0bad9","Type":"ContainerStarted","Data":"5c37ce28d04eb77e38300c56c4711d9d49be0666111cb426005cd0bbd6a19549"}
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.663048 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-6g68t"
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.667812 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a8523793-bd2d-46d0-9f39-98a5f1800cf4","Type":"ContainerDied","Data":"67a7b67a576cf1c9606bdaf382dfab6cd53fa1df802a246fa6e85babfb07f212"}
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.667863 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67a7b67a576cf1c9606bdaf382dfab6cd53fa1df802a246fa6e85babfb07f212"
Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.667990 4991 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.678500 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz" event={"ID":"5519a044-7a20-4a2a-ab24-7b09bfdf59bd","Type":"ContainerDied","Data":"f15e7652e1760a553a30e6ec4bebd12fb707aff012d5e864e17af8beecb4f8ce"} Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.678590 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f15e7652e1760a553a30e6ec4bebd12fb707aff012d5e864e17af8beecb4f8ce" Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.679159 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz" Sep 29 09:40:18 crc kubenswrapper[4991]: I0929 09:40:18.689962 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" podStartSLOduration=133.689915416 podStartE2EDuration="2m13.689915416s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:18.682651182 +0000 UTC m=+154.538579220" watchObservedRunningTime="2025-09-29 09:40:18.689915416 +0000 UTC m=+154.545843444" Sep 29 09:40:19 crc kubenswrapper[4991]: I0929 09:40:19.561394 4991 patch_prober.go:28] interesting pod/router-default-5444994796-xvz8h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 29 09:40:19 crc kubenswrapper[4991]: [-]has-synced failed: reason withheld Sep 29 09:40:19 crc kubenswrapper[4991]: [+]process-running ok Sep 29 09:40:19 crc kubenswrapper[4991]: healthz check failed Sep 29 09:40:19 crc kubenswrapper[4991]: I0929 09:40:19.561861 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xvz8h" podUID="73a7f61a-69f4-4f78-8f1e-1351448b129a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 29 09:40:19 crc kubenswrapper[4991]: I0929 09:40:19.742286 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Sep 29 09:40:19 crc kubenswrapper[4991]: E0929 09:40:19.742830 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8523793-bd2d-46d0-9f39-98a5f1800cf4" containerName="pruner" Sep 29 09:40:19 crc kubenswrapper[4991]: I0929 09:40:19.742850 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8523793-bd2d-46d0-9f39-98a5f1800cf4" containerName="pruner" Sep 29 09:40:19 crc kubenswrapper[4991]: E0929 09:40:19.742870 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5519a044-7a20-4a2a-ab24-7b09bfdf59bd" containerName="collect-profiles" Sep 29 09:40:19 crc kubenswrapper[4991]: I0929 09:40:19.742878 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="5519a044-7a20-4a2a-ab24-7b09bfdf59bd" containerName="collect-profiles" Sep 29 09:40:19 crc kubenswrapper[4991]: I0929 09:40:19.743030 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="5519a044-7a20-4a2a-ab24-7b09bfdf59bd" containerName="collect-profiles" Sep 29 09:40:19 crc kubenswrapper[4991]: I0929 09:40:19.743046 4991 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="a8523793-bd2d-46d0-9f39-98a5f1800cf4" containerName="pruner" Sep 29 09:40:19 crc kubenswrapper[4991]: I0929 09:40:19.743671 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 29 09:40:19 crc kubenswrapper[4991]: I0929 09:40:19.754551 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Sep 29 09:40:19 crc kubenswrapper[4991]: I0929 09:40:19.754810 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Sep 29 09:40:19 crc kubenswrapper[4991]: I0929 09:40:19.759170 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Sep 29 09:40:19 crc kubenswrapper[4991]: I0929 09:40:19.869608 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71db9e99-7aa4-4894-b18f-0afd91c282b3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"71db9e99-7aa4-4894-b18f-0afd91c282b3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 29 09:40:19 crc kubenswrapper[4991]: I0929 09:40:19.869677 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71db9e99-7aa4-4894-b18f-0afd91c282b3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"71db9e99-7aa4-4894-b18f-0afd91c282b3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 29 09:40:19 crc kubenswrapper[4991]: I0929 09:40:19.971452 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71db9e99-7aa4-4894-b18f-0afd91c282b3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"71db9e99-7aa4-4894-b18f-0afd91c282b3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 29 09:40:19 crc kubenswrapper[4991]: I0929 09:40:19.971565 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71db9e99-7aa4-4894-b18f-0afd91c282b3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"71db9e99-7aa4-4894-b18f-0afd91c282b3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 29 09:40:19 crc kubenswrapper[4991]: I0929 09:40:19.972192 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71db9e99-7aa4-4894-b18f-0afd91c282b3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"71db9e99-7aa4-4894-b18f-0afd91c282b3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 29 09:40:19 crc kubenswrapper[4991]: I0929 09:40:19.992329 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71db9e99-7aa4-4894-b18f-0afd91c282b3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"71db9e99-7aa4-4894-b18f-0afd91c282b3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 29 09:40:20 crc kubenswrapper[4991]: I0929 09:40:20.067701 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 29 09:40:20 crc kubenswrapper[4991]: I0929 09:40:20.499974 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Sep 29 09:40:20 crc kubenswrapper[4991]: I0929 09:40:20.557448 4991 patch_prober.go:28] interesting pod/router-default-5444994796-xvz8h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 29 09:40:20 crc kubenswrapper[4991]: [-]has-synced failed: reason withheld Sep 29 09:40:20 crc kubenswrapper[4991]: [+]process-running ok Sep 29 09:40:20 crc kubenswrapper[4991]: healthz check failed Sep 29 09:40:20 crc kubenswrapper[4991]: I0929 09:40:20.557521 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xvz8h" podUID="73a7f61a-69f4-4f78-8f1e-1351448b129a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 29 09:40:20 crc kubenswrapper[4991]: I0929 09:40:20.661606 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:20 crc kubenswrapper[4991]: I0929 09:40:20.665675 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-tqxb6" Sep 29 09:40:20 crc kubenswrapper[4991]: I0929 09:40:20.711186 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"71db9e99-7aa4-4894-b18f-0afd91c282b3","Type":"ContainerStarted","Data":"f182e076f093ab63484d27e0bf442c327d693c0b29ce7fef8b7af11c4362c2d8"} Sep 29 09:40:21 crc kubenswrapper[4991]: I0929 09:40:21.559214 4991 patch_prober.go:28] interesting pod/router-default-5444994796-xvz8h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 29 09:40:21 crc kubenswrapper[4991]: [-]has-synced failed: reason withheld Sep 29 09:40:21 crc kubenswrapper[4991]: [+]process-running ok Sep 29 09:40:21 crc kubenswrapper[4991]: healthz check failed Sep 29 09:40:21 crc kubenswrapper[4991]: I0929 09:40:21.559844 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xvz8h" podUID="73a7f61a-69f4-4f78-8f1e-1351448b129a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 29 09:40:21 crc kubenswrapper[4991]: I0929 09:40:21.604569 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-m5krf" Sep 29 09:40:21 crc kubenswrapper[4991]: I0929 09:40:21.724246 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"71db9e99-7aa4-4894-b18f-0afd91c282b3","Type":"ContainerStarted","Data":"05421ddd531372de76f00c0ee213f9f74b9977cba56508b3816ccfc27184a9ac"} Sep 29 09:40:21 crc kubenswrapper[4991]: I0929 09:40:21.746916 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.746888861 podStartE2EDuration="2.746888861s" podCreationTimestamp="2025-09-29 09:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-29 09:40:21.74131953 +0000 UTC m=+157.597247558" watchObservedRunningTime="2025-09-29 09:40:21.746888861 +0000 UTC m=+157.602816879" Sep 29 09:40:22 crc kubenswrapper[4991]: I0929 09:40:22.556795 4991 patch_prober.go:28] interesting pod/router-default-5444994796-xvz8h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 29 09:40:22 crc kubenswrapper[4991]: [-]has-synced failed: reason withheld Sep 29 09:40:22 crc kubenswrapper[4991]: [+]process-running ok Sep 29 09:40:22 crc kubenswrapper[4991]: healthz check failed Sep 29 09:40:22 crc kubenswrapper[4991]: I0929 09:40:22.556856 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xvz8h" podUID="73a7f61a-69f4-4f78-8f1e-1351448b129a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 29 09:40:22 crc kubenswrapper[4991]: I0929 09:40:22.739123 4991 generic.go:334] "Generic (PLEG): container finished" podID="71db9e99-7aa4-4894-b18f-0afd91c282b3" containerID="05421ddd531372de76f00c0ee213f9f74b9977cba56508b3816ccfc27184a9ac" exitCode=0 Sep 29 09:40:22 crc kubenswrapper[4991]: I0929 09:40:22.739191 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"71db9e99-7aa4-4894-b18f-0afd91c282b3","Type":"ContainerDied","Data":"05421ddd531372de76f00c0ee213f9f74b9977cba56508b3816ccfc27184a9ac"} Sep 29 09:40:23 crc kubenswrapper[4991]: I0929 09:40:23.555285 4991 patch_prober.go:28] interesting pod/router-default-5444994796-xvz8h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 29 09:40:23 crc kubenswrapper[4991]: [-]has-synced failed: reason withheld Sep 29 09:40:23 crc kubenswrapper[4991]: [+]process-running ok Sep 29 09:40:23 crc kubenswrapper[4991]: healthz check failed Sep 29 09:40:23 crc kubenswrapper[4991]: I0929 09:40:23.555392 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xvz8h" podUID="73a7f61a-69f4-4f78-8f1e-1351448b129a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 29 09:40:24 crc kubenswrapper[4991]: I0929 09:40:24.557864 4991 patch_prober.go:28] interesting pod/router-default-5444994796-xvz8h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 29 09:40:24 crc kubenswrapper[4991]: [-]has-synced failed: reason withheld Sep 29 09:40:24 crc kubenswrapper[4991]: [+]process-running ok Sep 29 09:40:24 crc kubenswrapper[4991]: healthz check failed Sep 29 09:40:24 crc kubenswrapper[4991]: I0929 09:40:24.558456 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xvz8h" podUID="73a7f61a-69f4-4f78-8f1e-1351448b129a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 29 09:40:25 crc kubenswrapper[4991]: I0929 09:40:25.568312 4991 patch_prober.go:28] interesting pod/router-default-5444994796-xvz8h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 29 
09:40:25 crc kubenswrapper[4991]: [-]has-synced failed: reason withheld Sep 29 09:40:25 crc kubenswrapper[4991]: [+]process-running ok Sep 29 09:40:25 crc kubenswrapper[4991]: healthz check failed Sep 29 09:40:25 crc kubenswrapper[4991]: I0929 09:40:25.568395 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xvz8h" podUID="73a7f61a-69f4-4f78-8f1e-1351448b129a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 29 09:40:25 crc kubenswrapper[4991]: I0929 09:40:25.624369 4991 patch_prober.go:28] interesting pod/downloads-7954f5f757-jqqrh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Sep 29 09:40:25 crc kubenswrapper[4991]: I0929 09:40:25.624442 4991 patch_prober.go:28] interesting pod/downloads-7954f5f757-jqqrh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Sep 29 09:40:25 crc kubenswrapper[4991]: I0929 09:40:25.624526 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jqqrh" podUID="d949b064-ca4d-454e-a0d9-2aa9dd40d4e1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Sep 29 09:40:25 crc kubenswrapper[4991]: I0929 09:40:25.624493 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jqqrh" podUID="d949b064-ca4d-454e-a0d9-2aa9dd40d4e1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Sep 29 09:40:26 crc kubenswrapper[4991]: I0929 09:40:26.555519 4991 patch_prober.go:28] interesting pod/router-default-5444994796-xvz8h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 29 09:40:26 crc kubenswrapper[4991]: [-]has-synced failed: reason withheld Sep 29 09:40:26 crc kubenswrapper[4991]: [+]process-running ok Sep 29 09:40:26 crc kubenswrapper[4991]: healthz check failed Sep 29 09:40:26 crc kubenswrapper[4991]: I0929 09:40:26.555592 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xvz8h" podUID="73a7f61a-69f4-4f78-8f1e-1351448b129a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 29 09:40:26 crc kubenswrapper[4991]: I0929 09:40:26.575184 4991 patch_prober.go:28] interesting pod/console-f9d7485db-62rs2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.38:8443/health\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Sep 29 09:40:26 crc kubenswrapper[4991]: I0929 09:40:26.575317 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-62rs2" podUID="011674e5-30e1-4632-aee7-d0dbb06e6824" containerName="console" probeResult="failure" output="Get \"https://10.217.0.38:8443/health\": dial tcp 10.217.0.38:8443: connect: connection refused" Sep 29 09:40:27 crc kubenswrapper[4991]: I0929 09:40:27.554194 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-ingress/router-default-5444994796-xvz8h" Sep 29 09:40:27 crc kubenswrapper[4991]: I0929 09:40:27.558002 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xvz8h" Sep 29 09:40:27 crc kubenswrapper[4991]: I0929 09:40:27.836817 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs\") pod \"network-metrics-daemon-7m5sp\" (UID: \"48c35818-43cb-4bbf-bf05-37fd375d0d70\") " pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:40:27 crc kubenswrapper[4991]: I0929 09:40:27.844140 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48c35818-43cb-4bbf-bf05-37fd375d0d70-metrics-certs\") pod \"network-metrics-daemon-7m5sp\" (UID: \"48c35818-43cb-4bbf-bf05-37fd375d0d70\") " pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:40:28 crc kubenswrapper[4991]: I0929 09:40:28.042398 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7m5sp" Sep 29 09:40:28 crc kubenswrapper[4991]: I0929 09:40:28.665085 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 29 09:40:28 crc kubenswrapper[4991]: I0929 09:40:28.750691 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71db9e99-7aa4-4894-b18f-0afd91c282b3-kube-api-access\") pod \"71db9e99-7aa4-4894-b18f-0afd91c282b3\" (UID: \"71db9e99-7aa4-4894-b18f-0afd91c282b3\") " Sep 29 09:40:28 crc kubenswrapper[4991]: I0929 09:40:28.751200 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71db9e99-7aa4-4894-b18f-0afd91c282b3-kubelet-dir\") pod \"71db9e99-7aa4-4894-b18f-0afd91c282b3\" (UID: \"71db9e99-7aa4-4894-b18f-0afd91c282b3\") " Sep 29 09:40:28 crc kubenswrapper[4991]: I0929 09:40:28.751325 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71db9e99-7aa4-4894-b18f-0afd91c282b3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "71db9e99-7aa4-4894-b18f-0afd91c282b3" (UID: "71db9e99-7aa4-4894-b18f-0afd91c282b3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:40:28 crc kubenswrapper[4991]: I0929 09:40:28.751529 4991 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71db9e99-7aa4-4894-b18f-0afd91c282b3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Sep 29 09:40:28 crc kubenswrapper[4991]: I0929 09:40:28.754562 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71db9e99-7aa4-4894-b18f-0afd91c282b3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "71db9e99-7aa4-4894-b18f-0afd91c282b3" (UID: "71db9e99-7aa4-4894-b18f-0afd91c282b3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:40:28 crc kubenswrapper[4991]: I0929 09:40:28.805706 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"71db9e99-7aa4-4894-b18f-0afd91c282b3","Type":"ContainerDied","Data":"f182e076f093ab63484d27e0bf442c327d693c0b29ce7fef8b7af11c4362c2d8"} Sep 29 09:40:28 crc kubenswrapper[4991]: I0929 09:40:28.805769 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f182e076f093ab63484d27e0bf442c327d693c0b29ce7fef8b7af11c4362c2d8" Sep 29 09:40:28 crc kubenswrapper[4991]: I0929 09:40:28.805848 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 29 09:40:28 crc kubenswrapper[4991]: I0929 09:40:28.853064 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71db9e99-7aa4-4894-b18f-0afd91c282b3-kube-api-access\") on node \"crc\" DevicePath \"\"" Sep 29 09:40:35 crc kubenswrapper[4991]: I0929 09:40:35.630839 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-jqqrh" Sep 29 09:40:36 crc kubenswrapper[4991]: I0929 09:40:36.578737 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:36 crc kubenswrapper[4991]: I0929 09:40:36.584592 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:40:36 crc kubenswrapper[4991]: I0929 09:40:36.652401 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:40:37 crc kubenswrapper[4991]: I0929 09:40:37.946448 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 09:40:37 crc kubenswrapper[4991]: I0929 09:40:37.946614 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 09:40:44 crc kubenswrapper[4991]: E0929 09:40:44.830002 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Sep 29 09:40:44 crc kubenswrapper[4991]: E0929 09:40:44.831031 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n2xlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-m77cd_openshift-marketplace(58705f67-478a-4a9e-a5e6-3fd3b008def7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Sep 29 09:40:44 crc kubenswrapper[4991]: E0929 09:40:44.832209 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-m77cd" podUID="58705f67-478a-4a9e-a5e6-3fd3b008def7" Sep 29 09:40:46 crc kubenswrapper[4991]: I0929 09:40:46.421075 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2tsv6" Sep 29 09:40:49 crc kubenswrapper[4991]: E0929 09:40:49.764568 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-m77cd" podUID="58705f67-478a-4a9e-a5e6-3fd3b008def7" Sep 29 09:40:49 crc kubenswrapper[4991]: E0929 09:40:49.874265 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Sep 29 09:40:49 crc kubenswrapper[4991]: E0929 09:40:49.874482 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rptnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tsd8p_openshift-marketplace(4211580a-2aa7-4e67-9302-b707342904f1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Sep 29 09:40:49 crc kubenswrapper[4991]: E0929 09:40:49.875740 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-tsd8p" podUID="4211580a-2aa7-4e67-9302-b707342904f1" Sep 29 09:40:49 crc kubenswrapper[4991]: E0929 09:40:49.891758 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Sep 29 09:40:49 crc kubenswrapper[4991]: E0929 09:40:49.892256 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxw79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-hfcp2_openshift-marketplace(78d83a54-a82d-4b17-9d4b-7bc0c2850a9d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Sep 29 09:40:49 crc kubenswrapper[4991]: E0929 09:40:49.893679 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-hfcp2" podUID="78d83a54-a82d-4b17-9d4b-7bc0c2850a9d" Sep 29 09:40:51 crc kubenswrapper[4991]: E0929 09:40:51.224502 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tsd8p" podUID="4211580a-2aa7-4e67-9302-b707342904f1" Sep 29 09:40:51 crc kubenswrapper[4991]: E0929 09:40:51.224577 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-hfcp2" podUID="78d83a54-a82d-4b17-9d4b-7bc0c2850a9d" Sep 29 09:40:51 crc kubenswrapper[4991]: E0929 09:40:51.305325 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Sep 29 09:40:51 crc kubenswrapper[4991]: E0929 09:40:51.305488 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6rvh8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bg24w_openshift-marketplace(c5c835c8-7524-481c-a79c-46a8c860df8a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Sep 29 09:40:51 crc kubenswrapper[4991]: E0929 09:40:51.306629 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-bg24w" podUID="c5c835c8-7524-481c-a79c-46a8c860df8a" Sep 29 09:40:51 crc kubenswrapper[4991]: E0929 09:40:51.309432 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Sep 29 09:40:51 crc kubenswrapper[4991]: E0929 09:40:51.309639 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ggnsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-t8fnv_openshift-marketplace(f37413ae-c89d-40f3-80a9-74ae2723f9bd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Sep 29 09:40:51 crc kubenswrapper[4991]: E0929 09:40:51.311001 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-t8fnv" podUID="f37413ae-c89d-40f3-80a9-74ae2723f9bd" Sep 29 09:40:51 crc kubenswrapper[4991]: E0929 09:40:51.355174 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Sep 29 09:40:51 crc kubenswrapper[4991]: E0929 09:40:51.355411 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmg6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-dt2jf_openshift-marketplace(bab79a2c-0c2e-4112-b5b5-1a421b6a0703): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Sep 29 09:40:51 crc kubenswrapper[4991]: E0929 09:40:51.356725 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-dt2jf" podUID="bab79a2c-0c2e-4112-b5b5-1a421b6a0703" Sep 29 09:40:51 crc kubenswrapper[4991]: I0929 09:40:51.621102 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7m5sp"] Sep 29 09:40:51 crc kubenswrapper[4991]: W0929 09:40:51.669861 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48c35818_43cb_4bbf_bf05_37fd375d0d70.slice/crio-58aa4a128cd8e5e89806c45f7e2b39940254cc898b08bbdffea3685bccab9123 WatchSource:0}: Error finding container 58aa4a128cd8e5e89806c45f7e2b39940254cc898b08bbdffea3685bccab9123: Status 404 returned error can't find the container with id 58aa4a128cd8e5e89806c45f7e2b39940254cc898b08bbdffea3685bccab9123 Sep 29 09:40:52 crc kubenswrapper[4991]: I0929 09:40:52.011444 4991 generic.go:334] "Generic (PLEG): container finished" podID="97f6a988-a989-4dce-ad77-0ef0a45cc2af" containerID="0062a6fbaddb55a0ee7c2515f99495b8dd558fdd735b77f8051433b35345625b" exitCode=0 Sep 29 09:40:52 crc kubenswrapper[4991]: I0929 09:40:52.011527 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlq28" event={"ID":"97f6a988-a989-4dce-ad77-0ef0a45cc2af","Type":"ContainerDied","Data":"0062a6fbaddb55a0ee7c2515f99495b8dd558fdd735b77f8051433b35345625b"} Sep 29 09:40:52 crc kubenswrapper[4991]: I0929 09:40:52.013590 4991 generic.go:334] "Generic (PLEG): container finished" podID="26ce7dbc-9efd-42fc-9539-85b6a48aeec4" containerID="44dd1280de696e8816c83d7a26f592a2707fd585e200f42f8f9e52fc3d89f9b4" exitCode=0 Sep 29 09:40:52 crc 
kubenswrapper[4991]: I0929 09:40:52.013647 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7dpv" event={"ID":"26ce7dbc-9efd-42fc-9539-85b6a48aeec4","Type":"ContainerDied","Data":"44dd1280de696e8816c83d7a26f592a2707fd585e200f42f8f9e52fc3d89f9b4"} Sep 29 09:40:52 crc kubenswrapper[4991]: I0929 09:40:52.021086 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" event={"ID":"48c35818-43cb-4bbf-bf05-37fd375d0d70","Type":"ContainerStarted","Data":"a8b43a25364ba8246a6c83ed6640beea2e0106d064ea6e190ec16e124e9470af"} Sep 29 09:40:52 crc kubenswrapper[4991]: I0929 09:40:52.021139 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" event={"ID":"48c35818-43cb-4bbf-bf05-37fd375d0d70","Type":"ContainerStarted","Data":"58aa4a128cd8e5e89806c45f7e2b39940254cc898b08bbdffea3685bccab9123"} Sep 29 09:40:52 crc kubenswrapper[4991]: E0929 09:40:52.022377 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-t8fnv" podUID="f37413ae-c89d-40f3-80a9-74ae2723f9bd" Sep 29 09:40:52 crc kubenswrapper[4991]: E0929 09:40:52.022377 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-dt2jf" podUID="bab79a2c-0c2e-4112-b5b5-1a421b6a0703" Sep 29 09:40:52 crc kubenswrapper[4991]: E0929 09:40:52.025724 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bg24w" podUID="c5c835c8-7524-481c-a79c-46a8c860df8a" Sep 29 09:40:52 crc kubenswrapper[4991]: I0929 09:40:52.981383 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 29 09:40:53 crc kubenswrapper[4991]: I0929 09:40:53.032411 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7dpv" event={"ID":"26ce7dbc-9efd-42fc-9539-85b6a48aeec4","Type":"ContainerStarted","Data":"31dddffb80e6de31f85b06674a346c461bb899e5abdd0fd7d57fe929e39ff5ae"} Sep 29 09:40:53 crc kubenswrapper[4991]: I0929 09:40:53.036103 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7m5sp" event={"ID":"48c35818-43cb-4bbf-bf05-37fd375d0d70","Type":"ContainerStarted","Data":"071fd2e69a19b68e7511b292c78b53bd57fd1c26638b0d8fd12341c91721243e"} Sep 29 09:40:53 crc kubenswrapper[4991]: I0929 09:40:53.045141 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlq28" event={"ID":"97f6a988-a989-4dce-ad77-0ef0a45cc2af","Type":"ContainerStarted","Data":"ffd5b08668771d44fe67a7093e2d6cfe6e6cad18a8c1698520ddad6eb47f441f"} Sep 29 09:40:53 crc kubenswrapper[4991]: I0929 09:40:53.060631 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v7dpv" podStartSLOduration=3.807779827 podStartE2EDuration="43.060605099s" 
podCreationTimestamp="2025-09-29 09:40:10 +0000 UTC" firstStartedPulling="2025-09-29 09:40:13.176202631 +0000 UTC m=+149.032130659" lastFinishedPulling="2025-09-29 09:40:52.429027903 +0000 UTC m=+188.284955931" observedRunningTime="2025-09-29 09:40:53.058355242 +0000 UTC m=+188.914283370" watchObservedRunningTime="2025-09-29 09:40:53.060605099 +0000 UTC m=+188.916533147"
Sep 29 09:40:53 crc kubenswrapper[4991]: I0929 09:40:53.108653 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wlq28" podStartSLOduration=2.949979261 podStartE2EDuration="41.108626894s" podCreationTimestamp="2025-09-29 09:40:12 +0000 UTC" firstStartedPulling="2025-09-29 09:40:14.377702985 +0000 UTC m=+150.233631013" lastFinishedPulling="2025-09-29 09:40:52.536350618 +0000 UTC m=+188.392278646" observedRunningTime="2025-09-29 09:40:53.102766956 +0000 UTC m=+188.958694994" watchObservedRunningTime="2025-09-29 09:40:53.108626894 +0000 UTC m=+188.964554932"
Sep 29 09:40:53 crc kubenswrapper[4991]: I0929 09:40:53.129989 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-7m5sp" podStartSLOduration=168.129941243 podStartE2EDuration="2m48.129941243s" podCreationTimestamp="2025-09-29 09:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:40:53.125695766 +0000 UTC m=+188.981623834" watchObservedRunningTime="2025-09-29 09:40:53.129941243 +0000 UTC m=+188.985869281"
Sep 29 09:41:00 crc kubenswrapper[4991]: I0929 09:41:00.568685 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v7dpv"
Sep 29 09:41:00 crc kubenswrapper[4991]: I0929 09:41:00.569596 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v7dpv"
Sep 29 09:41:00 crc kubenswrapper[4991]: I0929 09:41:00.955580 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v7dpv"
Sep 29 09:41:01 crc kubenswrapper[4991]: I0929 09:41:01.161139 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v7dpv"
Sep 29 09:41:02 crc kubenswrapper[4991]: I0929 09:41:02.636641 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wlq28"
Sep 29 09:41:02 crc kubenswrapper[4991]: I0929 09:41:02.636757 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wlq28"
Sep 29 09:41:02 crc kubenswrapper[4991]: I0929 09:41:02.698297 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wlq28"
Sep 29 09:41:03 crc kubenswrapper[4991]: I0929 09:41:03.185536 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wlq28"
Sep 29 09:41:04 crc kubenswrapper[4991]: I0929 09:41:04.126150 4991 generic.go:334] "Generic (PLEG): container finished" podID="78d83a54-a82d-4b17-9d4b-7bc0c2850a9d" containerID="27ecaadbc5ad5cfed3de868c03c5a96931f96aa7a129de35364c36b44b2bfba7" exitCode=0
Sep 29 09:41:04 crc kubenswrapper[4991]: I0929 09:41:04.126227 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hfcp2" event={"ID":"78d83a54-a82d-4b17-9d4b-7bc0c2850a9d","Type":"ContainerDied","Data":"27ecaadbc5ad5cfed3de868c03c5a96931f96aa7a129de35364c36b44b2bfba7"}
Sep 29 09:41:04 crc kubenswrapper[4991]: I0929 09:41:04.131162 4991 generic.go:334] "Generic (PLEG): container finished" podID="58705f67-478a-4a9e-a5e6-3fd3b008def7" containerID="fb4f97168ca6ad241b76298c8bbe2399093dbbb4fbdeb5d0fdc6b42cf4d90b41" exitCode=0
Sep 29 09:41:04 crc kubenswrapper[4991]: I0929 09:41:04.131435 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m77cd" event={"ID":"58705f67-478a-4a9e-a5e6-3fd3b008def7","Type":"ContainerDied","Data":"fb4f97168ca6ad241b76298c8bbe2399093dbbb4fbdeb5d0fdc6b42cf4d90b41"}
Sep 29 09:41:05 crc kubenswrapper[4991]: I0929 09:41:05.137632 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m77cd" event={"ID":"58705f67-478a-4a9e-a5e6-3fd3b008def7","Type":"ContainerStarted","Data":"84facea2cc5eced6fd2ed724bca8d57f0b4c67fe4526fc0242787f8414282bcb"}
Sep 29 09:41:05 crc kubenswrapper[4991]: I0929 09:41:05.139987 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hfcp2" event={"ID":"78d83a54-a82d-4b17-9d4b-7bc0c2850a9d","Type":"ContainerStarted","Data":"dff3c560b490a885de4c5504ed3f61d70e32996f32157e8b02ad3a2fded94514"}
Sep 29 09:41:05 crc kubenswrapper[4991]: I0929 09:41:05.160652 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m77cd" podStartSLOduration=3.016510209 podStartE2EDuration="52.160628899s" podCreationTimestamp="2025-09-29 09:40:13 +0000 UTC" firstStartedPulling="2025-09-29 09:40:15.422720283 +0000 UTC m=+151.278648311" lastFinishedPulling="2025-09-29 09:41:04.566838953 +0000 UTC m=+200.422767001" observedRunningTime="2025-09-29 09:41:05.15793113 +0000 UTC m=+201.013859168" watchObservedRunningTime="2025-09-29 09:41:05.160628899 +0000 UTC m=+201.016556927"
Sep 29 09:41:05 crc kubenswrapper[4991]: I0929 09:41:05.178787 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hfcp2" podStartSLOduration=3.683414993 podStartE2EDuration="55.17876587s" podCreationTimestamp="2025-09-29 09:40:10 +0000 UTC" firstStartedPulling="2025-09-29 09:40:13.180177701 +0000 UTC m=+149.036105729" lastFinishedPulling="2025-09-29 09:41:04.675528578 +0000 UTC m=+200.531456606" observedRunningTime="2025-09-29 09:41:05.175149578 +0000 UTC m=+201.031077606" watchObservedRunningTime="2025-09-29 09:41:05.17876587 +0000 UTC m=+201.034693898"
Sep 29 09:41:06 crc kubenswrapper[4991]: I0929 09:41:06.147149 4991 generic.go:334] "Generic (PLEG): container finished" podID="f37413ae-c89d-40f3-80a9-74ae2723f9bd" containerID="a0c1d7bf0b2b35e9152395363d622c218737f16e9f4129e88d2af2627c2fe9d9" exitCode=0
Sep 29 09:41:06 crc kubenswrapper[4991]: I0929 09:41:06.147229 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t8fnv" event={"ID":"f37413ae-c89d-40f3-80a9-74ae2723f9bd","Type":"ContainerDied","Data":"a0c1d7bf0b2b35e9152395363d622c218737f16e9f4129e88d2af2627c2fe9d9"}
Sep 29 09:41:06 crc kubenswrapper[4991]: I0929 09:41:06.151709 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsd8p" event={"ID":"4211580a-2aa7-4e67-9302-b707342904f1","Type":"ContainerStarted","Data":"d5dcae455d6637cf3925097debcaf93ef10264ec39165c770581ad8b6285ca0e"}
Sep 29 09:41:06 crc kubenswrapper[4991]: I0929 09:41:06.157046 4991 generic.go:334] "Generic (PLEG): container finished" podID="c5c835c8-7524-481c-a79c-46a8c860df8a" containerID="dda73b3a73e4fe910a25f4bd03cd1cf718b58abec3de598df96bfe0db1fbf7c6" exitCode=0
Sep 29 09:41:06 crc kubenswrapper[4991]: I0929 09:41:06.157098 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bg24w" event={"ID":"c5c835c8-7524-481c-a79c-46a8c860df8a","Type":"ContainerDied","Data":"dda73b3a73e4fe910a25f4bd03cd1cf718b58abec3de598df96bfe0db1fbf7c6"}
Sep 29 09:41:07 crc kubenswrapper[4991]: I0929 09:41:07.171061 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bg24w" event={"ID":"c5c835c8-7524-481c-a79c-46a8c860df8a","Type":"ContainerStarted","Data":"12d506fe022654d4690bd68b8bd0a06f18e7ced6256410c546382321e59c167c"}
Sep 29 09:41:07 crc kubenswrapper[4991]: I0929 09:41:07.184366 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t8fnv" event={"ID":"f37413ae-c89d-40f3-80a9-74ae2723f9bd","Type":"ContainerStarted","Data":"eef834ccc0eb22a2a0e07d1bc39feaf6d0b44e765642221b37bc5e8c3d64b909"}
Sep 29 09:41:07 crc kubenswrapper[4991]: I0929 09:41:07.194694 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bg24w" podStartSLOduration=4.003793168 podStartE2EDuration="55.194671094s" podCreationTimestamp="2025-09-29 09:40:12 +0000 UTC" firstStartedPulling="2025-09-29 09:40:15.395556545 +0000 UTC m=+151.251484573" lastFinishedPulling="2025-09-29 09:41:06.586434471 +0000 UTC m=+202.442362499" observedRunningTime="2025-09-29 09:41:07.193656818 +0000 UTC m=+203.049584876" watchObservedRunningTime="2025-09-29 09:41:07.194671094 +0000 UTC m=+203.050599122"
Sep 29 09:41:07 crc kubenswrapper[4991]: I0929 09:41:07.195488 4991 generic.go:334] "Generic (PLEG): container finished" podID="bab79a2c-0c2e-4112-b5b5-1a421b6a0703" containerID="7e03d99da17c58582060298a61ec3ddaa3e9ae4918a38204977b4d61345b1143" exitCode=0
Sep 29 09:41:07 crc kubenswrapper[4991]: I0929 09:41:07.195588 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dt2jf" event={"ID":"bab79a2c-0c2e-4112-b5b5-1a421b6a0703","Type":"ContainerDied","Data":"7e03d99da17c58582060298a61ec3ddaa3e9ae4918a38204977b4d61345b1143"}
Sep 29 09:41:07 crc kubenswrapper[4991]: I0929 09:41:07.199295 4991 generic.go:334] "Generic (PLEG): container finished" podID="4211580a-2aa7-4e67-9302-b707342904f1" containerID="d5dcae455d6637cf3925097debcaf93ef10264ec39165c770581ad8b6285ca0e" exitCode=0
Sep 29 09:41:07 crc kubenswrapper[4991]: I0929 09:41:07.199353 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsd8p" event={"ID":"4211580a-2aa7-4e67-9302-b707342904f1","Type":"ContainerDied","Data":"d5dcae455d6637cf3925097debcaf93ef10264ec39165c770581ad8b6285ca0e"}
Sep 29 09:41:07 crc kubenswrapper[4991]: I0929 09:41:07.217299 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t8fnv" podStartSLOduration=3.743489055 podStartE2EDuration="57.217272209s" podCreationTimestamp="2025-09-29 09:40:10 +0000 UTC" firstStartedPulling="2025-09-29 09:40:13.226602786 +0000 UTC m=+149.082530814" lastFinishedPulling="2025-09-29 09:41:06.70038594 +0000 UTC m=+202.556313968" observedRunningTime="2025-09-29 09:41:07.214485118 +0000 UTC m=+203.070413146" watchObservedRunningTime="2025-09-29 09:41:07.217272209 +0000 UTC m=+203.073200247"
Sep 29 09:41:07 crc kubenswrapper[4991]: I0929 09:41:07.946565 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 29 09:41:07 crc kubenswrapper[4991]: I0929 09:41:07.946965 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 29 09:41:07 crc kubenswrapper[4991]: I0929 09:41:07.947038 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk"
Sep 29 09:41:07 crc kubenswrapper[4991]: I0929 09:41:07.947656 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Sep 29 09:41:07 crc kubenswrapper[4991]: I0929 09:41:07.947766 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40" gracePeriod=600
Sep 29 09:41:08 crc kubenswrapper[4991]: I0929 09:41:08.206718 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dt2jf" event={"ID":"bab79a2c-0c2e-4112-b5b5-1a421b6a0703","Type":"ContainerStarted","Data":"cfcd2e898777d24fc52d4e35440f6e5cd7975c571ff1e53944de2be682d70fda"}
Sep 29 09:41:08 crc kubenswrapper[4991]: I0929 09:41:08.211191 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsd8p" event={"ID":"4211580a-2aa7-4e67-9302-b707342904f1","Type":"ContainerStarted","Data":"10922e253d74fefdc11a1298a0daec98244f7bed8bed45069bb8dd5266dd9476"}
Sep 29 09:41:08 crc kubenswrapper[4991]: I0929 09:41:08.216467 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40" exitCode=0
Sep 29 09:41:08 crc kubenswrapper[4991]: I0929 09:41:08.216523 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40"}
Sep 29 09:41:08 crc kubenswrapper[4991]: I0929 09:41:08.249055 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dt2jf" podStartSLOduration=3.369790207 podStartE2EDuration="58.249034776s" podCreationTimestamp="2025-09-29 09:40:10 +0000 UTC" firstStartedPulling="2025-09-29 09:40:12.880248063 +0000 UTC m=+148.736176091" lastFinishedPulling="2025-09-29 09:41:07.759492632 +0000 UTC m=+203.615420660" observedRunningTime="2025-09-29 09:41:08.227905708 +0000 UTC m=+204.083833746" watchObservedRunningTime="2025-09-29 09:41:08.249034776 +0000 UTC m=+204.104962814"
Sep 29 09:41:09 crc kubenswrapper[4991]: I0929 09:41:09.227312 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"056391fc94cbbece8ac64c5877a12f6599370622ae65f3c7e775dc34b320aeb9"}
Sep 29 09:41:09 crc kubenswrapper[4991]: I0929 09:41:09.257578 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tsd8p" podStartSLOduration=4.125215171 podStartE2EDuration="56.257553632s" podCreationTimestamp="2025-09-29 09:40:13 +0000 UTC" firstStartedPulling="2025-09-29 09:40:15.508918343 +0000 UTC m=+151.364846371" lastFinishedPulling="2025-09-29 09:41:07.641256794 +0000 UTC m=+203.497184832" observedRunningTime="2025-09-29 09:41:08.256018993 +0000 UTC m=+204.111947031" watchObservedRunningTime="2025-09-29 09:41:09.257553632 +0000 UTC m=+205.113481670"
Sep 29 09:41:10 crc kubenswrapper[4991]: I0929 09:41:10.670341 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dt2jf"
Sep 29 09:41:10 crc kubenswrapper[4991]: I0929 09:41:10.670801 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dt2jf"
Sep 29 09:41:10 crc kubenswrapper[4991]: I0929 09:41:10.727434 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dt2jf"
Sep 29 09:41:10 crc kubenswrapper[4991]: I0929 09:41:10.809333 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hfcp2"
Sep 29 09:41:10 crc kubenswrapper[4991]: I0929 09:41:10.809758 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hfcp2"
Sep 29 09:41:10 crc kubenswrapper[4991]: I0929 09:41:10.881547 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hfcp2"
Sep 29 09:41:11 crc kubenswrapper[4991]: I0929 09:41:11.115563 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t8fnv"
Sep 29 09:41:11 crc kubenswrapper[4991]: I0929 09:41:11.115627 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t8fnv"
Sep 29 09:41:11 crc kubenswrapper[4991]: I0929 09:41:11.187139 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t8fnv"
Sep 29 09:41:11 crc kubenswrapper[4991]: I0929 09:41:11.295799 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hfcp2"
Sep 29 09:41:11 crc kubenswrapper[4991]: I0929 09:41:11.298378 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t8fnv"
Sep 29 09:41:13 crc kubenswrapper[4991]: I0929 09:41:13.119667 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bg24w"
Sep 29 09:41:13 crc kubenswrapper[4991]: I0929 09:41:13.119943 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bg24w"
Sep 29 09:41:13 crc kubenswrapper[4991]: I0929 09:41:13.188091 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bg24w"
Sep 29 09:41:13 crc kubenswrapper[4991]: I0929 09:41:13.294464 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bg24w"
Sep 29 09:41:13 crc kubenswrapper[4991]: I0929 09:41:13.594599 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hfcp2"]
Sep 29 09:41:13 crc kubenswrapper[4991]: I0929 09:41:13.595073 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hfcp2" podUID="78d83a54-a82d-4b17-9d4b-7bc0c2850a9d" containerName="registry-server" containerID="cri-o://dff3c560b490a885de4c5504ed3f61d70e32996f32157e8b02ad3a2fded94514" gracePeriod=2
Sep 29 09:41:13 crc kubenswrapper[4991]: I0929 09:41:13.649207 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m77cd"
Sep 29 09:41:13 crc kubenswrapper[4991]: I0929 09:41:13.649266 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m77cd"
Sep 29 09:41:13 crc kubenswrapper[4991]: I0929 09:41:13.730607 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m77cd"
Sep 29 09:41:13 crc kubenswrapper[4991]: I0929 09:41:13.765173 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f9tdw"]
Sep 29 09:41:13 crc kubenswrapper[4991]: I0929 09:41:13.977892 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tsd8p"
Sep 29 09:41:13 crc kubenswrapper[4991]: I0929 09:41:13.978262 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tsd8p"
Sep 29 09:41:13 crc kubenswrapper[4991]: I0929 09:41:13.994964 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hfcp2"
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.022164 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tsd8p"
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.100446 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78d83a54-a82d-4b17-9d4b-7bc0c2850a9d-catalog-content\") pod \"78d83a54-a82d-4b17-9d4b-7bc0c2850a9d\" (UID: \"78d83a54-a82d-4b17-9d4b-7bc0c2850a9d\") "
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.100540 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78d83a54-a82d-4b17-9d4b-7bc0c2850a9d-utilities\") pod \"78d83a54-a82d-4b17-9d4b-7bc0c2850a9d\" (UID: \"78d83a54-a82d-4b17-9d4b-7bc0c2850a9d\") "
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.100599 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxw79\" (UniqueName: \"kubernetes.io/projected/78d83a54-a82d-4b17-9d4b-7bc0c2850a9d-kube-api-access-vxw79\") pod \"78d83a54-a82d-4b17-9d4b-7bc0c2850a9d\" (UID: \"78d83a54-a82d-4b17-9d4b-7bc0c2850a9d\") "
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.106148 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78d83a54-a82d-4b17-9d4b-7bc0c2850a9d-utilities" (OuterVolumeSpecName: "utilities") pod "78d83a54-a82d-4b17-9d4b-7bc0c2850a9d" (UID: "78d83a54-a82d-4b17-9d4b-7bc0c2850a9d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.111989 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78d83a54-a82d-4b17-9d4b-7bc0c2850a9d-kube-api-access-vxw79" (OuterVolumeSpecName: "kube-api-access-vxw79") pod "78d83a54-a82d-4b17-9d4b-7bc0c2850a9d" (UID: "78d83a54-a82d-4b17-9d4b-7bc0c2850a9d"). InnerVolumeSpecName "kube-api-access-vxw79". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.153218 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78d83a54-a82d-4b17-9d4b-7bc0c2850a9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78d83a54-a82d-4b17-9d4b-7bc0c2850a9d" (UID: "78d83a54-a82d-4b17-9d4b-7bc0c2850a9d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.202241 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78d83a54-a82d-4b17-9d4b-7bc0c2850a9d-utilities\") on node \"crc\" DevicePath \"\""
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.202277 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxw79\" (UniqueName: \"kubernetes.io/projected/78d83a54-a82d-4b17-9d4b-7bc0c2850a9d-kube-api-access-vxw79\") on node \"crc\" DevicePath \"\""
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.202292 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78d83a54-a82d-4b17-9d4b-7bc0c2850a9d-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.260195 4991 generic.go:334] "Generic (PLEG): container finished" podID="78d83a54-a82d-4b17-9d4b-7bc0c2850a9d" containerID="dff3c560b490a885de4c5504ed3f61d70e32996f32157e8b02ad3a2fded94514" exitCode=0
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.260229 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hfcp2" event={"ID":"78d83a54-a82d-4b17-9d4b-7bc0c2850a9d","Type":"ContainerDied","Data":"dff3c560b490a885de4c5504ed3f61d70e32996f32157e8b02ad3a2fded94514"}
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.260258 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hfcp2"
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.260290 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hfcp2" event={"ID":"78d83a54-a82d-4b17-9d4b-7bc0c2850a9d","Type":"ContainerDied","Data":"78d8a00a62bf4f53d0757b5ad183cdb73c262e4d91f4114539560da60d3eea5f"}
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.260314 4991 scope.go:117] "RemoveContainer" containerID="dff3c560b490a885de4c5504ed3f61d70e32996f32157e8b02ad3a2fded94514"
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.292553 4991 scope.go:117] "RemoveContainer" containerID="27ecaadbc5ad5cfed3de868c03c5a96931f96aa7a129de35364c36b44b2bfba7"
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.297898 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hfcp2"]
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.304082 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hfcp2"]
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.313938 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m77cd"
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.331012 4991 scope.go:117] "RemoveContainer" containerID="e06d1899d8ad0e3e0fbe11658b5e2975ae4d71bf3383f9b24331cb5ce737bd43"
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.331711 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tsd8p"
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.389867 4991 scope.go:117] "RemoveContainer" containerID="dff3c560b490a885de4c5504ed3f61d70e32996f32157e8b02ad3a2fded94514"
Sep 29 09:41:14 crc kubenswrapper[4991]: E0929 09:41:14.390400 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dff3c560b490a885de4c5504ed3f61d70e32996f32157e8b02ad3a2fded94514\": container with ID starting with dff3c560b490a885de4c5504ed3f61d70e32996f32157e8b02ad3a2fded94514 not found: ID does not exist" containerID="dff3c560b490a885de4c5504ed3f61d70e32996f32157e8b02ad3a2fded94514"
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.390426 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dff3c560b490a885de4c5504ed3f61d70e32996f32157e8b02ad3a2fded94514"} err="failed to get container status \"dff3c560b490a885de4c5504ed3f61d70e32996f32157e8b02ad3a2fded94514\": rpc error: code = NotFound desc = could not find container \"dff3c560b490a885de4c5504ed3f61d70e32996f32157e8b02ad3a2fded94514\": container with ID starting with dff3c560b490a885de4c5504ed3f61d70e32996f32157e8b02ad3a2fded94514 not found: ID does not exist"
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.390448 4991 scope.go:117] "RemoveContainer" containerID="27ecaadbc5ad5cfed3de868c03c5a96931f96aa7a129de35364c36b44b2bfba7"
Sep 29 09:41:14 crc kubenswrapper[4991]: E0929 09:41:14.390728 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27ecaadbc5ad5cfed3de868c03c5a96931f96aa7a129de35364c36b44b2bfba7\": container with ID starting with 27ecaadbc5ad5cfed3de868c03c5a96931f96aa7a129de35364c36b44b2bfba7 not found: ID does not exist" containerID="27ecaadbc5ad5cfed3de868c03c5a96931f96aa7a129de35364c36b44b2bfba7"
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.390752 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27ecaadbc5ad5cfed3de868c03c5a96931f96aa7a129de35364c36b44b2bfba7"} err="failed to get container status \"27ecaadbc5ad5cfed3de868c03c5a96931f96aa7a129de35364c36b44b2bfba7\": rpc error: code = NotFound desc = could not find container \"27ecaadbc5ad5cfed3de868c03c5a96931f96aa7a129de35364c36b44b2bfba7\": container with ID starting with 27ecaadbc5ad5cfed3de868c03c5a96931f96aa7a129de35364c36b44b2bfba7 not found: ID does not exist"
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.390766 4991 scope.go:117] "RemoveContainer" containerID="e06d1899d8ad0e3e0fbe11658b5e2975ae4d71bf3383f9b24331cb5ce737bd43"
Sep 29 09:41:14 crc kubenswrapper[4991]: E0929 09:41:14.400429 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e06d1899d8ad0e3e0fbe11658b5e2975ae4d71bf3383f9b24331cb5ce737bd43\": container with ID starting with e06d1899d8ad0e3e0fbe11658b5e2975ae4d71bf3383f9b24331cb5ce737bd43 not found: ID does not exist" containerID="e06d1899d8ad0e3e0fbe11658b5e2975ae4d71bf3383f9b24331cb5ce737bd43"
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.400462 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e06d1899d8ad0e3e0fbe11658b5e2975ae4d71bf3383f9b24331cb5ce737bd43"} err="failed to get container status \"e06d1899d8ad0e3e0fbe11658b5e2975ae4d71bf3383f9b24331cb5ce737bd43\": rpc error: code = NotFound desc = could not find container \"e06d1899d8ad0e3e0fbe11658b5e2975ae4d71bf3383f9b24331cb5ce737bd43\": container with ID starting with e06d1899d8ad0e3e0fbe11658b5e2975ae4d71bf3383f9b24331cb5ce737bd43 not found: ID does not exist"
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.933295 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78d83a54-a82d-4b17-9d4b-7bc0c2850a9d" path="/var/lib/kubelet/pods/78d83a54-a82d-4b17-9d4b-7bc0c2850a9d/volumes"
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.994082 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t8fnv"]
Sep 29 09:41:14 crc kubenswrapper[4991]: I0929 09:41:14.994494 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t8fnv" podUID="f37413ae-c89d-40f3-80a9-74ae2723f9bd" containerName="registry-server" containerID="cri-o://eef834ccc0eb22a2a0e07d1bc39feaf6d0b44e765642221b37bc5e8c3d64b909" gracePeriod=2
Sep 29 09:41:15 crc kubenswrapper[4991]: I0929 09:41:15.992504 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bg24w"]
Sep 29 09:41:15 crc kubenswrapper[4991]: I0929 09:41:15.992798 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bg24w" podUID="c5c835c8-7524-481c-a79c-46a8c860df8a" containerName="registry-server" containerID="cri-o://12d506fe022654d4690bd68b8bd0a06f18e7ced6256410c546382321e59c167c" gracePeriod=2
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.288658 4991 generic.go:334] "Generic (PLEG): container finished" podID="c5c835c8-7524-481c-a79c-46a8c860df8a" containerID="12d506fe022654d4690bd68b8bd0a06f18e7ced6256410c546382321e59c167c" exitCode=0
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.288737 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bg24w" event={"ID":"c5c835c8-7524-481c-a79c-46a8c860df8a","Type":"ContainerDied","Data":"12d506fe022654d4690bd68b8bd0a06f18e7ced6256410c546382321e59c167c"}
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.295625 4991 generic.go:334] "Generic (PLEG): container finished" podID="f37413ae-c89d-40f3-80a9-74ae2723f9bd" containerID="eef834ccc0eb22a2a0e07d1bc39feaf6d0b44e765642221b37bc5e8c3d64b909" exitCode=0
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.295703 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t8fnv" event={"ID":"f37413ae-c89d-40f3-80a9-74ae2723f9bd","Type":"ContainerDied","Data":"eef834ccc0eb22a2a0e07d1bc39feaf6d0b44e765642221b37bc5e8c3d64b909"}
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.412569 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t8fnv"
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.515975 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bg24w"
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.531052 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37413ae-c89d-40f3-80a9-74ae2723f9bd-utilities\") pod \"f37413ae-c89d-40f3-80a9-74ae2723f9bd\" (UID: \"f37413ae-c89d-40f3-80a9-74ae2723f9bd\") "
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.531185 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37413ae-c89d-40f3-80a9-74ae2723f9bd-catalog-content\") pod \"f37413ae-c89d-40f3-80a9-74ae2723f9bd\" (UID: \"f37413ae-c89d-40f3-80a9-74ae2723f9bd\") "
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.531250 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggnsz\" (UniqueName: \"kubernetes.io/projected/f37413ae-c89d-40f3-80a9-74ae2723f9bd-kube-api-access-ggnsz\") pod \"f37413ae-c89d-40f3-80a9-74ae2723f9bd\" (UID: \"f37413ae-c89d-40f3-80a9-74ae2723f9bd\") "
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.531985 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f37413ae-c89d-40f3-80a9-74ae2723f9bd-utilities" (OuterVolumeSpecName: "utilities") pod "f37413ae-c89d-40f3-80a9-74ae2723f9bd" (UID: "f37413ae-c89d-40f3-80a9-74ae2723f9bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.537727 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f37413ae-c89d-40f3-80a9-74ae2723f9bd-kube-api-access-ggnsz" (OuterVolumeSpecName: "kube-api-access-ggnsz") pod "f37413ae-c89d-40f3-80a9-74ae2723f9bd" (UID: "f37413ae-c89d-40f3-80a9-74ae2723f9bd"). InnerVolumeSpecName "kube-api-access-ggnsz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.598913 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f37413ae-c89d-40f3-80a9-74ae2723f9bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f37413ae-c89d-40f3-80a9-74ae2723f9bd" (UID: "f37413ae-c89d-40f3-80a9-74ae2723f9bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.632323 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5c835c8-7524-481c-a79c-46a8c860df8a-catalog-content\") pod \"c5c835c8-7524-481c-a79c-46a8c860df8a\" (UID: \"c5c835c8-7524-481c-a79c-46a8c860df8a\") "
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.632414 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5c835c8-7524-481c-a79c-46a8c860df8a-utilities\") pod \"c5c835c8-7524-481c-a79c-46a8c860df8a\" (UID: \"c5c835c8-7524-481c-a79c-46a8c860df8a\") "
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.632456 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rvh8\" (UniqueName: \"kubernetes.io/projected/c5c835c8-7524-481c-a79c-46a8c860df8a-kube-api-access-6rvh8\") pod \"c5c835c8-7524-481c-a79c-46a8c860df8a\" (UID: \"c5c835c8-7524-481c-a79c-46a8c860df8a\") "
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.632710 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37413ae-c89d-40f3-80a9-74ae2723f9bd-utilities\") on node \"crc\" DevicePath \"\""
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.632735 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37413ae-c89d-40f3-80a9-74ae2723f9bd-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.632749 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggnsz\" (UniqueName: \"kubernetes.io/projected/f37413ae-c89d-40f3-80a9-74ae2723f9bd-kube-api-access-ggnsz\") on node \"crc\" DevicePath \"\""
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.633388 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5c835c8-7524-481c-a79c-46a8c860df8a-utilities" (OuterVolumeSpecName: "utilities") pod "c5c835c8-7524-481c-a79c-46a8c860df8a" (UID: "c5c835c8-7524-481c-a79c-46a8c860df8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.636242 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5c835c8-7524-481c-a79c-46a8c860df8a-kube-api-access-6rvh8" (OuterVolumeSpecName: "kube-api-access-6rvh8") pod "c5c835c8-7524-481c-a79c-46a8c860df8a" (UID: "c5c835c8-7524-481c-a79c-46a8c860df8a"). InnerVolumeSpecName "kube-api-access-6rvh8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.644673 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5c835c8-7524-481c-a79c-46a8c860df8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5c835c8-7524-481c-a79c-46a8c860df8a" (UID: "c5c835c8-7524-481c-a79c-46a8c860df8a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.734724 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5c835c8-7524-481c-a79c-46a8c860df8a-utilities\") on node \"crc\" DevicePath \"\""
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.734794 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rvh8\" (UniqueName: \"kubernetes.io/projected/c5c835c8-7524-481c-a79c-46a8c860df8a-kube-api-access-6rvh8\") on node \"crc\" DevicePath \"\""
Sep 29 09:41:16 crc kubenswrapper[4991]: I0929 09:41:16.734815 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5c835c8-7524-481c-a79c-46a8c860df8a-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 29 09:41:17 crc kubenswrapper[4991]: I0929 09:41:17.303985 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t8fnv" event={"ID":"f37413ae-c89d-40f3-80a9-74ae2723f9bd","Type":"ContainerDied","Data":"83043ce02e716e28c7591c040dd50e40df77c4699d218e08ee71d2b5b5557090"}
Sep 29 09:41:17 crc kubenswrapper[4991]: I0929 09:41:17.304330 4991 scope.go:117] "RemoveContainer" containerID="eef834ccc0eb22a2a0e07d1bc39feaf6d0b44e765642221b37bc5e8c3d64b909"
Sep 29 09:41:17 crc kubenswrapper[4991]: I0929 09:41:17.304044 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t8fnv"
Sep 29 09:41:17 crc kubenswrapper[4991]: I0929 09:41:17.314429 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bg24w" event={"ID":"c5c835c8-7524-481c-a79c-46a8c860df8a","Type":"ContainerDied","Data":"8bb2b66fca5f0829973ee5506a5d102df5e5106c1a31daeb17ccfa5e02c92313"}
Sep 29 09:41:17 crc kubenswrapper[4991]: I0929 09:41:17.314594 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bg24w"
Sep 29 09:41:17 crc kubenswrapper[4991]: I0929 09:41:17.327459 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t8fnv"]
Sep 29 09:41:17 crc kubenswrapper[4991]: I0929 09:41:17.329299 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t8fnv"]
Sep 29 09:41:17 crc kubenswrapper[4991]: I0929 09:41:17.334699 4991 scope.go:117] "RemoveContainer" containerID="a0c1d7bf0b2b35e9152395363d622c218737f16e9f4129e88d2af2627c2fe9d9"
Sep 29 09:41:17 crc kubenswrapper[4991]: I0929 09:41:17.344589 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bg24w"]
Sep 29 09:41:17 crc kubenswrapper[4991]: I0929 09:41:17.353085 4991 scope.go:117] "RemoveContainer" containerID="be8a1f5553c0a6444dbfe7ea76f3acf6da090514c7972306b0f19e86d1130cec"
Sep 29 09:41:17 crc kubenswrapper[4991]: I0929 09:41:17.353148 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bg24w"]
Sep 29 09:41:17 crc kubenswrapper[4991]: I0929 09:41:17.376436 4991 scope.go:117] "RemoveContainer" containerID="12d506fe022654d4690bd68b8bd0a06f18e7ced6256410c546382321e59c167c"
Sep 29 09:41:17 crc kubenswrapper[4991]: I0929 09:41:17.394826 4991 scope.go:117] "RemoveContainer" containerID="dda73b3a73e4fe910a25f4bd03cd1cf718b58abec3de598df96bfe0db1fbf7c6"
Sep 29 09:41:17 crc kubenswrapper[4991]: I0929 09:41:17.414985 4991 scope.go:117] "RemoveContainer" containerID="a928eb012227e03c8febc4a3ff4dd7fb6f5e30e4c7d5b430dcd56e7cd478752b"
Sep 29 09:41:18 crc kubenswrapper[4991]: I0929 09:41:18.395896 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tsd8p"]
Sep 29 09:41:18 crc kubenswrapper[4991]: I0929 09:41:18.396222 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tsd8p" podUID="4211580a-2aa7-4e67-9302-b707342904f1" containerName="registry-server" containerID="cri-o://10922e253d74fefdc11a1298a0daec98244f7bed8bed45069bb8dd5266dd9476" gracePeriod=2
Sep 29 09:41:18 crc kubenswrapper[4991]: I0929 09:41:18.820590 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tsd8p"
Sep 29 09:41:18 crc kubenswrapper[4991]: I0929 09:41:18.867285 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rptnz\" (UniqueName: \"kubernetes.io/projected/4211580a-2aa7-4e67-9302-b707342904f1-kube-api-access-rptnz\") pod \"4211580a-2aa7-4e67-9302-b707342904f1\" (UID: \"4211580a-2aa7-4e67-9302-b707342904f1\") "
Sep 29 09:41:18 crc kubenswrapper[4991]: I0929 09:41:18.867344 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4211580a-2aa7-4e67-9302-b707342904f1-utilities\") pod \"4211580a-2aa7-4e67-9302-b707342904f1\" (UID: \"4211580a-2aa7-4e67-9302-b707342904f1\") "
Sep 29 09:41:18 crc kubenswrapper[4991]: I0929 09:41:18.867391 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4211580a-2aa7-4e67-9302-b707342904f1-catalog-content\") pod \"4211580a-2aa7-4e67-9302-b707342904f1\" (UID: \"4211580a-2aa7-4e67-9302-b707342904f1\") "
Sep 29 09:41:18 crc kubenswrapper[4991]: I0929 09:41:18.868655 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4211580a-2aa7-4e67-9302-b707342904f1-utilities" (OuterVolumeSpecName: "utilities") pod "4211580a-2aa7-4e67-9302-b707342904f1" (UID: "4211580a-2aa7-4e67-9302-b707342904f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 09:41:18 crc kubenswrapper[4991]: I0929 09:41:18.875378 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4211580a-2aa7-4e67-9302-b707342904f1-kube-api-access-rptnz" (OuterVolumeSpecName: "kube-api-access-rptnz") pod "4211580a-2aa7-4e67-9302-b707342904f1" (UID: "4211580a-2aa7-4e67-9302-b707342904f1"). InnerVolumeSpecName "kube-api-access-rptnz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:41:18 crc kubenswrapper[4991]: I0929 09:41:18.938386 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5c835c8-7524-481c-a79c-46a8c860df8a" path="/var/lib/kubelet/pods/c5c835c8-7524-481c-a79c-46a8c860df8a/volumes"
Sep 29 09:41:18 crc kubenswrapper[4991]: I0929 09:41:18.939016 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f37413ae-c89d-40f3-80a9-74ae2723f9bd" path="/var/lib/kubelet/pods/f37413ae-c89d-40f3-80a9-74ae2723f9bd/volumes"
Sep 29 09:41:18 crc kubenswrapper[4991]: I0929 09:41:18.951880 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4211580a-2aa7-4e67-9302-b707342904f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4211580a-2aa7-4e67-9302-b707342904f1" (UID: "4211580a-2aa7-4e67-9302-b707342904f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 09:41:18 crc kubenswrapper[4991]: I0929 09:41:18.968464 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rptnz\" (UniqueName: \"kubernetes.io/projected/4211580a-2aa7-4e67-9302-b707342904f1-kube-api-access-rptnz\") on node \"crc\" DevicePath \"\""
Sep 29 09:41:18 crc kubenswrapper[4991]: I0929 09:41:18.968499 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4211580a-2aa7-4e67-9302-b707342904f1-utilities\") on node \"crc\" DevicePath \"\""
Sep 29 09:41:18 crc kubenswrapper[4991]: I0929 09:41:18.968510 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4211580a-2aa7-4e67-9302-b707342904f1-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 29 09:41:19 crc kubenswrapper[4991]: I0929 09:41:19.330793 4991 generic.go:334] "Generic (PLEG): container finished" podID="4211580a-2aa7-4e67-9302-b707342904f1" containerID="10922e253d74fefdc11a1298a0daec98244f7bed8bed45069bb8dd5266dd9476" exitCode=0
Sep 29 09:41:19 crc kubenswrapper[4991]: I0929 09:41:19.330851 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tsd8p"
Sep 29 09:41:19 crc kubenswrapper[4991]: I0929 09:41:19.330870 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsd8p" event={"ID":"4211580a-2aa7-4e67-9302-b707342904f1","Type":"ContainerDied","Data":"10922e253d74fefdc11a1298a0daec98244f7bed8bed45069bb8dd5266dd9476"}
Sep 29 09:41:19 crc kubenswrapper[4991]: I0929 09:41:19.331270 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tsd8p" event={"ID":"4211580a-2aa7-4e67-9302-b707342904f1","Type":"ContainerDied","Data":"4f10f3ae1913223d47299c2b85275a4ddbe0c13ff28441dff0324f1d6ecd2f2b"}
Sep 29 09:41:19 crc kubenswrapper[4991]: I0929 09:41:19.331298 4991 scope.go:117] "RemoveContainer" containerID="10922e253d74fefdc11a1298a0daec98244f7bed8bed45069bb8dd5266dd9476"
Sep 29 09:41:19 crc kubenswrapper[4991]: I0929 09:41:19.355752 4991 scope.go:117] "RemoveContainer" containerID="d5dcae455d6637cf3925097debcaf93ef10264ec39165c770581ad8b6285ca0e"
Sep 29 09:41:19 crc kubenswrapper[4991]: I0929 09:41:19.364578 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tsd8p"]
Sep 29 09:41:19 crc kubenswrapper[4991]: I0929 09:41:19.370509 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tsd8p"]
Sep 29 09:41:19 crc kubenswrapper[4991]: I0929 09:41:19.377019 4991 scope.go:117] "RemoveContainer" containerID="da7221b993a6b0d15a3e375a4050a70517283944b51135be2b48904ad2bf9347"
Sep 29 09:41:19 crc kubenswrapper[4991]: I0929 09:41:19.396632 4991 scope.go:117] "RemoveContainer" containerID="10922e253d74fefdc11a1298a0daec98244f7bed8bed45069bb8dd5266dd9476"
Sep 29 09:41:19 crc kubenswrapper[4991]: E0929 09:41:19.397193 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10922e253d74fefdc11a1298a0daec98244f7bed8bed45069bb8dd5266dd9476\": container with ID starting with 10922e253d74fefdc11a1298a0daec98244f7bed8bed45069bb8dd5266dd9476 not found: ID does not exist" containerID="10922e253d74fefdc11a1298a0daec98244f7bed8bed45069bb8dd5266dd9476"
Sep 29 09:41:19 crc kubenswrapper[4991]: I0929 09:41:19.397233 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10922e253d74fefdc11a1298a0daec98244f7bed8bed45069bb8dd5266dd9476"} err="failed to get container status \"10922e253d74fefdc11a1298a0daec98244f7bed8bed45069bb8dd5266dd9476\": rpc error: code = NotFound desc = could not find container \"10922e253d74fefdc11a1298a0daec98244f7bed8bed45069bb8dd5266dd9476\": container with ID starting with 10922e253d74fefdc11a1298a0daec98244f7bed8bed45069bb8dd5266dd9476 not found: ID does not exist"
Sep 29 09:41:19 crc kubenswrapper[4991]: I0929 09:41:19.397259 4991 scope.go:117] "RemoveContainer" containerID="d5dcae455d6637cf3925097debcaf93ef10264ec39165c770581ad8b6285ca0e"
Sep 29 09:41:19 crc kubenswrapper[4991]: E0929 09:41:19.397664 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5dcae455d6637cf3925097debcaf93ef10264ec39165c770581ad8b6285ca0e\": container with ID starting with d5dcae455d6637cf3925097debcaf93ef10264ec39165c770581ad8b6285ca0e not found: ID does not exist" containerID="d5dcae455d6637cf3925097debcaf93ef10264ec39165c770581ad8b6285ca0e"
Sep 29 09:41:19 crc kubenswrapper[4991]: I0929 09:41:19.397684 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5dcae455d6637cf3925097debcaf93ef10264ec39165c770581ad8b6285ca0e"} err="failed to get container status \"d5dcae455d6637cf3925097debcaf93ef10264ec39165c770581ad8b6285ca0e\": rpc error: code = NotFound desc = could not find container \"d5dcae455d6637cf3925097debcaf93ef10264ec39165c770581ad8b6285ca0e\": container with ID starting with d5dcae455d6637cf3925097debcaf93ef10264ec39165c770581ad8b6285ca0e not found: ID does not exist"
Sep 29 09:41:19 crc kubenswrapper[4991]: I0929 09:41:19.397700 4991 scope.go:117] "RemoveContainer" containerID="da7221b993a6b0d15a3e375a4050a70517283944b51135be2b48904ad2bf9347"
Sep 29 09:41:19 crc kubenswrapper[4991]: E0929 09:41:19.398005 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da7221b993a6b0d15a3e375a4050a70517283944b51135be2b48904ad2bf9347\": container with ID starting with da7221b993a6b0d15a3e375a4050a70517283944b51135be2b48904ad2bf9347 not found: ID does not exist" containerID="da7221b993a6b0d15a3e375a4050a70517283944b51135be2b48904ad2bf9347"
Sep 29 09:41:19 crc kubenswrapper[4991]: I0929 09:41:19.398027 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da7221b993a6b0d15a3e375a4050a70517283944b51135be2b48904ad2bf9347"} err="failed to get container status \"da7221b993a6b0d15a3e375a4050a70517283944b51135be2b48904ad2bf9347\": rpc error: code = NotFound desc = could not find container \"da7221b993a6b0d15a3e375a4050a70517283944b51135be2b48904ad2bf9347\": container with ID starting with da7221b993a6b0d15a3e375a4050a70517283944b51135be2b48904ad2bf9347 not found: ID does not exist"
Sep 29 09:41:20 crc kubenswrapper[4991]: I0929 09:41:20.722000 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dt2jf"
Sep 29 09:41:20 crc kubenswrapper[4991]: I0929 09:41:20.933618 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4211580a-2aa7-4e67-9302-b707342904f1" path="/var/lib/kubelet/pods/4211580a-2aa7-4e67-9302-b707342904f1/volumes"
Sep 29 09:41:38 crc kubenswrapper[4991]: I0929 09:41:38.794377 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" podUID="304e7533-e5b5-4db1-8480-3b7cf9b4d58d" containerName="oauth-openshift" containerID="cri-o://1c97c0feab6ad8cc98607d8fc979a061d1b3bf619dbdbb1e145a16e5b8c50a4b" gracePeriod=15
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.291433 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.329972 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-8f56ccf5-jpfws"]
Sep 29 09:41:39 crc kubenswrapper[4991]: E0929 09:41:39.330213 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4211580a-2aa7-4e67-9302-b707342904f1" containerName="extract-content"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.330227 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="4211580a-2aa7-4e67-9302-b707342904f1" containerName="extract-content"
Sep 29 09:41:39 crc kubenswrapper[4991]: E0929 09:41:39.330243 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37413ae-c89d-40f3-80a9-74ae2723f9bd" containerName="extract-utilities"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.330251 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37413ae-c89d-40f3-80a9-74ae2723f9bd" containerName="extract-utilities"
Sep 29 09:41:39 crc kubenswrapper[4991]: E0929 09:41:39.330264 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71db9e99-7aa4-4894-b18f-0afd91c282b3" containerName="pruner"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.330272 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="71db9e99-7aa4-4894-b18f-0afd91c282b3" containerName="pruner"
Sep 29 09:41:39 crc kubenswrapper[4991]: E0929 09:41:39.330287 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d83a54-a82d-4b17-9d4b-7bc0c2850a9d" containerName="extract-utilities"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.330294 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d83a54-a82d-4b17-9d4b-7bc0c2850a9d" containerName="extract-utilities"
Sep 29 09:41:39 crc kubenswrapper[4991]: E0929 09:41:39.330304 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d83a54-a82d-4b17-9d4b-7bc0c2850a9d" containerName="extract-content"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.330313 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d83a54-a82d-4b17-9d4b-7bc0c2850a9d" containerName="extract-content"
Sep 29 09:41:39 crc kubenswrapper[4991]: E0929 09:41:39.330322 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d83a54-a82d-4b17-9d4b-7bc0c2850a9d" containerName="registry-server"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.330330 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d83a54-a82d-4b17-9d4b-7bc0c2850a9d" containerName="registry-server"
Sep 29 09:41:39 crc kubenswrapper[4991]: E0929 09:41:39.330341 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c835c8-7524-481c-a79c-46a8c860df8a" containerName="registry-server"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.330348 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c835c8-7524-481c-a79c-46a8c860df8a" containerName="registry-server"
Sep 29 09:41:39 crc kubenswrapper[4991]: E0929 09:41:39.330358 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4211580a-2aa7-4e67-9302-b707342904f1" containerName="extract-utilities"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.330365 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="4211580a-2aa7-4e67-9302-b707342904f1" containerName="extract-utilities"
Sep 29 09:41:39 crc kubenswrapper[4991]: E0929 09:41:39.330376 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37413ae-c89d-40f3-80a9-74ae2723f9bd" containerName="extract-content"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.330383 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37413ae-c89d-40f3-80a9-74ae2723f9bd" containerName="extract-content"
Sep 29 09:41:39 crc kubenswrapper[4991]: E0929 09:41:39.330393 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="304e7533-e5b5-4db1-8480-3b7cf9b4d58d" containerName="oauth-openshift"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.330400 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="304e7533-e5b5-4db1-8480-3b7cf9b4d58d" containerName="oauth-openshift"
Sep 29 09:41:39 crc kubenswrapper[4991]: E0929 09:41:39.330412 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c835c8-7524-481c-a79c-46a8c860df8a" containerName="extract-utilities"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.330419 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c835c8-7524-481c-a79c-46a8c860df8a" containerName="extract-utilities"
Sep 29 09:41:39 crc kubenswrapper[4991]: E0929 09:41:39.330433 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c835c8-7524-481c-a79c-46a8c860df8a" containerName="extract-content"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.330441 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c835c8-7524-481c-a79c-46a8c860df8a" containerName="extract-content"
Sep 29 09:41:39 crc kubenswrapper[4991]: E0929 09:41:39.330452 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37413ae-c89d-40f3-80a9-74ae2723f9bd" containerName="registry-server"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.330459 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37413ae-c89d-40f3-80a9-74ae2723f9bd" containerName="registry-server"
Sep 29 09:41:39 crc kubenswrapper[4991]: E0929 09:41:39.330468 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4211580a-2aa7-4e67-9302-b707342904f1" containerName="registry-server"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.330475 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="4211580a-2aa7-4e67-9302-b707342904f1" containerName="registry-server"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.330579 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="304e7533-e5b5-4db1-8480-3b7cf9b4d58d" containerName="oauth-openshift"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.330594 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d83a54-a82d-4b17-9d4b-7bc0c2850a9d" containerName="registry-server"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.330601 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="4211580a-2aa7-4e67-9302-b707342904f1" containerName="registry-server"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.330612 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="71db9e99-7aa4-4894-b18f-0afd91c282b3" containerName="pruner"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.330622 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5c835c8-7524-481c-a79c-46a8c860df8a" containerName="registry-server"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.330630 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="f37413ae-c89d-40f3-80a9-74ae2723f9bd" containerName="registry-server"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.331013 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.346795 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-8f56ccf5-jpfws"]
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.440717 4991 generic.go:334] "Generic (PLEG): container finished" podID="304e7533-e5b5-4db1-8480-3b7cf9b4d58d" containerID="1c97c0feab6ad8cc98607d8fc979a061d1b3bf619dbdbb1e145a16e5b8c50a4b" exitCode=0
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.440769 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" event={"ID":"304e7533-e5b5-4db1-8480-3b7cf9b4d58d","Type":"ContainerDied","Data":"1c97c0feab6ad8cc98607d8fc979a061d1b3bf619dbdbb1e145a16e5b8c50a4b"}
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.440798 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.440818 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f9tdw" event={"ID":"304e7533-e5b5-4db1-8480-3b7cf9b4d58d","Type":"ContainerDied","Data":"236b611dcf4f163b9dda06da7fff9dd7f09e31f227782a1c7ce281596271214b"}
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.440850 4991 scope.go:117] "RemoveContainer" containerID="1c97c0feab6ad8cc98607d8fc979a061d1b3bf619dbdbb1e145a16e5b8c50a4b"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.451505 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-serving-cert\") pod \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") "
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.451556 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-session\") pod \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") "
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.451588 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-router-certs\") pod \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") "
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.451653 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-cliconfig\") pod \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") "
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.451697 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-template-login\") pod \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") "
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.451735 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-ocp-branding-template\") pod \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") "
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.451767 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-audit-policies\") pod \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") "
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.451797 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-audit-dir\") pod \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") "
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.451834 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-service-ca\") pod \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") "
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.451868 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-template-error\") pod \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") "
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.451899 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd25s\" (UniqueName: \"kubernetes.io/projected/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-kube-api-access-vd25s\") pod \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") "
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.451941 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-idp-0-file-data\") pod \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") "
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.452014 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-trusted-ca-bundle\") pod \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") "
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.452049 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-template-provider-selection\") pod \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\" (UID: \"304e7533-e5b5-4db1-8480-3b7cf9b4d58d\") "
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.452202 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-system-router-certs\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.452252 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.452296 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.452330 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-user-template-error\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.452363 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2e96c946-faea-487e-8cc0-6861102915e2-audit-policies\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.452408 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-user-template-login\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.452850 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws"
Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.452929 4991 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-system-session\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.453036 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-system-service-ca\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.453082 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2e96c946-faea-487e-8cc0-6861102915e2-audit-dir\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.453148 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.453203 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9kx9\" (UniqueName: \"kubernetes.io/projected/2e96c946-faea-487e-8cc0-6861102915e2-kube-api-access-v9kx9\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.453245 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.453313 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.453350 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "304e7533-e5b5-4db1-8480-3b7cf9b4d58d" (UID: "304e7533-e5b5-4db1-8480-3b7cf9b4d58d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.453502 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.453740 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "304e7533-e5b5-4db1-8480-3b7cf9b4d58d" (UID: "304e7533-e5b5-4db1-8480-3b7cf9b4d58d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.454116 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "304e7533-e5b5-4db1-8480-3b7cf9b4d58d" (UID: "304e7533-e5b5-4db1-8480-3b7cf9b4d58d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.454203 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "304e7533-e5b5-4db1-8480-3b7cf9b4d58d" (UID: "304e7533-e5b5-4db1-8480-3b7cf9b4d58d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.454620 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "304e7533-e5b5-4db1-8480-3b7cf9b4d58d" (UID: "304e7533-e5b5-4db1-8480-3b7cf9b4d58d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.459810 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "304e7533-e5b5-4db1-8480-3b7cf9b4d58d" (UID: "304e7533-e5b5-4db1-8480-3b7cf9b4d58d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.460209 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-kube-api-access-vd25s" (OuterVolumeSpecName: "kube-api-access-vd25s") pod "304e7533-e5b5-4db1-8480-3b7cf9b4d58d" (UID: "304e7533-e5b5-4db1-8480-3b7cf9b4d58d"). InnerVolumeSpecName "kube-api-access-vd25s". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.460233 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "304e7533-e5b5-4db1-8480-3b7cf9b4d58d" (UID: "304e7533-e5b5-4db1-8480-3b7cf9b4d58d"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.462185 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "304e7533-e5b5-4db1-8480-3b7cf9b4d58d" (UID: "304e7533-e5b5-4db1-8480-3b7cf9b4d58d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.464234 4991 scope.go:117] "RemoveContainer" containerID="1c97c0feab6ad8cc98607d8fc979a061d1b3bf619dbdbb1e145a16e5b8c50a4b" Sep 29 09:41:39 crc kubenswrapper[4991]: E0929 09:41:39.464736 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c97c0feab6ad8cc98607d8fc979a061d1b3bf619dbdbb1e145a16e5b8c50a4b\": container with ID starting with 1c97c0feab6ad8cc98607d8fc979a061d1b3bf619dbdbb1e145a16e5b8c50a4b not found: ID does not exist" containerID="1c97c0feab6ad8cc98607d8fc979a061d1b3bf619dbdbb1e145a16e5b8c50a4b" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.464863 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c97c0feab6ad8cc98607d8fc979a061d1b3bf619dbdbb1e145a16e5b8c50a4b"} err="failed to get container status \"1c97c0feab6ad8cc98607d8fc979a061d1b3bf619dbdbb1e145a16e5b8c50a4b\": rpc error: code = NotFound desc = could not find container \"1c97c0feab6ad8cc98607d8fc979a061d1b3bf619dbdbb1e145a16e5b8c50a4b\": container with ID starting with 1c97c0feab6ad8cc98607d8fc979a061d1b3bf619dbdbb1e145a16e5b8c50a4b not found: ID does not exist" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.470265 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "304e7533-e5b5-4db1-8480-3b7cf9b4d58d" (UID: "304e7533-e5b5-4db1-8480-3b7cf9b4d58d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.470369 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "304e7533-e5b5-4db1-8480-3b7cf9b4d58d" (UID: "304e7533-e5b5-4db1-8480-3b7cf9b4d58d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.470757 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "304e7533-e5b5-4db1-8480-3b7cf9b4d58d" (UID: "304e7533-e5b5-4db1-8480-3b7cf9b4d58d"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.470999 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "304e7533-e5b5-4db1-8480-3b7cf9b4d58d" (UID: "304e7533-e5b5-4db1-8480-3b7cf9b4d58d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.471227 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "304e7533-e5b5-4db1-8480-3b7cf9b4d58d" (UID: "304e7533-e5b5-4db1-8480-3b7cf9b4d58d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.554349 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-system-router-certs\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.554413 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.554466 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.554504 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-user-template-error\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.554541 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2e96c946-faea-487e-8cc0-6861102915e2-audit-policies\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.554587 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-user-template-login\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: 
\"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.554621 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.554652 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-system-session\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.554702 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-system-service-ca\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.554740 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2e96c946-faea-487e-8cc0-6861102915e2-audit-dir\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.554785 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.554815 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9kx9\" (UniqueName: \"kubernetes.io/projected/2e96c946-faea-487e-8cc0-6861102915e2-kube-api-access-v9kx9\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.554846 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.554902 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " 
pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.554989 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.555005 4991 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-audit-policies\") on node \"crc\" DevicePath \"\"" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.555018 4991 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-audit-dir\") on node \"crc\" DevicePath \"\"" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.555033 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.555050 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.555070 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd25s\" (UniqueName: \"kubernetes.io/projected/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-kube-api-access-vd25s\") on node \"crc\" DevicePath \"\"" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.555088 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.555107 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.555128 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.555148 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.555165 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.555183 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.555201 4991 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/304e7533-e5b5-4db1-8480-3b7cf9b4d58d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.556229 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2e96c946-faea-487e-8cc0-6861102915e2-audit-dir\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.556530 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.556690 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2e96c946-faea-487e-8cc0-6861102915e2-audit-policies\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.557427 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.557681 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-system-service-ca\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.559899 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-system-router-certs\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.560171 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-user-template-error\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.560849 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" 
(UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.561695 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.562408 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.562870 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-system-session\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.566519 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-user-template-login\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.569011 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2e96c946-faea-487e-8cc0-6861102915e2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.577076 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9kx9\" (UniqueName: \"kubernetes.io/projected/2e96c946-faea-487e-8cc0-6861102915e2-kube-api-access-v9kx9\") pod \"oauth-openshift-8f56ccf5-jpfws\" (UID: \"2e96c946-faea-487e-8cc0-6861102915e2\") " pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.653659 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.796057 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f9tdw"] Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.803689 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f9tdw"] Sep 29 09:41:39 crc kubenswrapper[4991]: I0929 09:41:39.915367 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-8f56ccf5-jpfws"] Sep 29 09:41:40 crc kubenswrapper[4991]: I0929 09:41:40.450261 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" event={"ID":"2e96c946-faea-487e-8cc0-6861102915e2","Type":"ContainerStarted","Data":"97765947d18ea20bfc9491dc9b20c396519f80f51e3e5853bfeb1ca29dfc32af"} Sep 29 09:41:40 crc kubenswrapper[4991]: I0929 09:41:40.450792 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" event={"ID":"2e96c946-faea-487e-8cc0-6861102915e2","Type":"ContainerStarted","Data":"0030ffadcdccd0f28abab055760844e8263f32b8f3a469dac15969aa4ddce2c2"} Sep 29 09:41:40 crc kubenswrapper[4991]: I0929 09:41:40.452207 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:40 crc kubenswrapper[4991]: I0929 09:41:40.490349 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" Sep 29 09:41:40 crc kubenswrapper[4991]: I0929 09:41:40.523329 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-8f56ccf5-jpfws" podStartSLOduration=27.523310599 podStartE2EDuration="27.523310599s" podCreationTimestamp="2025-09-29 09:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:41:40.484462451 +0000 UTC m=+236.340390479" watchObservedRunningTime="2025-09-29 09:41:40.523310599 +0000 UTC m=+236.379238627" Sep 29 09:41:40 crc kubenswrapper[4991]: I0929 09:41:40.934710 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="304e7533-e5b5-4db1-8480-3b7cf9b4d58d" path="/var/lib/kubelet/pods/304e7533-e5b5-4db1-8480-3b7cf9b4d58d/volumes" Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.361924 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v7dpv"] Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.363317 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v7dpv" podUID="26ce7dbc-9efd-42fc-9539-85b6a48aeec4" containerName="registry-server" containerID="cri-o://31dddffb80e6de31f85b06674a346c461bb899e5abdd0fd7d57fe929e39ff5ae" gracePeriod=30 Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.382923 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dt2jf"] Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.383316 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dt2jf" podUID="bab79a2c-0c2e-4112-b5b5-1a421b6a0703" containerName="registry-server" 
containerID="cri-o://cfcd2e898777d24fc52d4e35440f6e5cd7975c571ff1e53944de2be682d70fda" gracePeriod=30 Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.392200 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hz2qk"] Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.392469 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" podUID="c06a05d9-0a6c-4e0a-88e8-023e829f245a" containerName="marketplace-operator" containerID="cri-o://e219100c44ef70245302ef41cf1acbe18226535f9c2686e1312b56dc43aa4d6c" gracePeriod=30 Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.415995 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlq28"] Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.416636 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wlq28" podUID="97f6a988-a989-4dce-ad77-0ef0a45cc2af" containerName="registry-server" containerID="cri-o://ffd5b08668771d44fe67a7093e2d6cfe6e6cad18a8c1698520ddad6eb47f441f" gracePeriod=30 Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.423971 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m77cd"] Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.424212 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m77cd" podUID="58705f67-478a-4a9e-a5e6-3fd3b008def7" containerName="registry-server" containerID="cri-o://84facea2cc5eced6fd2ed724bca8d57f0b4c67fe4526fc0242787f8414282bcb" gracePeriod=30 Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.434056 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xg5gc"] Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.435161 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xg5gc" Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.446829 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xg5gc"] Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.576364 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5cfff707-29b3-4314-acfe-62ce8977d662-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xg5gc\" (UID: \"5cfff707-29b3-4314-acfe-62ce8977d662\") " pod="openshift-marketplace/marketplace-operator-79b997595-xg5gc" Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.576429 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssx7x\" (UniqueName: \"kubernetes.io/projected/5cfff707-29b3-4314-acfe-62ce8977d662-kube-api-access-ssx7x\") pod \"marketplace-operator-79b997595-xg5gc\" (UID: \"5cfff707-29b3-4314-acfe-62ce8977d662\") " pod="openshift-marketplace/marketplace-operator-79b997595-xg5gc" Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.576518 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cfff707-29b3-4314-acfe-62ce8977d662-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xg5gc\" (UID: \"5cfff707-29b3-4314-acfe-62ce8977d662\") " pod="openshift-marketplace/marketplace-operator-79b997595-xg5gc" Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.630394 4991 generic.go:334] "Generic (PLEG): container finished" podID="97f6a988-a989-4dce-ad77-0ef0a45cc2af" containerID="ffd5b08668771d44fe67a7093e2d6cfe6e6cad18a8c1698520ddad6eb47f441f" exitCode=0 Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.630449 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlq28" event={"ID":"97f6a988-a989-4dce-ad77-0ef0a45cc2af","Type":"ContainerDied","Data":"ffd5b08668771d44fe67a7093e2d6cfe6e6cad18a8c1698520ddad6eb47f441f"} Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.633501 4991 generic.go:334] "Generic (PLEG): container finished" podID="c06a05d9-0a6c-4e0a-88e8-023e829f245a" containerID="e219100c44ef70245302ef41cf1acbe18226535f9c2686e1312b56dc43aa4d6c" exitCode=0 Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.633547 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" event={"ID":"c06a05d9-0a6c-4e0a-88e8-023e829f245a","Type":"ContainerDied","Data":"e219100c44ef70245302ef41cf1acbe18226535f9c2686e1312b56dc43aa4d6c"} Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.635487 4991 generic.go:334] "Generic (PLEG): container finished" podID="26ce7dbc-9efd-42fc-9539-85b6a48aeec4" containerID="31dddffb80e6de31f85b06674a346c461bb899e5abdd0fd7d57fe929e39ff5ae" exitCode=0 Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.635540 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7dpv" event={"ID":"26ce7dbc-9efd-42fc-9539-85b6a48aeec4","Type":"ContainerDied","Data":"31dddffb80e6de31f85b06674a346c461bb899e5abdd0fd7d57fe929e39ff5ae"} Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.641374 4991 generic.go:334] "Generic (PLEG): container finished" podID="58705f67-478a-4a9e-a5e6-3fd3b008def7" 
containerID="84facea2cc5eced6fd2ed724bca8d57f0b4c67fe4526fc0242787f8414282bcb" exitCode=0 Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.641415 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m77cd" event={"ID":"58705f67-478a-4a9e-a5e6-3fd3b008def7","Type":"ContainerDied","Data":"84facea2cc5eced6fd2ed724bca8d57f0b4c67fe4526fc0242787f8414282bcb"} Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.646135 4991 generic.go:334] "Generic (PLEG): container finished" podID="bab79a2c-0c2e-4112-b5b5-1a421b6a0703" containerID="cfcd2e898777d24fc52d4e35440f6e5cd7975c571ff1e53944de2be682d70fda" exitCode=0 Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.646162 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dt2jf" event={"ID":"bab79a2c-0c2e-4112-b5b5-1a421b6a0703","Type":"ContainerDied","Data":"cfcd2e898777d24fc52d4e35440f6e5cd7975c571ff1e53944de2be682d70fda"} Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.677827 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5cfff707-29b3-4314-acfe-62ce8977d662-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xg5gc\" (UID: \"5cfff707-29b3-4314-acfe-62ce8977d662\") " pod="openshift-marketplace/marketplace-operator-79b997595-xg5gc" Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.677878 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssx7x\" (UniqueName: \"kubernetes.io/projected/5cfff707-29b3-4314-acfe-62ce8977d662-kube-api-access-ssx7x\") pod \"marketplace-operator-79b997595-xg5gc\" (UID: \"5cfff707-29b3-4314-acfe-62ce8977d662\") " pod="openshift-marketplace/marketplace-operator-79b997595-xg5gc" Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.677897 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cfff707-29b3-4314-acfe-62ce8977d662-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xg5gc\" (UID: \"5cfff707-29b3-4314-acfe-62ce8977d662\") " pod="openshift-marketplace/marketplace-operator-79b997595-xg5gc" Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.679153 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cfff707-29b3-4314-acfe-62ce8977d662-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xg5gc\" (UID: \"5cfff707-29b3-4314-acfe-62ce8977d662\") " pod="openshift-marketplace/marketplace-operator-79b997595-xg5gc" Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.685746 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5cfff707-29b3-4314-acfe-62ce8977d662-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xg5gc\" (UID: \"5cfff707-29b3-4314-acfe-62ce8977d662\") " pod="openshift-marketplace/marketplace-operator-79b997595-xg5gc" Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.694517 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssx7x\" (UniqueName: \"kubernetes.io/projected/5cfff707-29b3-4314-acfe-62ce8977d662-kube-api-access-ssx7x\") pod \"marketplace-operator-79b997595-xg5gc\" (UID: \"5cfff707-29b3-4314-acfe-62ce8977d662\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-xg5gc" Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.829503 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xg5gc" Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.845917 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v7dpv" Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.901197 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.933579 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dt2jf" Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.939385 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlq28" Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.956359 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m77cd" Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.981989 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ce7dbc-9efd-42fc-9539-85b6a48aeec4-utilities\") pod \"26ce7dbc-9efd-42fc-9539-85b6a48aeec4\" (UID: \"26ce7dbc-9efd-42fc-9539-85b6a48aeec4\") " Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.982162 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ce7dbc-9efd-42fc-9539-85b6a48aeec4-catalog-content\") pod \"26ce7dbc-9efd-42fc-9539-85b6a48aeec4\" (UID: \"26ce7dbc-9efd-42fc-9539-85b6a48aeec4\") " Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.982202 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9chx\" (UniqueName: \"kubernetes.io/projected/26ce7dbc-9efd-42fc-9539-85b6a48aeec4-kube-api-access-h9chx\") pod \"26ce7dbc-9efd-42fc-9539-85b6a48aeec4\" (UID: \"26ce7dbc-9efd-42fc-9539-85b6a48aeec4\") " Sep 29 09:42:09 crc kubenswrapper[4991]: I0929 09:42:09.984093 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ce7dbc-9efd-42fc-9539-85b6a48aeec4-utilities" (OuterVolumeSpecName: "utilities") pod "26ce7dbc-9efd-42fc-9539-85b6a48aeec4" (UID: "26ce7dbc-9efd-42fc-9539-85b6a48aeec4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.001147 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ce7dbc-9efd-42fc-9539-85b6a48aeec4-kube-api-access-h9chx" (OuterVolumeSpecName: "kube-api-access-h9chx") pod "26ce7dbc-9efd-42fc-9539-85b6a48aeec4" (UID: "26ce7dbc-9efd-42fc-9539-85b6a48aeec4"). InnerVolumeSpecName "kube-api-access-h9chx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.041203 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ce7dbc-9efd-42fc-9539-85b6a48aeec4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26ce7dbc-9efd-42fc-9539-85b6a48aeec4" (UID: "26ce7dbc-9efd-42fc-9539-85b6a48aeec4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.083492 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58705f67-478a-4a9e-a5e6-3fd3b008def7-utilities\") pod \"58705f67-478a-4a9e-a5e6-3fd3b008def7\" (UID: \"58705f67-478a-4a9e-a5e6-3fd3b008def7\") " Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.083536 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7vs2\" (UniqueName: \"kubernetes.io/projected/97f6a988-a989-4dce-ad77-0ef0a45cc2af-kube-api-access-l7vs2\") pod \"97f6a988-a989-4dce-ad77-0ef0a45cc2af\" (UID: \"97f6a988-a989-4dce-ad77-0ef0a45cc2af\") " Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.083571 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab79a2c-0c2e-4112-b5b5-1a421b6a0703-catalog-content\") pod \"bab79a2c-0c2e-4112-b5b5-1a421b6a0703\" (UID: \"bab79a2c-0c2e-4112-b5b5-1a421b6a0703\") " Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.083597 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97f6a988-a989-4dce-ad77-0ef0a45cc2af-catalog-content\") pod \"97f6a988-a989-4dce-ad77-0ef0a45cc2af\" (UID: \"97f6a988-a989-4dce-ad77-0ef0a45cc2af\") " Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.083650 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c06a05d9-0a6c-4e0a-88e8-023e829f245a-marketplace-operator-metrics\") pod \"c06a05d9-0a6c-4e0a-88e8-023e829f245a\" (UID: \"c06a05d9-0a6c-4e0a-88e8-023e829f245a\") " Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.083673 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c06a05d9-0a6c-4e0a-88e8-023e829f245a-marketplace-trusted-ca\") pod \"c06a05d9-0a6c-4e0a-88e8-023e829f245a\" (UID: \"c06a05d9-0a6c-4e0a-88e8-023e829f245a\") " Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.083720 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab79a2c-0c2e-4112-b5b5-1a421b6a0703-utilities\") pod \"bab79a2c-0c2e-4112-b5b5-1a421b6a0703\" (UID: \"bab79a2c-0c2e-4112-b5b5-1a421b6a0703\") " Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.083746 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97f6a988-a989-4dce-ad77-0ef0a45cc2af-utilities\") pod \"97f6a988-a989-4dce-ad77-0ef0a45cc2af\" (UID: \"97f6a988-a989-4dce-ad77-0ef0a45cc2af\") " Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.083817 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmg6n\" (UniqueName: 
\"kubernetes.io/projected/bab79a2c-0c2e-4112-b5b5-1a421b6a0703-kube-api-access-zmg6n\") pod \"bab79a2c-0c2e-4112-b5b5-1a421b6a0703\" (UID: \"bab79a2c-0c2e-4112-b5b5-1a421b6a0703\") " Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.083833 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58705f67-478a-4a9e-a5e6-3fd3b008def7-catalog-content\") pod \"58705f67-478a-4a9e-a5e6-3fd3b008def7\" (UID: \"58705f67-478a-4a9e-a5e6-3fd3b008def7\") " Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.083868 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8288n\" (UniqueName: \"kubernetes.io/projected/c06a05d9-0a6c-4e0a-88e8-023e829f245a-kube-api-access-8288n\") pod \"c06a05d9-0a6c-4e0a-88e8-023e829f245a\" (UID: \"c06a05d9-0a6c-4e0a-88e8-023e829f245a\") " Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.083889 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2xlj\" (UniqueName: \"kubernetes.io/projected/58705f67-478a-4a9e-a5e6-3fd3b008def7-kube-api-access-n2xlj\") pod \"58705f67-478a-4a9e-a5e6-3fd3b008def7\" (UID: \"58705f67-478a-4a9e-a5e6-3fd3b008def7\") " Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.084638 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bab79a2c-0c2e-4112-b5b5-1a421b6a0703-utilities" (OuterVolumeSpecName: "utilities") pod "bab79a2c-0c2e-4112-b5b5-1a421b6a0703" (UID: "bab79a2c-0c2e-4112-b5b5-1a421b6a0703"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.085596 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58705f67-478a-4a9e-a5e6-3fd3b008def7-utilities" (OuterVolumeSpecName: "utilities") pod "58705f67-478a-4a9e-a5e6-3fd3b008def7" (UID: "58705f67-478a-4a9e-a5e6-3fd3b008def7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.085891 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97f6a988-a989-4dce-ad77-0ef0a45cc2af-kube-api-access-l7vs2" (OuterVolumeSpecName: "kube-api-access-l7vs2") pod "97f6a988-a989-4dce-ad77-0ef0a45cc2af" (UID: "97f6a988-a989-4dce-ad77-0ef0a45cc2af"). InnerVolumeSpecName "kube-api-access-l7vs2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.086738 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9chx\" (UniqueName: \"kubernetes.io/projected/26ce7dbc-9efd-42fc-9539-85b6a48aeec4-kube-api-access-h9chx\") on node \"crc\" DevicePath \"\"" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.086756 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ce7dbc-9efd-42fc-9539-85b6a48aeec4-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.086766 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab79a2c-0c2e-4112-b5b5-1a421b6a0703-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.086774 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58705f67-478a-4a9e-a5e6-3fd3b008def7-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.086784 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7vs2\" (UniqueName: \"kubernetes.io/projected/97f6a988-a989-4dce-ad77-0ef0a45cc2af-kube-api-access-l7vs2\") on node \"crc\" DevicePath \"\"" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.086793 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ce7dbc-9efd-42fc-9539-85b6a48aeec4-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.088308 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c06a05d9-0a6c-4e0a-88e8-023e829f245a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c06a05d9-0a6c-4e0a-88e8-023e829f245a" (UID: "c06a05d9-0a6c-4e0a-88e8-023e829f245a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.088655 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97f6a988-a989-4dce-ad77-0ef0a45cc2af-utilities" (OuterVolumeSpecName: "utilities") pod "97f6a988-a989-4dce-ad77-0ef0a45cc2af" (UID: "97f6a988-a989-4dce-ad77-0ef0a45cc2af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.090475 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c06a05d9-0a6c-4e0a-88e8-023e829f245a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c06a05d9-0a6c-4e0a-88e8-023e829f245a" (UID: "c06a05d9-0a6c-4e0a-88e8-023e829f245a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.090609 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab79a2c-0c2e-4112-b5b5-1a421b6a0703-kube-api-access-zmg6n" (OuterVolumeSpecName: "kube-api-access-zmg6n") pod "bab79a2c-0c2e-4112-b5b5-1a421b6a0703" (UID: "bab79a2c-0c2e-4112-b5b5-1a421b6a0703"). InnerVolumeSpecName "kube-api-access-zmg6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.093448 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58705f67-478a-4a9e-a5e6-3fd3b008def7-kube-api-access-n2xlj" (OuterVolumeSpecName: "kube-api-access-n2xlj") pod "58705f67-478a-4a9e-a5e6-3fd3b008def7" (UID: "58705f67-478a-4a9e-a5e6-3fd3b008def7"). InnerVolumeSpecName "kube-api-access-n2xlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.093987 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c06a05d9-0a6c-4e0a-88e8-023e829f245a-kube-api-access-8288n" (OuterVolumeSpecName: "kube-api-access-8288n") pod "c06a05d9-0a6c-4e0a-88e8-023e829f245a" (UID: "c06a05d9-0a6c-4e0a-88e8-023e829f245a"). InnerVolumeSpecName "kube-api-access-8288n". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.102318 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97f6a988-a989-4dce-ad77-0ef0a45cc2af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97f6a988-a989-4dce-ad77-0ef0a45cc2af" (UID: "97f6a988-a989-4dce-ad77-0ef0a45cc2af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.135066 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bab79a2c-0c2e-4112-b5b5-1a421b6a0703-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bab79a2c-0c2e-4112-b5b5-1a421b6a0703" (UID: "bab79a2c-0c2e-4112-b5b5-1a421b6a0703"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.183366 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58705f67-478a-4a9e-a5e6-3fd3b008def7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58705f67-478a-4a9e-a5e6-3fd3b008def7" (UID: "58705f67-478a-4a9e-a5e6-3fd3b008def7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.188372 4991 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c06a05d9-0a6c-4e0a-88e8-023e829f245a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.188440 4991 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c06a05d9-0a6c-4e0a-88e8-023e829f245a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.188463 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97f6a988-a989-4dce-ad77-0ef0a45cc2af-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.188486 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmg6n\" (UniqueName: \"kubernetes.io/projected/bab79a2c-0c2e-4112-b5b5-1a421b6a0703-kube-api-access-zmg6n\") on node \"crc\" DevicePath \"\"" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.188504 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8288n\" (UniqueName: \"kubernetes.io/projected/c06a05d9-0a6c-4e0a-88e8-023e829f245a-kube-api-access-8288n\") on node \"crc\" DevicePath \"\"" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.188521 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58705f67-478a-4a9e-a5e6-3fd3b008def7-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.188538 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2xlj\" (UniqueName: \"kubernetes.io/projected/58705f67-478a-4a9e-a5e6-3fd3b008def7-kube-api-access-n2xlj\") on node \"crc\" DevicePath \"\"" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.188554 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab79a2c-0c2e-4112-b5b5-1a421b6a0703-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.188571 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97f6a988-a989-4dce-ad77-0ef0a45cc2af-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.354131 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xg5gc"] Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.652891 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" event={"ID":"c06a05d9-0a6c-4e0a-88e8-023e829f245a","Type":"ContainerDied","Data":"0f7493e4696184f1d2b34e2c02f1fcdb5faec0529c6cfae4c7eb52ff9f6b5074"} Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.653388 4991 scope.go:117] "RemoveContainer" containerID="e219100c44ef70245302ef41cf1acbe18226535f9c2686e1312b56dc43aa4d6c" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.652919 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hz2qk" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.654407 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xg5gc" event={"ID":"5cfff707-29b3-4314-acfe-62ce8977d662","Type":"ContainerStarted","Data":"30905680b1345ed63f6fa20258b32f9e6be704333c11adccbd43c7fe8df24397"} Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.654476 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xg5gc" event={"ID":"5cfff707-29b3-4314-acfe-62ce8977d662","Type":"ContainerStarted","Data":"c7623c710950f6238c27d6198ae2c25ab9090c36ae9352803c9ef570a0711cec"} Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.654636 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xg5gc" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.655704 4991 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xg5gc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" start-of-body= Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.655761 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xg5gc" podUID="5cfff707-29b3-4314-acfe-62ce8977d662" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.661169 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7dpv" event={"ID":"26ce7dbc-9efd-42fc-9539-85b6a48aeec4","Type":"ContainerDied","Data":"c622db4ec46d903a958408d6ac6dbdf54f2e4b588bc96311955c1a4b3f452827"} Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.661284 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v7dpv" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.667542 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m77cd" event={"ID":"58705f67-478a-4a9e-a5e6-3fd3b008def7","Type":"ContainerDied","Data":"f590d1c843dcaac7e3047dd98d16be82cb72b3e040b1f699887849e1b953ce9f"} Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.667709 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m77cd" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.672461 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dt2jf" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.673603 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dt2jf" event={"ID":"bab79a2c-0c2e-4112-b5b5-1a421b6a0703","Type":"ContainerDied","Data":"09612dbfeabde5cc4f9d61d9928e08d08a2ed9ce314686762deb484dbc4320b0"} Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.677657 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xg5gc" podStartSLOduration=1.677637994 podStartE2EDuration="1.677637994s" podCreationTimestamp="2025-09-29 09:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:42:10.674131624 +0000 UTC m=+266.530059692" watchObservedRunningTime="2025-09-29 09:42:10.677637994 +0000 UTC m=+266.533566012" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.680113 4991 scope.go:117] "RemoveContainer" containerID="31dddffb80e6de31f85b06674a346c461bb899e5abdd0fd7d57fe929e39ff5ae" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.680474 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlq28" event={"ID":"97f6a988-a989-4dce-ad77-0ef0a45cc2af","Type":"ContainerDied","Data":"228d977c929f3e557ef715cea139a385954e51f890bae0a8047188d81c507484"} Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.680582 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlq28" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.704194 4991 scope.go:117] "RemoveContainer" containerID="44dd1280de696e8816c83d7a26f592a2707fd585e200f42f8f9e52fc3d89f9b4" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.721721 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hz2qk"] Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.742013 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hz2qk"] Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.750310 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v7dpv"] Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.752445 4991 scope.go:117] "RemoveContainer" containerID="33943ac7278c697a5dce13255b55a1e8e1dfafd906f7f42c6f338d3d3d43994f" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.754496 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v7dpv"] Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.755941 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dt2jf"] Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.765024 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dt2jf"] Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.765072 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlq28"] Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.765083 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlq28"] Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.810243 4991 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-operators-m77cd"] Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.814170 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m77cd"] Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.814569 4991 scope.go:117] "RemoveContainer" containerID="84facea2cc5eced6fd2ed724bca8d57f0b4c67fe4526fc0242787f8414282bcb" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.827668 4991 scope.go:117] "RemoveContainer" containerID="fb4f97168ca6ad241b76298c8bbe2399093dbbb4fbdeb5d0fdc6b42cf4d90b41" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.846037 4991 scope.go:117] "RemoveContainer" containerID="4d288e72838836d83fdbb2b70f0d07e1eacaf064623c4053ced4549873802353" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.862664 4991 scope.go:117] "RemoveContainer" containerID="cfcd2e898777d24fc52d4e35440f6e5cd7975c571ff1e53944de2be682d70fda" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.883548 4991 scope.go:117] "RemoveContainer" containerID="7e03d99da17c58582060298a61ec3ddaa3e9ae4918a38204977b4d61345b1143" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.899719 4991 scope.go:117] "RemoveContainer" containerID="1f75fefb829e51a370d3f184a7a79fab09b7b28f803199563cea6cdef223cd54" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.918118 4991 scope.go:117] "RemoveContainer" containerID="ffd5b08668771d44fe67a7093e2d6cfe6e6cad18a8c1698520ddad6eb47f441f" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.937199 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26ce7dbc-9efd-42fc-9539-85b6a48aeec4" path="/var/lib/kubelet/pods/26ce7dbc-9efd-42fc-9539-85b6a48aeec4/volumes" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.937939 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58705f67-478a-4a9e-a5e6-3fd3b008def7" path="/var/lib/kubelet/pods/58705f67-478a-4a9e-a5e6-3fd3b008def7/volumes" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.938509 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97f6a988-a989-4dce-ad77-0ef0a45cc2af" path="/var/lib/kubelet/pods/97f6a988-a989-4dce-ad77-0ef0a45cc2af/volumes" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.939509 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bab79a2c-0c2e-4112-b5b5-1a421b6a0703" path="/var/lib/kubelet/pods/bab79a2c-0c2e-4112-b5b5-1a421b6a0703/volumes" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.940144 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c06a05d9-0a6c-4e0a-88e8-023e829f245a" path="/var/lib/kubelet/pods/c06a05d9-0a6c-4e0a-88e8-023e829f245a/volumes" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.940246 4991 scope.go:117] "RemoveContainer" containerID="0062a6fbaddb55a0ee7c2515f99495b8dd558fdd735b77f8051433b35345625b" Sep 29 09:42:10 crc kubenswrapper[4991]: I0929 09:42:10.955185 4991 scope.go:117] "RemoveContainer" containerID="81435a2b96ee1a417535a4a9221e43cb186b75a9903476bee7d48d60467ad2cd" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.572846 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cvdkf"] Sep 29 09:42:11 crc kubenswrapper[4991]: E0929 09:42:11.573444 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97f6a988-a989-4dce-ad77-0ef0a45cc2af" containerName="extract-content" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.573527 4991 
state_mem.go:107] "Deleted CPUSet assignment" podUID="97f6a988-a989-4dce-ad77-0ef0a45cc2af" containerName="extract-content" Sep 29 09:42:11 crc kubenswrapper[4991]: E0929 09:42:11.573595 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab79a2c-0c2e-4112-b5b5-1a421b6a0703" containerName="extract-utilities" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.573651 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab79a2c-0c2e-4112-b5b5-1a421b6a0703" containerName="extract-utilities" Sep 29 09:42:11 crc kubenswrapper[4991]: E0929 09:42:11.573714 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c06a05d9-0a6c-4e0a-88e8-023e829f245a" containerName="marketplace-operator" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.573780 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c06a05d9-0a6c-4e0a-88e8-023e829f245a" containerName="marketplace-operator" Sep 29 09:42:11 crc kubenswrapper[4991]: E0929 09:42:11.573832 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58705f67-478a-4a9e-a5e6-3fd3b008def7" containerName="registry-server" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.573882 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="58705f67-478a-4a9e-a5e6-3fd3b008def7" containerName="registry-server" Sep 29 09:42:11 crc kubenswrapper[4991]: E0929 09:42:11.573937 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab79a2c-0c2e-4112-b5b5-1a421b6a0703" containerName="extract-content" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.574005 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab79a2c-0c2e-4112-b5b5-1a421b6a0703" containerName="extract-content" Sep 29 09:42:11 crc kubenswrapper[4991]: E0929 09:42:11.574056 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab79a2c-0c2e-4112-b5b5-1a421b6a0703" containerName="registry-server" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.574118 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab79a2c-0c2e-4112-b5b5-1a421b6a0703" containerName="registry-server" Sep 29 09:42:11 crc kubenswrapper[4991]: E0929 09:42:11.574179 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ce7dbc-9efd-42fc-9539-85b6a48aeec4" containerName="registry-server" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.574230 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ce7dbc-9efd-42fc-9539-85b6a48aeec4" containerName="registry-server" Sep 29 09:42:11 crc kubenswrapper[4991]: E0929 09:42:11.574293 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58705f67-478a-4a9e-a5e6-3fd3b008def7" containerName="extract-utilities" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.574345 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="58705f67-478a-4a9e-a5e6-3fd3b008def7" containerName="extract-utilities" Sep 29 09:42:11 crc kubenswrapper[4991]: E0929 09:42:11.574396 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ce7dbc-9efd-42fc-9539-85b6a48aeec4" containerName="extract-content" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.574445 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ce7dbc-9efd-42fc-9539-85b6a48aeec4" containerName="extract-content" Sep 29 09:42:11 crc kubenswrapper[4991]: E0929 09:42:11.574504 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97f6a988-a989-4dce-ad77-0ef0a45cc2af" containerName="extract-utilities" Sep 29 09:42:11 crc 
kubenswrapper[4991]: I0929 09:42:11.574554 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="97f6a988-a989-4dce-ad77-0ef0a45cc2af" containerName="extract-utilities" Sep 29 09:42:11 crc kubenswrapper[4991]: E0929 09:42:11.574614 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58705f67-478a-4a9e-a5e6-3fd3b008def7" containerName="extract-content" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.574665 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="58705f67-478a-4a9e-a5e6-3fd3b008def7" containerName="extract-content" Sep 29 09:42:11 crc kubenswrapper[4991]: E0929 09:42:11.574719 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97f6a988-a989-4dce-ad77-0ef0a45cc2af" containerName="registry-server" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.574768 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="97f6a988-a989-4dce-ad77-0ef0a45cc2af" containerName="registry-server" Sep 29 09:42:11 crc kubenswrapper[4991]: E0929 09:42:11.574824 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ce7dbc-9efd-42fc-9539-85b6a48aeec4" containerName="extract-utilities" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.574874 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ce7dbc-9efd-42fc-9539-85b6a48aeec4" containerName="extract-utilities" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.575017 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="c06a05d9-0a6c-4e0a-88e8-023e829f245a" containerName="marketplace-operator" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.575091 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="58705f67-478a-4a9e-a5e6-3fd3b008def7" containerName="registry-server" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.575146 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab79a2c-0c2e-4112-b5b5-1a421b6a0703" containerName="registry-server" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.575199 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="97f6a988-a989-4dce-ad77-0ef0a45cc2af" containerName="registry-server" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.575248 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ce7dbc-9efd-42fc-9539-85b6a48aeec4" containerName="registry-server" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.575933 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cvdkf" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.578266 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.582918 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cvdkf"] Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.695462 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xg5gc" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.721127 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frktx\" (UniqueName: \"kubernetes.io/projected/3f82d94c-b2cf-4519-a629-90b404272dfd-kube-api-access-frktx\") pod \"certified-operators-cvdkf\" (UID: \"3f82d94c-b2cf-4519-a629-90b404272dfd\") " pod="openshift-marketplace/certified-operators-cvdkf" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.721210 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f82d94c-b2cf-4519-a629-90b404272dfd-utilities\") pod \"certified-operators-cvdkf\" (UID: \"3f82d94c-b2cf-4519-a629-90b404272dfd\") " pod="openshift-marketplace/certified-operators-cvdkf" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.721272 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f82d94c-b2cf-4519-a629-90b404272dfd-catalog-content\") pod \"certified-operators-cvdkf\" (UID: \"3f82d94c-b2cf-4519-a629-90b404272dfd\") " pod="openshift-marketplace/certified-operators-cvdkf" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.779438 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pdnq7"] Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.780495 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pdnq7" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.782227 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.783111 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pdnq7"] Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.823059 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frktx\" (UniqueName: \"kubernetes.io/projected/3f82d94c-b2cf-4519-a629-90b404272dfd-kube-api-access-frktx\") pod \"certified-operators-cvdkf\" (UID: \"3f82d94c-b2cf-4519-a629-90b404272dfd\") " pod="openshift-marketplace/certified-operators-cvdkf" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.823499 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f82d94c-b2cf-4519-a629-90b404272dfd-utilities\") pod \"certified-operators-cvdkf\" (UID: \"3f82d94c-b2cf-4519-a629-90b404272dfd\") " pod="openshift-marketplace/certified-operators-cvdkf" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.823569 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f82d94c-b2cf-4519-a629-90b404272dfd-catalog-content\") pod \"certified-operators-cvdkf\" (UID: \"3f82d94c-b2cf-4519-a629-90b404272dfd\") " pod="openshift-marketplace/certified-operators-cvdkf" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.824087 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f82d94c-b2cf-4519-a629-90b404272dfd-utilities\") pod \"certified-operators-cvdkf\" (UID: \"3f82d94c-b2cf-4519-a629-90b404272dfd\") " pod="openshift-marketplace/certified-operators-cvdkf" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.824146 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f82d94c-b2cf-4519-a629-90b404272dfd-catalog-content\") pod \"certified-operators-cvdkf\" (UID: \"3f82d94c-b2cf-4519-a629-90b404272dfd\") " pod="openshift-marketplace/certified-operators-cvdkf" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.840830 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frktx\" (UniqueName: \"kubernetes.io/projected/3f82d94c-b2cf-4519-a629-90b404272dfd-kube-api-access-frktx\") pod \"certified-operators-cvdkf\" (UID: \"3f82d94c-b2cf-4519-a629-90b404272dfd\") " pod="openshift-marketplace/certified-operators-cvdkf" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.899227 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cvdkf" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.924787 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c4ed58-0d22-42b9-b2dd-5fc7718049d3-utilities\") pod \"redhat-marketplace-pdnq7\" (UID: \"e1c4ed58-0d22-42b9-b2dd-5fc7718049d3\") " pod="openshift-marketplace/redhat-marketplace-pdnq7" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.924860 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c4ed58-0d22-42b9-b2dd-5fc7718049d3-catalog-content\") pod \"redhat-marketplace-pdnq7\" (UID: \"e1c4ed58-0d22-42b9-b2dd-5fc7718049d3\") " pod="openshift-marketplace/redhat-marketplace-pdnq7" Sep 29 09:42:11 crc kubenswrapper[4991]: I0929 09:42:11.924882 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr7tw\" (UniqueName: \"kubernetes.io/projected/e1c4ed58-0d22-42b9-b2dd-5fc7718049d3-kube-api-access-lr7tw\") pod \"redhat-marketplace-pdnq7\" (UID: \"e1c4ed58-0d22-42b9-b2dd-5fc7718049d3\") " pod="openshift-marketplace/redhat-marketplace-pdnq7" Sep 29 09:42:12 crc kubenswrapper[4991]: I0929 09:42:12.028595 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c4ed58-0d22-42b9-b2dd-5fc7718049d3-utilities\") pod \"redhat-marketplace-pdnq7\" (UID: \"e1c4ed58-0d22-42b9-b2dd-5fc7718049d3\") " pod="openshift-marketplace/redhat-marketplace-pdnq7" Sep 29 09:42:12 crc kubenswrapper[4991]: I0929 09:42:12.028985 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c4ed58-0d22-42b9-b2dd-5fc7718049d3-catalog-content\") pod \"redhat-marketplace-pdnq7\" (UID: \"e1c4ed58-0d22-42b9-b2dd-5fc7718049d3\") " pod="openshift-marketplace/redhat-marketplace-pdnq7" Sep 29 09:42:12 crc kubenswrapper[4991]: I0929 09:42:12.029015 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr7tw\" (UniqueName: \"kubernetes.io/projected/e1c4ed58-0d22-42b9-b2dd-5fc7718049d3-kube-api-access-lr7tw\") pod \"redhat-marketplace-pdnq7\" (UID: \"e1c4ed58-0d22-42b9-b2dd-5fc7718049d3\") " pod="openshift-marketplace/redhat-marketplace-pdnq7" Sep 29 09:42:12 crc kubenswrapper[4991]: I0929 09:42:12.030091 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1c4ed58-0d22-42b9-b2dd-5fc7718049d3-catalog-content\") pod \"redhat-marketplace-pdnq7\" (UID: \"e1c4ed58-0d22-42b9-b2dd-5fc7718049d3\") " pod="openshift-marketplace/redhat-marketplace-pdnq7" Sep 29 09:42:12 crc kubenswrapper[4991]: I0929 09:42:12.030377 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1c4ed58-0d22-42b9-b2dd-5fc7718049d3-utilities\") pod \"redhat-marketplace-pdnq7\" (UID: \"e1c4ed58-0d22-42b9-b2dd-5fc7718049d3\") " pod="openshift-marketplace/redhat-marketplace-pdnq7" Sep 29 09:42:12 crc kubenswrapper[4991]: I0929 09:42:12.055844 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr7tw\" (UniqueName: \"kubernetes.io/projected/e1c4ed58-0d22-42b9-b2dd-5fc7718049d3-kube-api-access-lr7tw\") pod 
\"redhat-marketplace-pdnq7\" (UID: \"e1c4ed58-0d22-42b9-b2dd-5fc7718049d3\") " pod="openshift-marketplace/redhat-marketplace-pdnq7" Sep 29 09:42:12 crc kubenswrapper[4991]: I0929 09:42:12.109547 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pdnq7" Sep 29 09:42:12 crc kubenswrapper[4991]: I0929 09:42:12.302546 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cvdkf"] Sep 29 09:42:12 crc kubenswrapper[4991]: W0929 09:42:12.305936 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f82d94c_b2cf_4519_a629_90b404272dfd.slice/crio-641421d367a63ed5eba516e887d2c6c4dbdeb243bf87633f756ed68de5f1d10c WatchSource:0}: Error finding container 641421d367a63ed5eba516e887d2c6c4dbdeb243bf87633f756ed68de5f1d10c: Status 404 returned error can't find the container with id 641421d367a63ed5eba516e887d2c6c4dbdeb243bf87633f756ed68de5f1d10c Sep 29 09:42:12 crc kubenswrapper[4991]: I0929 09:42:12.506821 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pdnq7"] Sep 29 09:42:12 crc kubenswrapper[4991]: I0929 09:42:12.699567 4991 generic.go:334] "Generic (PLEG): container finished" podID="3f82d94c-b2cf-4519-a629-90b404272dfd" containerID="c406f09806397521e1e1c0ea1aa105705a542bd4dbfec48ca35e65897c690e24" exitCode=0 Sep 29 09:42:12 crc kubenswrapper[4991]: I0929 09:42:12.700535 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvdkf" event={"ID":"3f82d94c-b2cf-4519-a629-90b404272dfd","Type":"ContainerDied","Data":"c406f09806397521e1e1c0ea1aa105705a542bd4dbfec48ca35e65897c690e24"} Sep 29 09:42:12 crc kubenswrapper[4991]: I0929 09:42:12.700560 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvdkf" event={"ID":"3f82d94c-b2cf-4519-a629-90b404272dfd","Type":"ContainerStarted","Data":"641421d367a63ed5eba516e887d2c6c4dbdeb243bf87633f756ed68de5f1d10c"} Sep 29 09:42:12 crc kubenswrapper[4991]: I0929 09:42:12.703349 4991 generic.go:334] "Generic (PLEG): container finished" podID="e1c4ed58-0d22-42b9-b2dd-5fc7718049d3" containerID="11f593601a70d4d815d4b63904cc1399eb1be96fa0925488d59e9c917318448b" exitCode=0 Sep 29 09:42:12 crc kubenswrapper[4991]: I0929 09:42:12.704095 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pdnq7" event={"ID":"e1c4ed58-0d22-42b9-b2dd-5fc7718049d3","Type":"ContainerDied","Data":"11f593601a70d4d815d4b63904cc1399eb1be96fa0925488d59e9c917318448b"} Sep 29 09:42:12 crc kubenswrapper[4991]: I0929 09:42:12.704158 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pdnq7" event={"ID":"e1c4ed58-0d22-42b9-b2dd-5fc7718049d3","Type":"ContainerStarted","Data":"7b889e9033b2e17340e7331a685222d23558f30b54caa7d89e6a33a269e3a337"} Sep 29 09:42:13 crc kubenswrapper[4991]: I0929 09:42:13.710797 4991 generic.go:334] "Generic (PLEG): container finished" podID="e1c4ed58-0d22-42b9-b2dd-5fc7718049d3" containerID="ee9ab041f4df861018121d2679949496a99515b4d659fccf05cd3aff19206b97" exitCode=0 Sep 29 09:42:13 crc kubenswrapper[4991]: I0929 09:42:13.710889 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pdnq7" 
event={"ID":"e1c4ed58-0d22-42b9-b2dd-5fc7718049d3","Type":"ContainerDied","Data":"ee9ab041f4df861018121d2679949496a99515b4d659fccf05cd3aff19206b97"} Sep 29 09:42:13 crc kubenswrapper[4991]: I0929 09:42:13.714305 4991 generic.go:334] "Generic (PLEG): container finished" podID="3f82d94c-b2cf-4519-a629-90b404272dfd" containerID="5111a5a8f433f43ac6980a750ed969142a132fda0d16a647782496539750d6ec" exitCode=0 Sep 29 09:42:13 crc kubenswrapper[4991]: I0929 09:42:13.714366 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvdkf" event={"ID":"3f82d94c-b2cf-4519-a629-90b404272dfd","Type":"ContainerDied","Data":"5111a5a8f433f43ac6980a750ed969142a132fda0d16a647782496539750d6ec"} Sep 29 09:42:13 crc kubenswrapper[4991]: I0929 09:42:13.977677 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f7wps"] Sep 29 09:42:13 crc kubenswrapper[4991]: I0929 09:42:13.979694 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f7wps" Sep 29 09:42:13 crc kubenswrapper[4991]: I0929 09:42:13.981739 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.029494 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f7wps"] Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.153518 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a27f8f80-22e1-441b-a095-ef8d4ad55629-catalog-content\") pod \"redhat-operators-f7wps\" (UID: \"a27f8f80-22e1-441b-a095-ef8d4ad55629\") " pod="openshift-marketplace/redhat-operators-f7wps" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.153595 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cdvs\" (UniqueName: \"kubernetes.io/projected/a27f8f80-22e1-441b-a095-ef8d4ad55629-kube-api-access-4cdvs\") pod \"redhat-operators-f7wps\" (UID: \"a27f8f80-22e1-441b-a095-ef8d4ad55629\") " pod="openshift-marketplace/redhat-operators-f7wps" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.153766 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a27f8f80-22e1-441b-a095-ef8d4ad55629-utilities\") pod \"redhat-operators-f7wps\" (UID: \"a27f8f80-22e1-441b-a095-ef8d4ad55629\") " pod="openshift-marketplace/redhat-operators-f7wps" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.179338 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p5pz9"] Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.181080 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p5pz9" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.183121 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.190162 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p5pz9"] Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.255387 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a27f8f80-22e1-441b-a095-ef8d4ad55629-catalog-content\") pod \"redhat-operators-f7wps\" (UID: \"a27f8f80-22e1-441b-a095-ef8d4ad55629\") " pod="openshift-marketplace/redhat-operators-f7wps" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.255451 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cdvs\" (UniqueName: \"kubernetes.io/projected/a27f8f80-22e1-441b-a095-ef8d4ad55629-kube-api-access-4cdvs\") pod \"redhat-operators-f7wps\" (UID: \"a27f8f80-22e1-441b-a095-ef8d4ad55629\") " pod="openshift-marketplace/redhat-operators-f7wps" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.255504 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a27f8f80-22e1-441b-a095-ef8d4ad55629-utilities\") pod \"redhat-operators-f7wps\" (UID: \"a27f8f80-22e1-441b-a095-ef8d4ad55629\") " pod="openshift-marketplace/redhat-operators-f7wps" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.256338 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a27f8f80-22e1-441b-a095-ef8d4ad55629-catalog-content\") pod \"redhat-operators-f7wps\" (UID: \"a27f8f80-22e1-441b-a095-ef8d4ad55629\") " pod="openshift-marketplace/redhat-operators-f7wps" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.256421 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a27f8f80-22e1-441b-a095-ef8d4ad55629-utilities\") pod \"redhat-operators-f7wps\" (UID: \"a27f8f80-22e1-441b-a095-ef8d4ad55629\") " pod="openshift-marketplace/redhat-operators-f7wps" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.284737 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cdvs\" (UniqueName: \"kubernetes.io/projected/a27f8f80-22e1-441b-a095-ef8d4ad55629-kube-api-access-4cdvs\") pod \"redhat-operators-f7wps\" (UID: \"a27f8f80-22e1-441b-a095-ef8d4ad55629\") " pod="openshift-marketplace/redhat-operators-f7wps" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.293661 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f7wps" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.357233 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9wzh\" (UniqueName: \"kubernetes.io/projected/ac1afc48-443a-4054-ba92-e87e384d71a0-kube-api-access-m9wzh\") pod \"community-operators-p5pz9\" (UID: \"ac1afc48-443a-4054-ba92-e87e384d71a0\") " pod="openshift-marketplace/community-operators-p5pz9" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.357314 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac1afc48-443a-4054-ba92-e87e384d71a0-utilities\") pod \"community-operators-p5pz9\" (UID: \"ac1afc48-443a-4054-ba92-e87e384d71a0\") " pod="openshift-marketplace/community-operators-p5pz9" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.357428 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac1afc48-443a-4054-ba92-e87e384d71a0-catalog-content\") pod \"community-operators-p5pz9\" (UID: \"ac1afc48-443a-4054-ba92-e87e384d71a0\") " pod="openshift-marketplace/community-operators-p5pz9" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.458429 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac1afc48-443a-4054-ba92-e87e384d71a0-utilities\") pod \"community-operators-p5pz9\" (UID: \"ac1afc48-443a-4054-ba92-e87e384d71a0\") " pod="openshift-marketplace/community-operators-p5pz9" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.458971 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac1afc48-443a-4054-ba92-e87e384d71a0-utilities\") pod \"community-operators-p5pz9\" (UID: \"ac1afc48-443a-4054-ba92-e87e384d71a0\") " pod="openshift-marketplace/community-operators-p5pz9" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.459181 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac1afc48-443a-4054-ba92-e87e384d71a0-catalog-content\") pod \"community-operators-p5pz9\" (UID: \"ac1afc48-443a-4054-ba92-e87e384d71a0\") " pod="openshift-marketplace/community-operators-p5pz9" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.459515 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9wzh\" (UniqueName: \"kubernetes.io/projected/ac1afc48-443a-4054-ba92-e87e384d71a0-kube-api-access-m9wzh\") pod \"community-operators-p5pz9\" (UID: \"ac1afc48-443a-4054-ba92-e87e384d71a0\") " pod="openshift-marketplace/community-operators-p5pz9" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.459459 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac1afc48-443a-4054-ba92-e87e384d71a0-catalog-content\") pod \"community-operators-p5pz9\" (UID: \"ac1afc48-443a-4054-ba92-e87e384d71a0\") " pod="openshift-marketplace/community-operators-p5pz9" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.478763 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9wzh\" (UniqueName: \"kubernetes.io/projected/ac1afc48-443a-4054-ba92-e87e384d71a0-kube-api-access-m9wzh\") pod 
\"community-operators-p5pz9\" (UID: \"ac1afc48-443a-4054-ba92-e87e384d71a0\") " pod="openshift-marketplace/community-operators-p5pz9" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.526514 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p5pz9" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.727542 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pdnq7" event={"ID":"e1c4ed58-0d22-42b9-b2dd-5fc7718049d3","Type":"ContainerStarted","Data":"6e11c23c3447c0624ecc301d07ec573fd865138e120af263a19c006068f637c9"} Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.730571 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvdkf" event={"ID":"3f82d94c-b2cf-4519-a629-90b404272dfd","Type":"ContainerStarted","Data":"9656ad8448079179c56dd843758d90e2bd93f453e36b847c66d2ecbee2c46fce"} Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.747630 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pdnq7" podStartSLOduration=2.170632331 podStartE2EDuration="3.747612347s" podCreationTimestamp="2025-09-29 09:42:11 +0000 UTC" firstStartedPulling="2025-09-29 09:42:12.706395866 +0000 UTC m=+268.562323894" lastFinishedPulling="2025-09-29 09:42:14.283375882 +0000 UTC m=+270.139303910" observedRunningTime="2025-09-29 09:42:14.746108989 +0000 UTC m=+270.602037027" watchObservedRunningTime="2025-09-29 09:42:14.747612347 +0000 UTC m=+270.603540375" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.756417 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f7wps"] Sep 29 09:42:14 crc kubenswrapper[4991]: W0929 09:42:14.762483 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda27f8f80_22e1_441b_a095_ef8d4ad55629.slice/crio-67f736e3a8031dff534dcebe588ec8791e763fcaad9ef1d092155f8eeb1af263 WatchSource:0}: Error finding container 67f736e3a8031dff534dcebe588ec8791e763fcaad9ef1d092155f8eeb1af263: Status 404 returned error can't find the container with id 67f736e3a8031dff534dcebe588ec8791e763fcaad9ef1d092155f8eeb1af263 Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.771654 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cvdkf" podStartSLOduration=2.333310048 podStartE2EDuration="3.771636921s" podCreationTimestamp="2025-09-29 09:42:11 +0000 UTC" firstStartedPulling="2025-09-29 09:42:12.701211643 +0000 UTC m=+268.557139671" lastFinishedPulling="2025-09-29 09:42:14.139538516 +0000 UTC m=+269.995466544" observedRunningTime="2025-09-29 09:42:14.770332718 +0000 UTC m=+270.626260746" watchObservedRunningTime="2025-09-29 09:42:14.771636921 +0000 UTC m=+270.627564939" Sep 29 09:42:14 crc kubenswrapper[4991]: I0929 09:42:14.946932 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p5pz9"] Sep 29 09:42:14 crc kubenswrapper[4991]: W0929 09:42:14.962765 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac1afc48_443a_4054_ba92_e87e384d71a0.slice/crio-5a733e1b05ec97f09df53d73ff1a50e9eafb21a4ca6c1319394b1eda3e60df2e WatchSource:0}: Error finding container 5a733e1b05ec97f09df53d73ff1a50e9eafb21a4ca6c1319394b1eda3e60df2e: Status 404 returned 
error can't find the container with id 5a733e1b05ec97f09df53d73ff1a50e9eafb21a4ca6c1319394b1eda3e60df2e Sep 29 09:42:15 crc kubenswrapper[4991]: I0929 09:42:15.737567 4991 generic.go:334] "Generic (PLEG): container finished" podID="ac1afc48-443a-4054-ba92-e87e384d71a0" containerID="1f2f637755e9966fe764668c79999a096f03870ad1a79b1768d5c8291f662cb8" exitCode=0 Sep 29 09:42:15 crc kubenswrapper[4991]: I0929 09:42:15.737626 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5pz9" event={"ID":"ac1afc48-443a-4054-ba92-e87e384d71a0","Type":"ContainerDied","Data":"1f2f637755e9966fe764668c79999a096f03870ad1a79b1768d5c8291f662cb8"} Sep 29 09:42:15 crc kubenswrapper[4991]: I0929 09:42:15.737979 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5pz9" event={"ID":"ac1afc48-443a-4054-ba92-e87e384d71a0","Type":"ContainerStarted","Data":"5a733e1b05ec97f09df53d73ff1a50e9eafb21a4ca6c1319394b1eda3e60df2e"} Sep 29 09:42:15 crc kubenswrapper[4991]: I0929 09:42:15.739966 4991 generic.go:334] "Generic (PLEG): container finished" podID="a27f8f80-22e1-441b-a095-ef8d4ad55629" containerID="59b70974030a6d9da26ae8efa1086f71dd2a5cb211bb5436edec93faa8f12a03" exitCode=0 Sep 29 09:42:15 crc kubenswrapper[4991]: I0929 09:42:15.740030 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7wps" event={"ID":"a27f8f80-22e1-441b-a095-ef8d4ad55629","Type":"ContainerDied","Data":"59b70974030a6d9da26ae8efa1086f71dd2a5cb211bb5436edec93faa8f12a03"} Sep 29 09:42:15 crc kubenswrapper[4991]: I0929 09:42:15.740087 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7wps" event={"ID":"a27f8f80-22e1-441b-a095-ef8d4ad55629","Type":"ContainerStarted","Data":"67f736e3a8031dff534dcebe588ec8791e763fcaad9ef1d092155f8eeb1af263"} Sep 29 09:42:16 crc kubenswrapper[4991]: I0929 09:42:16.748452 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5pz9" event={"ID":"ac1afc48-443a-4054-ba92-e87e384d71a0","Type":"ContainerStarted","Data":"4def2d74692968a6c2446645e3dd0577148148c4a973d58693ccf814bce48e68"} Sep 29 09:42:16 crc kubenswrapper[4991]: I0929 09:42:16.752497 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7wps" event={"ID":"a27f8f80-22e1-441b-a095-ef8d4ad55629","Type":"ContainerStarted","Data":"a5173133cca24364a7d7d4c3dedf8008232b8d62eaaf095bcf3367516fb9fb44"} Sep 29 09:42:17 crc kubenswrapper[4991]: I0929 09:42:17.760412 4991 generic.go:334] "Generic (PLEG): container finished" podID="ac1afc48-443a-4054-ba92-e87e384d71a0" containerID="4def2d74692968a6c2446645e3dd0577148148c4a973d58693ccf814bce48e68" exitCode=0 Sep 29 09:42:17 crc kubenswrapper[4991]: I0929 09:42:17.760503 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5pz9" event={"ID":"ac1afc48-443a-4054-ba92-e87e384d71a0","Type":"ContainerDied","Data":"4def2d74692968a6c2446645e3dd0577148148c4a973d58693ccf814bce48e68"} Sep 29 09:42:17 crc kubenswrapper[4991]: I0929 09:42:17.762699 4991 generic.go:334] "Generic (PLEG): container finished" podID="a27f8f80-22e1-441b-a095-ef8d4ad55629" containerID="a5173133cca24364a7d7d4c3dedf8008232b8d62eaaf095bcf3367516fb9fb44" exitCode=0 Sep 29 09:42:17 crc kubenswrapper[4991]: I0929 09:42:17.762734 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7wps" 
event={"ID":"a27f8f80-22e1-441b-a095-ef8d4ad55629","Type":"ContainerDied","Data":"a5173133cca24364a7d7d4c3dedf8008232b8d62eaaf095bcf3367516fb9fb44"} Sep 29 09:42:18 crc kubenswrapper[4991]: I0929 09:42:18.769453 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7wps" event={"ID":"a27f8f80-22e1-441b-a095-ef8d4ad55629","Type":"ContainerStarted","Data":"741b0b113b6351638664a8795c18ca3462f236811998379f909a3f00e2c8a65a"} Sep 29 09:42:18 crc kubenswrapper[4991]: I0929 09:42:18.772840 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5pz9" event={"ID":"ac1afc48-443a-4054-ba92-e87e384d71a0","Type":"ContainerStarted","Data":"92cc13955881567b5c8f7a33abf301c20e5589e4ec59230279e5f143cc68fa49"} Sep 29 09:42:18 crc kubenswrapper[4991]: I0929 09:42:18.790876 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f7wps" podStartSLOduration=3.275006275 podStartE2EDuration="5.790841547s" podCreationTimestamp="2025-09-29 09:42:13 +0000 UTC" firstStartedPulling="2025-09-29 09:42:15.741568371 +0000 UTC m=+271.597496389" lastFinishedPulling="2025-09-29 09:42:18.257403633 +0000 UTC m=+274.113331661" observedRunningTime="2025-09-29 09:42:18.785606063 +0000 UTC m=+274.641534111" watchObservedRunningTime="2025-09-29 09:42:18.790841547 +0000 UTC m=+274.646769575" Sep 29 09:42:18 crc kubenswrapper[4991]: I0929 09:42:18.801211 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p5pz9" podStartSLOduration=2.161085563 podStartE2EDuration="4.801196511s" podCreationTimestamp="2025-09-29 09:42:14 +0000 UTC" firstStartedPulling="2025-09-29 09:42:15.739291002 +0000 UTC m=+271.595237691" lastFinishedPulling="2025-09-29 09:42:18.379420611 +0000 UTC m=+274.235348639" observedRunningTime="2025-09-29 09:42:18.799816416 +0000 UTC m=+274.655744444" watchObservedRunningTime="2025-09-29 09:42:18.801196511 +0000 UTC m=+274.657124539" Sep 29 09:42:21 crc kubenswrapper[4991]: I0929 09:42:21.900166 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cvdkf" Sep 29 09:42:21 crc kubenswrapper[4991]: I0929 09:42:21.900742 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cvdkf" Sep 29 09:42:21 crc kubenswrapper[4991]: I0929 09:42:21.948005 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cvdkf" Sep 29 09:42:22 crc kubenswrapper[4991]: I0929 09:42:22.110493 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pdnq7" Sep 29 09:42:22 crc kubenswrapper[4991]: I0929 09:42:22.110550 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pdnq7" Sep 29 09:42:22 crc kubenswrapper[4991]: I0929 09:42:22.159176 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pdnq7" Sep 29 09:42:22 crc kubenswrapper[4991]: I0929 09:42:22.852454 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pdnq7" Sep 29 09:42:22 crc kubenswrapper[4991]: I0929 09:42:22.857004 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-cvdkf" Sep 29 09:42:24 crc kubenswrapper[4991]: I0929 09:42:24.294567 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f7wps" Sep 29 09:42:24 crc kubenswrapper[4991]: I0929 09:42:24.295354 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f7wps" Sep 29 09:42:24 crc kubenswrapper[4991]: I0929 09:42:24.341068 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f7wps" Sep 29 09:42:24 crc kubenswrapper[4991]: I0929 09:42:24.527496 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p5pz9" Sep 29 09:42:24 crc kubenswrapper[4991]: I0929 09:42:24.527735 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p5pz9" Sep 29 09:42:24 crc kubenswrapper[4991]: I0929 09:42:24.569836 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p5pz9" Sep 29 09:42:24 crc kubenswrapper[4991]: I0929 09:42:24.856656 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p5pz9" Sep 29 09:42:27 crc kubenswrapper[4991]: I0929 09:42:24.857087 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f7wps" Sep 29 09:42:39 crc kubenswrapper[4991]: I0929 09:42:39.854784 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-2c8kq"] Sep 29 09:42:39 crc kubenswrapper[4991]: I0929 09:42:39.856006 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2c8kq" Sep 29 09:42:39 crc kubenswrapper[4991]: I0929 09:42:39.859558 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Sep 29 09:42:39 crc kubenswrapper[4991]: I0929 09:42:39.859908 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Sep 29 09:42:39 crc kubenswrapper[4991]: I0929 09:42:39.860072 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Sep 29 09:42:39 crc kubenswrapper[4991]: I0929 09:42:39.864046 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Sep 29 09:42:39 crc kubenswrapper[4991]: I0929 09:42:39.864142 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Sep 29 09:42:39 crc kubenswrapper[4991]: I0929 09:42:39.877023 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-2c8kq"] Sep 29 09:42:39 crc kubenswrapper[4991]: I0929 09:42:39.938830 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhr8j\" (UniqueName: \"kubernetes.io/projected/94efb7f7-382c-4c29-b967-86e0e905f715-kube-api-access-zhr8j\") pod \"cluster-monitoring-operator-6d5b84845-2c8kq\" (UID: \"94efb7f7-382c-4c29-b967-86e0e905f715\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2c8kq" Sep 29 09:42:39 crc kubenswrapper[4991]: I0929 09:42:39.938926 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/94efb7f7-382c-4c29-b967-86e0e905f715-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-2c8kq\" (UID: \"94efb7f7-382c-4c29-b967-86e0e905f715\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2c8kq" Sep 29 09:42:39 crc kubenswrapper[4991]: I0929 09:42:39.938999 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/94efb7f7-382c-4c29-b967-86e0e905f715-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-2c8kq\" (UID: \"94efb7f7-382c-4c29-b967-86e0e905f715\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2c8kq" Sep 29 09:42:40 crc kubenswrapper[4991]: I0929 09:42:40.040561 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhr8j\" (UniqueName: \"kubernetes.io/projected/94efb7f7-382c-4c29-b967-86e0e905f715-kube-api-access-zhr8j\") pod \"cluster-monitoring-operator-6d5b84845-2c8kq\" (UID: \"94efb7f7-382c-4c29-b967-86e0e905f715\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2c8kq" Sep 29 09:42:40 crc kubenswrapper[4991]: I0929 09:42:40.040763 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/94efb7f7-382c-4c29-b967-86e0e905f715-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-2c8kq\" (UID: \"94efb7f7-382c-4c29-b967-86e0e905f715\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2c8kq" Sep 29 09:42:40 crc kubenswrapper[4991]: I0929 
09:42:40.040824 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/94efb7f7-382c-4c29-b967-86e0e905f715-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-2c8kq\" (UID: \"94efb7f7-382c-4c29-b967-86e0e905f715\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2c8kq" Sep 29 09:42:40 crc kubenswrapper[4991]: I0929 09:42:40.042520 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/94efb7f7-382c-4c29-b967-86e0e905f715-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-2c8kq\" (UID: \"94efb7f7-382c-4c29-b967-86e0e905f715\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2c8kq" Sep 29 09:42:40 crc kubenswrapper[4991]: I0929 09:42:40.052231 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/94efb7f7-382c-4c29-b967-86e0e905f715-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-2c8kq\" (UID: \"94efb7f7-382c-4c29-b967-86e0e905f715\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2c8kq" Sep 29 09:42:40 crc kubenswrapper[4991]: I0929 09:42:40.074191 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhr8j\" (UniqueName: \"kubernetes.io/projected/94efb7f7-382c-4c29-b967-86e0e905f715-kube-api-access-zhr8j\") pod \"cluster-monitoring-operator-6d5b84845-2c8kq\" (UID: \"94efb7f7-382c-4c29-b967-86e0e905f715\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2c8kq" Sep 29 09:42:40 crc kubenswrapper[4991]: I0929 09:42:40.175225 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2c8kq" Sep 29 09:42:40 crc kubenswrapper[4991]: I0929 09:42:40.437559 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-2c8kq"] Sep 29 09:42:40 crc kubenswrapper[4991]: I0929 09:42:40.904070 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2c8kq" event={"ID":"94efb7f7-382c-4c29-b967-86e0e905f715","Type":"ContainerStarted","Data":"9a5f3bab640426a1aa20c3e5734de0edc3e5382a7017bfdb1974b2e8f4d7c41e"} Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.707206 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-b2s5r"] Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.708049 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.724975 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-b2s5r"] Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.780534 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/58633c30-bd70-41cc-8743-ad54936923d5-registry-certificates\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.780583 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/58633c30-bd70-41cc-8743-ad54936923d5-bound-sa-token\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.780659 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhfnt\" (UniqueName: \"kubernetes.io/projected/58633c30-bd70-41cc-8743-ad54936923d5-kube-api-access-zhfnt\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.780708 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.780741 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/58633c30-bd70-41cc-8743-ad54936923d5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.780782 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/58633c30-bd70-41cc-8743-ad54936923d5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.780811 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/58633c30-bd70-41cc-8743-ad54936923d5-registry-tls\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.780849 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/58633c30-bd70-41cc-8743-ad54936923d5-trusted-ca\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.803936 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6dgfl"] Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.804712 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6dgfl" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.808609 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.809086 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-xqmsz" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.816796 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6dgfl"] Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.817030 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.882230 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/c8b70fd0-cb7a-480a-9a66-ca56ebe1f6ce-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-6dgfl\" (UID: \"c8b70fd0-cb7a-480a-9a66-ca56ebe1f6ce\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6dgfl" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.882290 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhfnt\" (UniqueName: \"kubernetes.io/projected/58633c30-bd70-41cc-8743-ad54936923d5-kube-api-access-zhfnt\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.882333 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/58633c30-bd70-41cc-8743-ad54936923d5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.882373 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/58633c30-bd70-41cc-8743-ad54936923d5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.882412 4991 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/58633c30-bd70-41cc-8743-ad54936923d5-registry-tls\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.882479 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58633c30-bd70-41cc-8743-ad54936923d5-trusted-ca\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.882514 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/58633c30-bd70-41cc-8743-ad54936923d5-registry-certificates\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.882539 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/58633c30-bd70-41cc-8743-ad54936923d5-bound-sa-token\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.882893 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/58633c30-bd70-41cc-8743-ad54936923d5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.883637 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58633c30-bd70-41cc-8743-ad54936923d5-trusted-ca\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.883707 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/58633c30-bd70-41cc-8743-ad54936923d5-registry-certificates\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.888636 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/58633c30-bd70-41cc-8743-ad54936923d5-registry-tls\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.889390 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/58633c30-bd70-41cc-8743-ad54936923d5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.898220 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhfnt\" (UniqueName: \"kubernetes.io/projected/58633c30-bd70-41cc-8743-ad54936923d5-kube-api-access-zhfnt\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.900101 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/58633c30-bd70-41cc-8743-ad54936923d5-bound-sa-token\") pod \"image-registry-66df7c8f76-b2s5r\" (UID: \"58633c30-bd70-41cc-8743-ad54936923d5\") " pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.916494 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2c8kq" event={"ID":"94efb7f7-382c-4c29-b967-86e0e905f715","Type":"ContainerStarted","Data":"bae222a022f31a0ca58b1141a6ee4c83d176c159b3047eec3a8a8ed62a743e49"} Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.930744 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2c8kq" podStartSLOduration=2.152335174 podStartE2EDuration="3.930725508s" podCreationTimestamp="2025-09-29 09:42:39 +0000 UTC" firstStartedPulling="2025-09-29 09:42:40.447789637 +0000 UTC m=+296.303717675" lastFinishedPulling="2025-09-29 09:42:42.226179981 +0000 UTC m=+298.082108009" observedRunningTime="2025-09-29 09:42:42.92882525 +0000 UTC m=+298.784753298" watchObservedRunningTime="2025-09-29 09:42:42.930725508 +0000 UTC m=+298.786653536" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.984216 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/c8b70fd0-cb7a-480a-9a66-ca56ebe1f6ce-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-6dgfl\" (UID: \"c8b70fd0-cb7a-480a-9a66-ca56ebe1f6ce\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6dgfl" Sep 29 09:42:42 crc kubenswrapper[4991]: I0929 09:42:42.988502 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/c8b70fd0-cb7a-480a-9a66-ca56ebe1f6ce-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-6dgfl\" (UID: \"c8b70fd0-cb7a-480a-9a66-ca56ebe1f6ce\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6dgfl" Sep 29 09:42:43 crc kubenswrapper[4991]: I0929 09:42:43.020754 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:43 crc kubenswrapper[4991]: I0929 09:42:43.138085 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6dgfl" Sep 29 09:42:43 crc kubenswrapper[4991]: I0929 09:42:43.269686 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-b2s5r"] Sep 29 09:42:43 crc kubenswrapper[4991]: W0929 09:42:43.280742 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58633c30_bd70_41cc_8743_ad54936923d5.slice/crio-c7266225106bb867712d797a62f96c62cf16bab0dbb11e990c5b9e6de4f09fe6 WatchSource:0}: Error finding container c7266225106bb867712d797a62f96c62cf16bab0dbb11e990c5b9e6de4f09fe6: Status 404 returned error can't find the container with id c7266225106bb867712d797a62f96c62cf16bab0dbb11e990c5b9e6de4f09fe6 Sep 29 09:42:43 crc kubenswrapper[4991]: I0929 09:42:43.544587 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6dgfl"] Sep 29 09:42:43 crc kubenswrapper[4991]: W0929 09:42:43.553238 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8b70fd0_cb7a_480a_9a66_ca56ebe1f6ce.slice/crio-5148c4ca50ebea88f0df504b4629f1f14b5221afd0aae707fa825bd4e338d031 WatchSource:0}: Error finding container 5148c4ca50ebea88f0df504b4629f1f14b5221afd0aae707fa825bd4e338d031: Status 404 returned error can't find the container with id 5148c4ca50ebea88f0df504b4629f1f14b5221afd0aae707fa825bd4e338d031 Sep 29 09:42:43 crc kubenswrapper[4991]: I0929 09:42:43.924431 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6dgfl" event={"ID":"c8b70fd0-cb7a-480a-9a66-ca56ebe1f6ce","Type":"ContainerStarted","Data":"5148c4ca50ebea88f0df504b4629f1f14b5221afd0aae707fa825bd4e338d031"} Sep 29 09:42:43 crc kubenswrapper[4991]: I0929 09:42:43.926223 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" event={"ID":"58633c30-bd70-41cc-8743-ad54936923d5","Type":"ContainerStarted","Data":"40c698842c34daf4ce03b1e1fecb6317f5c61ef806efcee0734e5ebcf394c967"} Sep 29 09:42:43 crc kubenswrapper[4991]: I0929 09:42:43.926256 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" event={"ID":"58633c30-bd70-41cc-8743-ad54936923d5","Type":"ContainerStarted","Data":"c7266225106bb867712d797a62f96c62cf16bab0dbb11e990c5b9e6de4f09fe6"} Sep 29 09:42:43 crc kubenswrapper[4991]: I0929 09:42:43.926358 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:42:43 crc kubenswrapper[4991]: I0929 09:42:43.945703 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" podStartSLOduration=1.94568589 podStartE2EDuration="1.94568589s" podCreationTimestamp="2025-09-29 09:42:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:42:43.943699299 +0000 UTC m=+299.799627337" watchObservedRunningTime="2025-09-29 09:42:43.94568589 +0000 UTC m=+299.801613918" Sep 29 09:42:45 crc kubenswrapper[4991]: I0929 09:42:45.938020 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6dgfl" event={"ID":"c8b70fd0-cb7a-480a-9a66-ca56ebe1f6ce","Type":"ContainerStarted","Data":"d3b1a51de377c020cfe0b6d6653b9adeead4397f306f6312df0006619b173e65"} Sep 29 09:42:45 crc kubenswrapper[4991]: I0929 09:42:45.938586 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6dgfl" Sep 29 09:42:45 crc kubenswrapper[4991]: I0929 09:42:45.949691 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6dgfl" Sep 29 09:42:45 crc kubenswrapper[4991]: I0929 09:42:45.964099 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6dgfl" podStartSLOduration=2.206799343 podStartE2EDuration="3.964065886s" podCreationTimestamp="2025-09-29 09:42:42 +0000 UTC" firstStartedPulling="2025-09-29 09:42:43.55641122 +0000 UTC m=+299.412339258" lastFinishedPulling="2025-09-29 09:42:45.313677773 +0000 UTC m=+301.169605801" observedRunningTime="2025-09-29 09:42:45.958514924 +0000 UTC m=+301.814442992" watchObservedRunningTime="2025-09-29 09:42:45.964065886 +0000 UTC m=+301.819993954" Sep 29 09:42:46 crc kubenswrapper[4991]: I0929 09:42:46.914265 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-nzg9z"] Sep 29 09:42:46 crc kubenswrapper[4991]: I0929 09:42:46.916490 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-nzg9z" Sep 29 09:42:46 crc kubenswrapper[4991]: I0929 09:42:46.918685 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Sep 29 09:42:46 crc kubenswrapper[4991]: I0929 09:42:46.918831 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Sep 29 09:42:46 crc kubenswrapper[4991]: I0929 09:42:46.919149 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-fz8mr" Sep 29 09:42:46 crc kubenswrapper[4991]: I0929 09:42:46.919202 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Sep 29 09:42:46 crc kubenswrapper[4991]: I0929 09:42:46.954880 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-nzg9z"] Sep 29 09:42:47 crc kubenswrapper[4991]: I0929 09:42:47.037418 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a16dcfdf-3523-456b-b386-2b5d41ba4752-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nzg9z\" (UID: \"a16dcfdf-3523-456b-b386-2b5d41ba4752\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nzg9z" Sep 29 09:42:47 crc kubenswrapper[4991]: I0929 09:42:47.037735 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a16dcfdf-3523-456b-b386-2b5d41ba4752-metrics-client-ca\") pod \"prometheus-operator-db54df47d-nzg9z\" (UID: \"a16dcfdf-3523-456b-b386-2b5d41ba4752\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nzg9z" Sep 29 09:42:47 crc 
Sep 29 09:42:47 crc kubenswrapper[4991]: I0929 09:42:47.037918 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a16dcfdf-3523-456b-b386-2b5d41ba4752-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-nzg9z\" (UID: \"a16dcfdf-3523-456b-b386-2b5d41ba4752\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nzg9z"
Sep 29 09:42:47 crc kubenswrapper[4991]: I0929 09:42:47.139162 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a16dcfdf-3523-456b-b386-2b5d41ba4752-metrics-client-ca\") pod \"prometheus-operator-db54df47d-nzg9z\" (UID: \"a16dcfdf-3523-456b-b386-2b5d41ba4752\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nzg9z"
Sep 29 09:42:47 crc kubenswrapper[4991]: I0929 09:42:47.139288 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxmbg\" (UniqueName: \"kubernetes.io/projected/a16dcfdf-3523-456b-b386-2b5d41ba4752-kube-api-access-dxmbg\") pod \"prometheus-operator-db54df47d-nzg9z\" (UID: \"a16dcfdf-3523-456b-b386-2b5d41ba4752\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nzg9z"
Sep 29 09:42:47 crc kubenswrapper[4991]: I0929 09:42:47.139337 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a16dcfdf-3523-456b-b386-2b5d41ba4752-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-nzg9z\" (UID: \"a16dcfdf-3523-456b-b386-2b5d41ba4752\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nzg9z"
Sep 29 09:42:47 crc kubenswrapper[4991]: I0929 09:42:47.139451 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a16dcfdf-3523-456b-b386-2b5d41ba4752-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nzg9z\" (UID: \"a16dcfdf-3523-456b-b386-2b5d41ba4752\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nzg9z"
Sep 29 09:42:47 crc kubenswrapper[4991]: I0929 09:42:47.140774 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a16dcfdf-3523-456b-b386-2b5d41ba4752-metrics-client-ca\") pod \"prometheus-operator-db54df47d-nzg9z\" (UID: \"a16dcfdf-3523-456b-b386-2b5d41ba4752\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nzg9z"
Sep 29 09:42:47 crc kubenswrapper[4991]: I0929 09:42:47.148122 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a16dcfdf-3523-456b-b386-2b5d41ba4752-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-nzg9z\" (UID: \"a16dcfdf-3523-456b-b386-2b5d41ba4752\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nzg9z"
Sep 29 09:42:47 crc kubenswrapper[4991]: I0929 09:42:47.153663 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a16dcfdf-3523-456b-b386-2b5d41ba4752-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-nzg9z\" (UID: \"a16dcfdf-3523-456b-b386-2b5d41ba4752\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nzg9z"
Sep 29 09:42:47 crc kubenswrapper[4991]: I0929 09:42:47.160584 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxmbg\" (UniqueName: \"kubernetes.io/projected/a16dcfdf-3523-456b-b386-2b5d41ba4752-kube-api-access-dxmbg\") pod \"prometheus-operator-db54df47d-nzg9z\" (UID: \"a16dcfdf-3523-456b-b386-2b5d41ba4752\") " pod="openshift-monitoring/prometheus-operator-db54df47d-nzg9z"
Sep 29 09:42:47 crc kubenswrapper[4991]: I0929 09:42:47.239300 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-nzg9z"
Sep 29 09:42:47 crc kubenswrapper[4991]: I0929 09:42:47.686112 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-nzg9z"]
Sep 29 09:42:47 crc kubenswrapper[4991]: W0929 09:42:47.697239 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda16dcfdf_3523_456b_b386_2b5d41ba4752.slice/crio-ff0bf3f8bf95ac48a84aed327ceaedbd4dcbd11ac46fb4ea2a3482bb33e0ecd0 WatchSource:0}: Error finding container ff0bf3f8bf95ac48a84aed327ceaedbd4dcbd11ac46fb4ea2a3482bb33e0ecd0: Status 404 returned error can't find the container with id ff0bf3f8bf95ac48a84aed327ceaedbd4dcbd11ac46fb4ea2a3482bb33e0ecd0
Sep 29 09:42:47 crc kubenswrapper[4991]: I0929 09:42:47.950005 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-nzg9z" event={"ID":"a16dcfdf-3523-456b-b386-2b5d41ba4752","Type":"ContainerStarted","Data":"ff0bf3f8bf95ac48a84aed327ceaedbd4dcbd11ac46fb4ea2a3482bb33e0ecd0"}
Sep 29 09:42:49 crc kubenswrapper[4991]: I0929 09:42:49.964254 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-nzg9z" event={"ID":"a16dcfdf-3523-456b-b386-2b5d41ba4752","Type":"ContainerStarted","Data":"9152bc8632eab1db2566105c1b17f9f9a9e3fa3490f760d3756eb80668a08274"}
Sep 29 09:42:49 crc kubenswrapper[4991]: I0929 09:42:49.964832 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-nzg9z" event={"ID":"a16dcfdf-3523-456b-b386-2b5d41ba4752","Type":"ContainerStarted","Data":"c53150803b8397709672d88a9629d1465f878e5aaea1d5f02a9cdf9a3fd7ad5e"}
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.267635 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-nzg9z" podStartSLOduration=4.650065414 podStartE2EDuration="6.267619397s" podCreationTimestamp="2025-09-29 09:42:46 +0000 UTC" firstStartedPulling="2025-09-29 09:42:47.700939378 +0000 UTC m=+303.556867406" lastFinishedPulling="2025-09-29 09:42:49.318493361 +0000 UTC m=+305.174421389" observedRunningTime="2025-09-29 09:42:49.996132451 +0000 UTC m=+305.852060519" watchObservedRunningTime="2025-09-29 09:42:52.267619397 +0000 UTC m=+308.123547425"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.270200 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb"]
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.271141 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.274323 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.274608 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-69j5t"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.274779 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.292816 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-2wtmw"]
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.293850 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.298063 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb"]
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.301643 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"]
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.302561 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.304557 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0be8c57e-e905-4a00-a889-bcb728fe02a1-node-exporter-textfile\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.304588 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0be8c57e-e905-4a00-a889-bcb728fe02a1-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.304607 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0be8c57e-e905-4a00-a889-bcb728fe02a1-node-exporter-tls\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.304622 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b86cf\" (UniqueName: \"kubernetes.io/projected/8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f-kube-api-access-b86cf\") pod \"openshift-state-metrics-566fddb674-gmzxb\" (UID: \"8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.304642 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lp4f\" (UniqueName: \"kubernetes.io/projected/0be8c57e-e905-4a00-a889-bcb728fe02a1-kube-api-access-9lp4f\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.304657 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0be8c57e-e905-4a00-a889-bcb728fe02a1-root\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.304687 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0be8c57e-e905-4a00-a889-bcb728fe02a1-sys\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.304707 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-gmzxb\" (UID: \"8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.304723 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0be8c57e-e905-4a00-a889-bcb728fe02a1-metrics-client-ca\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.304754 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0be8c57e-e905-4a00-a889-bcb728fe02a1-node-exporter-wtmp\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.304775 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-gmzxb\" (UID: \"8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.304793 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-gmzxb\" (UID: \"8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.327758 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
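
The reconciler_common.go / operation_generator.go triplets that dominate this section follow a fixed per-volume sequence: VerifyControllerAttachedVolume confirms the volume is recorded as attached, MountVolume marks the mount operation as started, and MountVolume.SetUp reports the outcome. A conceptual Go sketch of that loop; the types and function names here are invented for illustration and are not the kubelet's actual implementation:

    package main

    import "fmt"

    type volume struct{ name string }

    // verifyAttached and setUp stand in for the kubelet's real checks;
    // both are placeholders that always succeed in this sketch.
    func verifyAttached(v volume) error { return nil }
    func setUp(v volume) error          { return nil }

    // reconcile walks every volume the pod's spec demands (the desired
    // state) and drives it toward mounted (the actual state), logging
    // each phase the way the entries above do.
    func reconcile(desired []volume) {
    	for _, v := range desired {
    		fmt.Printf("VerifyControllerAttachedVolume started for volume %q\n", v.name)
    		if err := verifyAttached(v); err != nil {
    			continue // picked up again on the next reconciler pass
    		}
    		fmt.Printf("MountVolume started for volume %q\n", v.name)
    		if err := setUp(v); err != nil {
    			fmt.Printf("MountVolume.SetUp failed for volume %q: %v\n", v.name, err)
    			continue
    		}
    		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
    	}
    }

    func main() {
    	reconcile([]volume{{"node-exporter-tls"}, {"metrics-client-ca"}})
    }

A failed phase does not abort the pod; the volume simply stays in the desired state and is retried on a later pass, which is exactly what the node-exporter-tls and openshift-state-metrics-tls mounts below demonstrate.
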
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.328178 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-kp7sk"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.328335 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.328394 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.328566 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-km4gg"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.330361 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.331078 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.353424 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"]
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.405874 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/e43ab55e-99d8-4f2a-99f4-6d0d40b093ea-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-vm6bx\" (UID: \"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.405932 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0be8c57e-e905-4a00-a889-bcb728fe02a1-node-exporter-textfile\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.405971 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0be8c57e-e905-4a00-a889-bcb728fe02a1-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.405998 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0be8c57e-e905-4a00-a889-bcb728fe02a1-node-exporter-tls\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.406018 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e43ab55e-99d8-4f2a-99f4-6d0d40b093ea-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-vm6bx\" (UID: \"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.406043 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b86cf\" (UniqueName: \"kubernetes.io/projected/8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f-kube-api-access-b86cf\") pod \"openshift-state-metrics-566fddb674-gmzxb\" (UID: \"8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.406063 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e43ab55e-99d8-4f2a-99f4-6d0d40b093ea-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-vm6bx\" (UID: \"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.406087 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/e43ab55e-99d8-4f2a-99f4-6d0d40b093ea-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-vm6bx\" (UID: \"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.406109 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0be8c57e-e905-4a00-a889-bcb728fe02a1-root\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.406125 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lp4f\" (UniqueName: \"kubernetes.io/projected/0be8c57e-e905-4a00-a889-bcb728fe02a1-kube-api-access-9lp4f\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: E0929 09:42:52.406131 4991 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.406165 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0be8c57e-e905-4a00-a889-bcb728fe02a1-sys\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.406186 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-gmzxb\" (UID: \"8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb"
Sep 29 09:42:52 crc kubenswrapper[4991]: E0929 09:42:52.406204 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0be8c57e-e905-4a00-a889-bcb728fe02a1-node-exporter-tls podName:0be8c57e-e905-4a00-a889-bcb728fe02a1 nodeName:}" failed. No retries permitted until 2025-09-29 09:42:52.906183548 +0000 UTC m=+308.762111676 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/0be8c57e-e905-4a00-a889-bcb728fe02a1-node-exporter-tls") pod "node-exporter-2wtmw" (UID: "0be8c57e-e905-4a00-a889-bcb728fe02a1") : secret "node-exporter-tls" not found
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.406234 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0be8c57e-e905-4a00-a889-bcb728fe02a1-metrics-client-ca\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.406302 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/e43ab55e-99d8-4f2a-99f4-6d0d40b093ea-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-vm6bx\" (UID: \"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.406324 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0be8c57e-e905-4a00-a889-bcb728fe02a1-root\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.406334 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4khc\" (UniqueName: \"kubernetes.io/projected/e43ab55e-99d8-4f2a-99f4-6d0d40b093ea-kube-api-access-p4khc\") pod \"kube-state-metrics-777cb5bd5d-vm6bx\" (UID: \"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.406413 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0be8c57e-e905-4a00-a889-bcb728fe02a1-node-exporter-wtmp\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.406462 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-gmzxb\" (UID: \"8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.406500 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-gmzxb\" (UID: \"8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb"
Sep 29 09:42:52 crc kubenswrapper[4991]: E0929 09:42:52.406621 4991 secret.go:188] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found
Sep 29 09:42:52 crc kubenswrapper[4991]: E0929 09:42:52.406650 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f-openshift-state-metrics-tls podName:8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f nodeName:}" failed. No retries permitted until 2025-09-29 09:42:52.90664072 +0000 UTC m=+308.762568748 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f-openshift-state-metrics-tls") pod "openshift-state-metrics-566fddb674-gmzxb" (UID: "8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f") : secret "openshift-state-metrics-tls" not found
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.406909 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0be8c57e-e905-4a00-a889-bcb728fe02a1-metrics-client-ca\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.406964 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0be8c57e-e905-4a00-a889-bcb728fe02a1-sys\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.406997 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0be8c57e-e905-4a00-a889-bcb728fe02a1-node-exporter-wtmp\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.407196 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0be8c57e-e905-4a00-a889-bcb728fe02a1-node-exporter-textfile\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.407873 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-gmzxb\" (UID: \"8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.417799 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0be8c57e-e905-4a00-a889-bcb728fe02a1-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.423252 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b86cf\" (UniqueName: \"kubernetes.io/projected/8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f-kube-api-access-b86cf\") pod \"openshift-state-metrics-566fddb674-gmzxb\" (UID: \"8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.425714 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lp4f\" (UniqueName: \"kubernetes.io/projected/0be8c57e-e905-4a00-a889-bcb728fe02a1-kube-api-access-9lp4f\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.426795 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-gmzxb\" (UID: \"8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.507556 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/e43ab55e-99d8-4f2a-99f4-6d0d40b093ea-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-vm6bx\" (UID: \"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.507625 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e43ab55e-99d8-4f2a-99f4-6d0d40b093ea-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-vm6bx\" (UID: \"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.507651 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e43ab55e-99d8-4f2a-99f4-6d0d40b093ea-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-vm6bx\" (UID: \"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.507673 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/e43ab55e-99d8-4f2a-99f4-6d0d40b093ea-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-vm6bx\" (UID: \"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.507723 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/e43ab55e-99d8-4f2a-99f4-6d0d40b093ea-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-vm6bx\" (UID: \"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.507742 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4khc\" (UniqueName: \"kubernetes.io/projected/e43ab55e-99d8-4f2a-99f4-6d0d40b093ea-kube-api-access-p4khc\") pod \"kube-state-metrics-777cb5bd5d-vm6bx\" (UID: \"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.508074 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/e43ab55e-99d8-4f2a-99f4-6d0d40b093ea-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-vm6bx\" (UID: \"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.508763 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/e43ab55e-99d8-4f2a-99f4-6d0d40b093ea-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-vm6bx\" (UID: \"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.509447 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e43ab55e-99d8-4f2a-99f4-6d0d40b093ea-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-vm6bx\" (UID: \"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.511634 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e43ab55e-99d8-4f2a-99f4-6d0d40b093ea-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-vm6bx\" (UID: \"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.512391 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/e43ab55e-99d8-4f2a-99f4-6d0d40b093ea-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-vm6bx\" (UID: \"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.525764 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4khc\" (UniqueName: \"kubernetes.io/projected/e43ab55e-99d8-4f2a-99f4-6d0d40b093ea-kube-api-access-p4khc\") pod \"kube-state-metrics-777cb5bd5d-vm6bx\" (UID: \"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.653515 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.835056 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx"]
Sep 29 09:42:52 crc kubenswrapper[4991]: W0929 09:42:52.838794 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode43ab55e_99d8_4f2a_99f4_6d0d40b093ea.slice/crio-92338817e5e8e0312230b23040da129d3005eb8c60e0b1ed865818cd953ccb3f WatchSource:0}: Error finding container 92338817e5e8e0312230b23040da129d3005eb8c60e0b1ed865818cd953ccb3f: Status 404 returned error can't find the container with id 92338817e5e8e0312230b23040da129d3005eb8c60e0b1ed865818cd953ccb3f
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.914067 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-gmzxb\" (UID: \"8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.914130 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0be8c57e-e905-4a00-a889-bcb728fe02a1-node-exporter-tls\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.918309 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-gmzxb\" (UID: \"8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.918634 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0be8c57e-e905-4a00-a889-bcb728fe02a1-node-exporter-tls\") pod \"node-exporter-2wtmw\" (UID: \"0be8c57e-e905-4a00-a889-bcb728fe02a1\") " pod="openshift-monitoring/node-exporter-2wtmw"
Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.942590 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-2wtmw"
Need to start a new one" pod="openshift-monitoring/node-exporter-2wtmw" Sep 29 09:42:52 crc kubenswrapper[4991]: W0929 09:42:52.962746 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0be8c57e_e905_4a00_a889_bcb728fe02a1.slice/crio-b41ee799e69a3d60dfe7b92766e97cb8a1ddb5b25cd184d927db47d0b83ba970 WatchSource:0}: Error finding container b41ee799e69a3d60dfe7b92766e97cb8a1ddb5b25cd184d927db47d0b83ba970: Status 404 returned error can't find the container with id b41ee799e69a3d60dfe7b92766e97cb8a1ddb5b25cd184d927db47d0b83ba970 Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.997138 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-2wtmw" event={"ID":"0be8c57e-e905-4a00-a889-bcb728fe02a1","Type":"ContainerStarted","Data":"b41ee799e69a3d60dfe7b92766e97cb8a1ddb5b25cd184d927db47d0b83ba970"} Sep 29 09:42:52 crc kubenswrapper[4991]: I0929 09:42:52.999009 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx" event={"ID":"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea","Type":"ContainerStarted","Data":"92338817e5e8e0312230b23040da129d3005eb8c60e0b1ed865818cd953ccb3f"} Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.184301 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.354884 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.365919 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.372000 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.372841 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.372940 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.373072 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.373176 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-57cj8" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.373262 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.373353 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.375518 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.380381 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.387989 4991 reflector.go:368] 
Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.387989 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.421102 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ab92a233-e04e-4547-aaac-1696efa06288-web-config\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.421149 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/ab92a233-e04e-4547-aaac-1696efa06288-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.421167 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ab92a233-e04e-4547-aaac-1696efa06288-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.421188 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ab92a233-e04e-4547-aaac-1696efa06288-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.421210 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djstk\" (UniqueName: \"kubernetes.io/projected/ab92a233-e04e-4547-aaac-1696efa06288-kube-api-access-djstk\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.421238 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab92a233-e04e-4547-aaac-1696efa06288-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.421260 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ab92a233-e04e-4547-aaac-1696efa06288-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.421280 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ab92a233-e04e-4547-aaac-1696efa06288-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.421296 4991 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ab92a233-e04e-4547-aaac-1696efa06288-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.421311 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ab92a233-e04e-4547-aaac-1696efa06288-config-out\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.421324 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ab92a233-e04e-4547-aaac-1696efa06288-config-volume\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.421357 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ab92a233-e04e-4547-aaac-1696efa06288-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.430659 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb"] Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.522437 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ab92a233-e04e-4547-aaac-1696efa06288-web-config\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.522483 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/ab92a233-e04e-4547-aaac-1696efa06288-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.522519 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ab92a233-e04e-4547-aaac-1696efa06288-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.522543 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ab92a233-e04e-4547-aaac-1696efa06288-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.522564 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djstk\" (UniqueName: 
\"kubernetes.io/projected/ab92a233-e04e-4547-aaac-1696efa06288-kube-api-access-djstk\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.522615 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab92a233-e04e-4547-aaac-1696efa06288-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.522640 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ab92a233-e04e-4547-aaac-1696efa06288-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.522678 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ab92a233-e04e-4547-aaac-1696efa06288-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.522698 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ab92a233-e04e-4547-aaac-1696efa06288-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.522713 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ab92a233-e04e-4547-aaac-1696efa06288-config-out\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.522729 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ab92a233-e04e-4547-aaac-1696efa06288-config-volume\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.522783 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ab92a233-e04e-4547-aaac-1696efa06288-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.524899 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab92a233-e04e-4547-aaac-1696efa06288-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.525922 4991 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/ab92a233-e04e-4547-aaac-1696efa06288-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.526491 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ab92a233-e04e-4547-aaac-1696efa06288-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.529305 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ab92a233-e04e-4547-aaac-1696efa06288-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.530075 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ab92a233-e04e-4547-aaac-1696efa06288-web-config\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.530575 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ab92a233-e04e-4547-aaac-1696efa06288-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.531222 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ab92a233-e04e-4547-aaac-1696efa06288-config-out\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.531335 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ab92a233-e04e-4547-aaac-1696efa06288-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.538507 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ab92a233-e04e-4547-aaac-1696efa06288-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.540709 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djstk\" (UniqueName: \"kubernetes.io/projected/ab92a233-e04e-4547-aaac-1696efa06288-kube-api-access-djstk\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.542820 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ab92a233-e04e-4547-aaac-1696efa06288-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.547318 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ab92a233-e04e-4547-aaac-1696efa06288-config-volume\") pod \"alertmanager-main-0\" (UID: \"ab92a233-e04e-4547-aaac-1696efa06288\") " pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:53 crc kubenswrapper[4991]: I0929 09:42:53.697317 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.008178 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb" event={"ID":"8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f","Type":"ContainerStarted","Data":"78f962c80d9d36af0f1460cf3267f0820dd5049fdd34213430db237c942793ad"} Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.008276 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb" event={"ID":"8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f","Type":"ContainerStarted","Data":"9b54b98a4c4655578078379a3f32c56c4df665461d212c57b06513c401eb4e58"} Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.008294 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb" event={"ID":"8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f","Type":"ContainerStarted","Data":"7c37048be72158954cec0717513096efa74006c8fc2340fb46b996540278b694"} Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.093643 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.226301 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-c69dcc589-8n86l"] Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.229046 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.232254 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.232891 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.233238 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.233472 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.233696 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-bamt94u26osp8" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.233860 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.233875 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-9lh8n" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.243276 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-c69dcc589-8n86l"] Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.332666 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.332724 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-metrics-client-ca\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.332754 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-secret-thanos-querier-tls\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.332806 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h88jh\" (UniqueName: \"kubernetes.io/projected/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-kube-api-access-h88jh\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.332842 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.332880 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-secret-grpc-tls\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.333011 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.333046 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.434409 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.434469 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.434528 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.434553 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-metrics-client-ca\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.434576 4991 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-secret-thanos-querier-tls\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.434600 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h88jh\" (UniqueName: \"kubernetes.io/projected/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-kube-api-access-h88jh\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.434628 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.434659 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-secret-grpc-tls\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.435689 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-metrics-client-ca\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.439913 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.440067 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-secret-thanos-querier-tls\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.440941 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-secret-grpc-tls\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.441160 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.441488 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.443410 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.451912 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h88jh\" (UniqueName: \"kubernetes.io/projected/2c9e84f1-ad27-4f5a-a47a-2f7277f32d13-kube-api-access-h88jh\") pod \"thanos-querier-c69dcc589-8n86l\" (UID: \"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13\") " pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:54 crc kubenswrapper[4991]: W0929 09:42:54.544935 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab92a233_e04e_4547_aaac_1696efa06288.slice/crio-e5efb9e3229235f6bbcf061e2624c8368181cb191b1d54e07775cfe63371aff1 WatchSource:0}: Error finding container e5efb9e3229235f6bbcf061e2624c8368181cb191b1d54e07775cfe63371aff1: Status 404 returned error can't find the container with id e5efb9e3229235f6bbcf061e2624c8368181cb191b1d54e07775cfe63371aff1 Sep 29 09:42:54 crc kubenswrapper[4991]: I0929 09:42:54.549932 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:42:55 crc kubenswrapper[4991]: I0929 09:42:55.018549 4991 generic.go:334] "Generic (PLEG): container finished" podID="0be8c57e-e905-4a00-a889-bcb728fe02a1" containerID="dd78a847f605d42c3e86c49010f4bc014d123dd18f2f262bd43dcb14f713c3d8" exitCode=0 Sep 29 09:42:55 crc kubenswrapper[4991]: I0929 09:42:55.018917 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-2wtmw" event={"ID":"0be8c57e-e905-4a00-a889-bcb728fe02a1","Type":"ContainerDied","Data":"dd78a847f605d42c3e86c49010f4bc014d123dd18f2f262bd43dcb14f713c3d8"} Sep 29 09:42:55 crc kubenswrapper[4991]: I0929 09:42:55.021211 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ab92a233-e04e-4547-aaac-1696efa06288","Type":"ContainerStarted","Data":"e5efb9e3229235f6bbcf061e2624c8368181cb191b1d54e07775cfe63371aff1"} Sep 29 09:42:55 crc kubenswrapper[4991]: I0929 09:42:55.025484 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx" event={"ID":"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea","Type":"ContainerStarted","Data":"14bece56b6b2e7aba971a66c651d06d19b3a52d66ffb7a5ed7d4ca731e847ae5"} Sep 29 09:42:55 crc kubenswrapper[4991]: I0929 09:42:55.025590 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx" event={"ID":"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea","Type":"ContainerStarted","Data":"b6cc205b906dd350f0a47fccb7bbf7c0538e6afc3e641afbeab8bc9e507a75ff"} Sep 29 09:42:55 crc kubenswrapper[4991]: I0929 09:42:55.057631 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-c69dcc589-8n86l"] Sep 29 09:42:55 crc kubenswrapper[4991]: W0929 09:42:55.071251 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c9e84f1_ad27_4f5a_a47a_2f7277f32d13.slice/crio-98032b9ec2f1885574769a8b94b8ebadc2491606f4915a03811a8075c48fa76e WatchSource:0}: Error finding container 98032b9ec2f1885574769a8b94b8ebadc2491606f4915a03811a8075c48fa76e: Status 404 returned error can't find the container with id 98032b9ec2f1885574769a8b94b8ebadc2491606f4915a03811a8075c48fa76e Sep 29 09:42:56 crc kubenswrapper[4991]: I0929 09:42:56.033886 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-2wtmw" event={"ID":"0be8c57e-e905-4a00-a889-bcb728fe02a1","Type":"ContainerStarted","Data":"8f5a4b0c89e00f3d1c4dc18af4a547d9e68ac1e0b2c8770bf0c6525ce8be396b"} Sep 29 09:42:56 crc kubenswrapper[4991]: I0929 09:42:56.033935 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-2wtmw" event={"ID":"0be8c57e-e905-4a00-a889-bcb728fe02a1","Type":"ContainerStarted","Data":"7e12b9530a6060948c6ba4a93c7b7668170ac18c77b2d83010507d790428ffa2"} Sep 29 09:42:56 crc kubenswrapper[4991]: I0929 09:42:56.037860 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx" event={"ID":"e43ab55e-99d8-4f2a-99f4-6d0d40b093ea","Type":"ContainerStarted","Data":"2e3a2297bdd5feafdffaf351ada253f985c6eac1ec361925cc3d23b7ca44eb3d"} Sep 29 09:42:56 crc kubenswrapper[4991]: I0929 09:42:56.038815 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" 
event={"ID":"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13","Type":"ContainerStarted","Data":"98032b9ec2f1885574769a8b94b8ebadc2491606f4915a03811a8075c48fa76e"} Sep 29 09:42:56 crc kubenswrapper[4991]: I0929 09:42:56.040417 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb" event={"ID":"8e8b5acb-d8d4-41c4-8bb8-e4a37d4a0d6f","Type":"ContainerStarted","Data":"0c22bfcf43901e673869d19381149249c5c1c82f5970b6b8534aeeac48272c45"} Sep 29 09:42:56 crc kubenswrapper[4991]: I0929 09:42:56.057641 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-2wtmw" podStartSLOduration=2.433612556 podStartE2EDuration="4.057618624s" podCreationTimestamp="2025-09-29 09:42:52 +0000 UTC" firstStartedPulling="2025-09-29 09:42:52.965558345 +0000 UTC m=+308.821486383" lastFinishedPulling="2025-09-29 09:42:54.589564413 +0000 UTC m=+310.445492451" observedRunningTime="2025-09-29 09:42:56.0515889 +0000 UTC m=+311.907516948" watchObservedRunningTime="2025-09-29 09:42:56.057618624 +0000 UTC m=+311.913546662" Sep 29 09:42:56 crc kubenswrapper[4991]: I0929 09:42:56.079750 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-gmzxb" podStartSLOduration=2.208201046 podStartE2EDuration="4.079727709s" podCreationTimestamp="2025-09-29 09:42:52 +0000 UTC" firstStartedPulling="2025-09-29 09:42:53.680982931 +0000 UTC m=+309.536910959" lastFinishedPulling="2025-09-29 09:42:55.552509584 +0000 UTC m=+311.408437622" observedRunningTime="2025-09-29 09:42:56.066724057 +0000 UTC m=+311.922652085" watchObservedRunningTime="2025-09-29 09:42:56.079727709 +0000 UTC m=+311.935655737" Sep 29 09:42:56 crc kubenswrapper[4991]: I0929 09:42:56.089317 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vm6bx" podStartSLOduration=2.341958974 podStartE2EDuration="4.089300304s" podCreationTimestamp="2025-09-29 09:42:52 +0000 UTC" firstStartedPulling="2025-09-29 09:42:52.840763206 +0000 UTC m=+308.696691244" lastFinishedPulling="2025-09-29 09:42:54.588104526 +0000 UTC m=+310.444032574" observedRunningTime="2025-09-29 09:42:56.084188983 +0000 UTC m=+311.940117011" watchObservedRunningTime="2025-09-29 09:42:56.089300304 +0000 UTC m=+311.945228322" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.092488 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-595bf5d57c-mhhb4"] Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.093560 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.093857 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-595bf5d57c-mhhb4"] Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.277360 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-console-config\") pod \"console-595bf5d57c-mhhb4\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.277627 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-console-serving-cert\") pod \"console-595bf5d57c-mhhb4\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.277658 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-oauth-serving-cert\") pod \"console-595bf5d57c-mhhb4\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.277681 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-trusted-ca-bundle\") pod \"console-595bf5d57c-mhhb4\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.277700 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5j5c\" (UniqueName: \"kubernetes.io/projected/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-kube-api-access-p5j5c\") pod \"console-595bf5d57c-mhhb4\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.278113 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-console-oauth-config\") pod \"console-595bf5d57c-mhhb4\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.278208 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-service-ca\") pod \"console-595bf5d57c-mhhb4\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.379591 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-console-config\") pod \"console-595bf5d57c-mhhb4\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc 
kubenswrapper[4991]: I0929 09:42:57.379652 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-console-serving-cert\") pod \"console-595bf5d57c-mhhb4\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.379690 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-oauth-serving-cert\") pod \"console-595bf5d57c-mhhb4\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.379722 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-trusted-ca-bundle\") pod \"console-595bf5d57c-mhhb4\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.379753 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5j5c\" (UniqueName: \"kubernetes.io/projected/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-kube-api-access-p5j5c\") pod \"console-595bf5d57c-mhhb4\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.379822 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-console-oauth-config\") pod \"console-595bf5d57c-mhhb4\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.379863 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-service-ca\") pod \"console-595bf5d57c-mhhb4\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.380978 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-console-config\") pod \"console-595bf5d57c-mhhb4\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.381118 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-service-ca\") pod \"console-595bf5d57c-mhhb4\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.381572 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-oauth-serving-cert\") pod \"console-595bf5d57c-mhhb4\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 
09:42:57.382203 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-trusted-ca-bundle\") pod \"console-595bf5d57c-mhhb4\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.385307 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-console-oauth-config\") pod \"console-595bf5d57c-mhhb4\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.386207 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-console-serving-cert\") pod \"console-595bf5d57c-mhhb4\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.395808 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5j5c\" (UniqueName: \"kubernetes.io/projected/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-kube-api-access-p5j5c\") pod \"console-595bf5d57c-mhhb4\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.424718 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.612243 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-7bfdd9cc49-snqkt"]
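
Every pod in this section also mounts a generated kube-api-access-* projected volume (p4khc, djstk, h88jh, p5j5c, dr8bj), which bundles a bound service-account token, the cluster CA bundle, and the pod's namespace into a single mount. Built by hand with the real corev1 types, the equivalent volume definition looks roughly like the sketch below; this is illustrative only, since the admission plugin injects it automatically, and the 3607-second expiry is the usual default rather than something visible in this log:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607) // assumed default bound-token lifetime
	mode := int32(0644)

	vol := corev1.Volume{
		Name: "kube-api-access-p5j5c", // name as seen in the console pod above
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{
					// 1. Bound service-account token, rotated by the kubelet.
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					}},
					// 2. Cluster CA bundle for verifying the API server.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					// 3. The pod's own namespace via the downward API.
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
	fmt.Println("built projected volume:", vol.Name)
}
```
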
Need to start a new one" pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.622782 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7q00e1a8f7cio" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.622840 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.623019 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.623116 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.623166 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-dxndq" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.623390 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.630239 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7bfdd9cc49-snqkt"] Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.785296 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd5e3f97-ca66-4895-84e1-2e147221371c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7bfdd9cc49-snqkt\" (UID: \"bd5e3f97-ca66-4895-84e1-2e147221371c\") " pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.785390 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/bd5e3f97-ca66-4895-84e1-2e147221371c-audit-log\") pod \"metrics-server-7bfdd9cc49-snqkt\" (UID: \"bd5e3f97-ca66-4895-84e1-2e147221371c\") " pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.785459 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr8bj\" (UniqueName: \"kubernetes.io/projected/bd5e3f97-ca66-4895-84e1-2e147221371c-kube-api-access-dr8bj\") pod \"metrics-server-7bfdd9cc49-snqkt\" (UID: \"bd5e3f97-ca66-4895-84e1-2e147221371c\") " pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.785592 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bd5e3f97-ca66-4895-84e1-2e147221371c-secret-metrics-client-certs\") pod \"metrics-server-7bfdd9cc49-snqkt\" (UID: \"bd5e3f97-ca66-4895-84e1-2e147221371c\") " pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.785680 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5e3f97-ca66-4895-84e1-2e147221371c-client-ca-bundle\") pod \"metrics-server-7bfdd9cc49-snqkt\" (UID: \"bd5e3f97-ca66-4895-84e1-2e147221371c\") " pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 
29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.785761 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/bd5e3f97-ca66-4895-84e1-2e147221371c-secret-metrics-server-tls\") pod \"metrics-server-7bfdd9cc49-snqkt\" (UID: \"bd5e3f97-ca66-4895-84e1-2e147221371c\") " pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.785974 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bd5e3f97-ca66-4895-84e1-2e147221371c-metrics-server-audit-profiles\") pod \"metrics-server-7bfdd9cc49-snqkt\" (UID: \"bd5e3f97-ca66-4895-84e1-2e147221371c\") " pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.887906 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bd5e3f97-ca66-4895-84e1-2e147221371c-metrics-server-audit-profiles\") pod \"metrics-server-7bfdd9cc49-snqkt\" (UID: \"bd5e3f97-ca66-4895-84e1-2e147221371c\") " pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.888056 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd5e3f97-ca66-4895-84e1-2e147221371c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7bfdd9cc49-snqkt\" (UID: \"bd5e3f97-ca66-4895-84e1-2e147221371c\") " pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.888120 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/bd5e3f97-ca66-4895-84e1-2e147221371c-audit-log\") pod \"metrics-server-7bfdd9cc49-snqkt\" (UID: \"bd5e3f97-ca66-4895-84e1-2e147221371c\") " pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.888182 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr8bj\" (UniqueName: \"kubernetes.io/projected/bd5e3f97-ca66-4895-84e1-2e147221371c-kube-api-access-dr8bj\") pod \"metrics-server-7bfdd9cc49-snqkt\" (UID: \"bd5e3f97-ca66-4895-84e1-2e147221371c\") " pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.888246 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bd5e3f97-ca66-4895-84e1-2e147221371c-secret-metrics-client-certs\") pod \"metrics-server-7bfdd9cc49-snqkt\" (UID: \"bd5e3f97-ca66-4895-84e1-2e147221371c\") " pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.888283 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5e3f97-ca66-4895-84e1-2e147221371c-client-ca-bundle\") pod \"metrics-server-7bfdd9cc49-snqkt\" (UID: \"bd5e3f97-ca66-4895-84e1-2e147221371c\") " pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.888328 4991 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/bd5e3f97-ca66-4895-84e1-2e147221371c-secret-metrics-server-tls\") pod \"metrics-server-7bfdd9cc49-snqkt\" (UID: \"bd5e3f97-ca66-4895-84e1-2e147221371c\") " pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.889617 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/bd5e3f97-ca66-4895-84e1-2e147221371c-audit-log\") pod \"metrics-server-7bfdd9cc49-snqkt\" (UID: \"bd5e3f97-ca66-4895-84e1-2e147221371c\") " pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.890432 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd5e3f97-ca66-4895-84e1-2e147221371c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7bfdd9cc49-snqkt\" (UID: \"bd5e3f97-ca66-4895-84e1-2e147221371c\") " pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.891654 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bd5e3f97-ca66-4895-84e1-2e147221371c-metrics-server-audit-profiles\") pod \"metrics-server-7bfdd9cc49-snqkt\" (UID: \"bd5e3f97-ca66-4895-84e1-2e147221371c\") " pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.895043 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/bd5e3f97-ca66-4895-84e1-2e147221371c-secret-metrics-server-tls\") pod \"metrics-server-7bfdd9cc49-snqkt\" (UID: \"bd5e3f97-ca66-4895-84e1-2e147221371c\") " pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.895766 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-595bf5d57c-mhhb4"] Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.897536 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5e3f97-ca66-4895-84e1-2e147221371c-client-ca-bundle\") pod \"metrics-server-7bfdd9cc49-snqkt\" (UID: \"bd5e3f97-ca66-4895-84e1-2e147221371c\") " pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.899758 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bd5e3f97-ca66-4895-84e1-2e147221371c-secret-metrics-client-certs\") pod \"metrics-server-7bfdd9cc49-snqkt\" (UID: \"bd5e3f97-ca66-4895-84e1-2e147221371c\") " pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:57 crc kubenswrapper[4991]: W0929 09:42:57.904213 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode90cc054_ad78_4ef4_8b02_a3ed14f14d87.slice/crio-9715de35e1eab4ad14b87ceff500c70dc5eb6a6afde6c05e94fda46f45eb12f9 WatchSource:0}: Error finding container 9715de35e1eab4ad14b87ceff500c70dc5eb6a6afde6c05e94fda46f45eb12f9: Status 404 returned error can't find the container with id 9715de35e1eab4ad14b87ceff500c70dc5eb6a6afde6c05e94fda46f45eb12f9 Sep 29 09:42:57 crc 
kubenswrapper[4991]: I0929 09:42:57.917038 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr8bj\" (UniqueName: \"kubernetes.io/projected/bd5e3f97-ca66-4895-84e1-2e147221371c-kube-api-access-dr8bj\") pod \"metrics-server-7bfdd9cc49-snqkt\" (UID: \"bd5e3f97-ca66-4895-84e1-2e147221371c\") " pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:57 crc kubenswrapper[4991]: I0929 09:42:57.947766 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.061090 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" event={"ID":"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13","Type":"ContainerStarted","Data":"6727168b00ffbd850cf0ede903b5ecc0024b2dfe351898ff3f8b24ac56235100"} Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.061665 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" event={"ID":"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13","Type":"ContainerStarted","Data":"87288d305ee9774def2c64a24dc601777abd4e7cdbb10904dc808c5c5d267cee"} Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.061682 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" event={"ID":"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13","Type":"ContainerStarted","Data":"330db387aee6d2c707ff159d1196caf0479ad0b4bcee66cf654404820c67e69a"} Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.064270 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-595bf5d57c-mhhb4" event={"ID":"e90cc054-ad78-4ef4-8b02-a3ed14f14d87","Type":"ContainerStarted","Data":"3434cca31b08b5441fa44b7fb88e2394f4b53aa3a211b018999cda4f0a34f220"} Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.064300 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-595bf5d57c-mhhb4" event={"ID":"e90cc054-ad78-4ef4-8b02-a3ed14f14d87","Type":"ContainerStarted","Data":"9715de35e1eab4ad14b87ceff500c70dc5eb6a6afde6c05e94fda46f45eb12f9"} Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.067844 4991 generic.go:334] "Generic (PLEG): container finished" podID="ab92a233-e04e-4547-aaac-1696efa06288" containerID="d4f3016049108a94225adde4733bb05e2345f8815e1c3ebe87624f5a177294cb" exitCode=0 Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.067908 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ab92a233-e04e-4547-aaac-1696efa06288","Type":"ContainerDied","Data":"d4f3016049108a94225adde4733bb05e2345f8815e1c3ebe87624f5a177294cb"} Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.100071 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-85769fb6d7-6szl6"] Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.101020 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-85769fb6d7-6szl6" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.103641 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-595bf5d57c-mhhb4" podStartSLOduration=1.103612487 podStartE2EDuration="1.103612487s" podCreationTimestamp="2025-09-29 09:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:42:58.088991124 +0000 UTC m=+313.944919202" watchObservedRunningTime="2025-09-29 09:42:58.103612487 +0000 UTC m=+313.959540505" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.118706 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.118904 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.123002 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-85769fb6d7-6szl6"] Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.245215 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7bfdd9cc49-snqkt"] Sep 29 09:42:58 crc kubenswrapper[4991]: W0929 09:42:58.255162 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd5e3f97_ca66_4895_84e1_2e147221371c.slice/crio-1d2c8dcb3de20d55339b99b0be971127ace5c418eefa84a2b4e7f542f502437c WatchSource:0}: Error finding container 1d2c8dcb3de20d55339b99b0be971127ace5c418eefa84a2b4e7f542f502437c: Status 404 returned error can't find the container with id 1d2c8dcb3de20d55339b99b0be971127ace5c418eefa84a2b4e7f542f502437c Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.320882 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/07590400-e477-42bd-bc2e-35c82852ce0b-monitoring-plugin-cert\") pod \"monitoring-plugin-85769fb6d7-6szl6\" (UID: \"07590400-e477-42bd-bc2e-35c82852ce0b\") " pod="openshift-monitoring/monitoring-plugin-85769fb6d7-6szl6" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.422479 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/07590400-e477-42bd-bc2e-35c82852ce0b-monitoring-plugin-cert\") pod \"monitoring-plugin-85769fb6d7-6szl6\" (UID: \"07590400-e477-42bd-bc2e-35c82852ce0b\") " pod="openshift-monitoring/monitoring-plugin-85769fb6d7-6szl6" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.430359 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/07590400-e477-42bd-bc2e-35c82852ce0b-monitoring-plugin-cert\") pod \"monitoring-plugin-85769fb6d7-6szl6\" (UID: \"07590400-e477-42bd-bc2e-35c82852ce0b\") " pod="openshift-monitoring/monitoring-plugin-85769fb6d7-6szl6" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.464339 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-85769fb6d7-6szl6" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.681187 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.683002 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.687098 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.687422 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.687465 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-4jxjz" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.687506 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.688512 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-b0tvq6m4seo4a" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.688810 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.688875 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.688915 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.689043 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.689187 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.689911 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.695626 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.696499 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.699676 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.728676 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00121deb-d371-4086-8a93-74d8a3716533-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.728724 4991 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00121deb-d371-4086-8a93-74d8a3716533-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.728747 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/00121deb-d371-4086-8a93-74d8a3716533-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.728771 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.728786 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.728803 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/00121deb-d371-4086-8a93-74d8a3716533-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.728821 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/00121deb-d371-4086-8a93-74d8a3716533-config-out\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.728834 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.728853 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-web-config\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.728867 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: 
\"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.728886 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/00121deb-d371-4086-8a93-74d8a3716533-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.728904 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00121deb-d371-4086-8a93-74d8a3716533-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.728925 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.728941 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llpt5\" (UniqueName: \"kubernetes.io/projected/00121deb-d371-4086-8a93-74d8a3716533-kube-api-access-llpt5\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.728976 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.729002 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/00121deb-d371-4086-8a93-74d8a3716533-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.729027 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-config\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.729052 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.830872 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00121deb-d371-4086-8a93-74d8a3716533-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.830958 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.830986 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llpt5\" (UniqueName: \"kubernetes.io/projected/00121deb-d371-4086-8a93-74d8a3716533-kube-api-access-llpt5\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.831012 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.831064 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/00121deb-d371-4086-8a93-74d8a3716533-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.831092 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-config\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.831116 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.831149 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00121deb-d371-4086-8a93-74d8a3716533-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.831186 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00121deb-d371-4086-8a93-74d8a3716533-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.831204 4991 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/00121deb-d371-4086-8a93-74d8a3716533-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.831226 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.831242 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.831262 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/00121deb-d371-4086-8a93-74d8a3716533-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.831281 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/00121deb-d371-4086-8a93-74d8a3716533-config-out\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.831297 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.831314 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-web-config\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.831328 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.831347 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/00121deb-d371-4086-8a93-74d8a3716533-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.831747 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/00121deb-d371-4086-8a93-74d8a3716533-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.832554 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00121deb-d371-4086-8a93-74d8a3716533-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.833027 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/00121deb-d371-4086-8a93-74d8a3716533-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.833613 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00121deb-d371-4086-8a93-74d8a3716533-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.835470 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00121deb-d371-4086-8a93-74d8a3716533-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.837076 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.837128 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.837754 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.837928 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.838001 4991 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-web-config\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.839333 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.840516 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/00121deb-d371-4086-8a93-74d8a3716533-config-out\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.844632 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/00121deb-d371-4086-8a93-74d8a3716533-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.845157 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.845547 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-config\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.846180 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/00121deb-d371-4086-8a93-74d8a3716533-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.850109 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/00121deb-d371-4086-8a93-74d8a3716533-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.854524 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llpt5\" (UniqueName: \"kubernetes.io/projected/00121deb-d371-4086-8a93-74d8a3716533-kube-api-access-llpt5\") pod \"prometheus-k8s-0\" (UID: \"00121deb-d371-4086-8a93-74d8a3716533\") " pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:58 crc kubenswrapper[4991]: I0929 09:42:58.864991 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-85769fb6d7-6szl6"] Sep 29 09:42:59 crc kubenswrapper[4991]: I0929 09:42:59.004238 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:42:59 crc kubenswrapper[4991]: W0929 09:42:59.051772 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07590400_e477_42bd_bc2e_35c82852ce0b.slice/crio-e8f4de99966f337f2a69c1fd3039789685704e24f903aec875a8f16071cc3126 WatchSource:0}: Error finding container e8f4de99966f337f2a69c1fd3039789685704e24f903aec875a8f16071cc3126: Status 404 returned error can't find the container with id e8f4de99966f337f2a69c1fd3039789685704e24f903aec875a8f16071cc3126 Sep 29 09:42:59 crc kubenswrapper[4991]: I0929 09:42:59.075328 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-85769fb6d7-6szl6" event={"ID":"07590400-e477-42bd-bc2e-35c82852ce0b","Type":"ContainerStarted","Data":"e8f4de99966f337f2a69c1fd3039789685704e24f903aec875a8f16071cc3126"} Sep 29 09:42:59 crc kubenswrapper[4991]: I0929 09:42:59.076282 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" event={"ID":"bd5e3f97-ca66-4895-84e1-2e147221371c","Type":"ContainerStarted","Data":"1d2c8dcb3de20d55339b99b0be971127ace5c418eefa84a2b4e7f542f502437c"} Sep 29 09:42:59 crc kubenswrapper[4991]: I0929 09:42:59.273390 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Sep 29 09:42:59 crc kubenswrapper[4991]: W0929 09:42:59.293048 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00121deb_d371_4086_8a93_74d8a3716533.slice/crio-a5d02bdd2c8b0281a6f8779bd72772188ae8f293dd1c3d77c66871a643d7e030 WatchSource:0}: Error finding container a5d02bdd2c8b0281a6f8779bd72772188ae8f293dd1c3d77c66871a643d7e030: Status 404 returned error can't find the container with id a5d02bdd2c8b0281a6f8779bd72772188ae8f293dd1c3d77c66871a643d7e030 Sep 29 09:43:00 crc kubenswrapper[4991]: I0929 09:43:00.083172 4991 generic.go:334] "Generic (PLEG): container finished" podID="00121deb-d371-4086-8a93-74d8a3716533" containerID="61814cda9e5a149c65a8354e0d9e08d474b7b9aa3637da1a5ee01883ceb8052f" exitCode=0 Sep 29 09:43:00 crc kubenswrapper[4991]: I0929 09:43:00.083258 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"00121deb-d371-4086-8a93-74d8a3716533","Type":"ContainerDied","Data":"61814cda9e5a149c65a8354e0d9e08d474b7b9aa3637da1a5ee01883ceb8052f"} Sep 29 09:43:00 crc kubenswrapper[4991]: I0929 09:43:00.083296 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"00121deb-d371-4086-8a93-74d8a3716533","Type":"ContainerStarted","Data":"a5d02bdd2c8b0281a6f8779bd72772188ae8f293dd1c3d77c66871a643d7e030"} Sep 29 09:43:00 crc kubenswrapper[4991]: I0929 09:43:00.090499 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" event={"ID":"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13","Type":"ContainerStarted","Data":"e7351289b2eb375e7cd63393458b735bff5bf26f02ac1108616b6578a9460f66"} Sep 29 09:43:00 crc kubenswrapper[4991]: I0929 09:43:00.090543 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" event={"ID":"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13","Type":"ContainerStarted","Data":"3ae47f9598a16bb8d5f42b9bc399f193d52871ecd520ecc1a121dc26b7a156be"} Sep 29 09:43:00 crc kubenswrapper[4991]: I0929 
09:43:00.090555 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" event={"ID":"2c9e84f1-ad27-4f5a-a47a-2f7277f32d13","Type":"ContainerStarted","Data":"9e5218584c9bb7acb48c6dc9e64a7b2bb720484ea15da89b92344530c0b58186"} Sep 29 09:43:00 crc kubenswrapper[4991]: I0929 09:43:00.090656 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:43:00 crc kubenswrapper[4991]: I0929 09:43:00.150415 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" podStartSLOduration=2.115058092 podStartE2EDuration="6.15039588s" podCreationTimestamp="2025-09-29 09:42:54 +0000 UTC" firstStartedPulling="2025-09-29 09:42:55.073578063 +0000 UTC m=+310.929506091" lastFinishedPulling="2025-09-29 09:42:59.108915861 +0000 UTC m=+314.964843879" observedRunningTime="2025-09-29 09:43:00.146213263 +0000 UTC m=+316.002141291" watchObservedRunningTime="2025-09-29 09:43:00.15039588 +0000 UTC m=+316.006323908" Sep 29 09:43:01 crc kubenswrapper[4991]: I0929 09:43:01.129847 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-85769fb6d7-6szl6" Sep 29 09:43:01 crc kubenswrapper[4991]: I0929 09:43:01.140809 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-85769fb6d7-6szl6" Sep 29 09:43:01 crc kubenswrapper[4991]: I0929 09:43:01.149858 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-85769fb6d7-6szl6" podStartSLOduration=1.300551888 podStartE2EDuration="3.149838424s" podCreationTimestamp="2025-09-29 09:42:58 +0000 UTC" firstStartedPulling="2025-09-29 09:42:59.056535392 +0000 UTC m=+314.912463430" lastFinishedPulling="2025-09-29 09:43:00.905821938 +0000 UTC m=+316.761749966" observedRunningTime="2025-09-29 09:43:01.144300443 +0000 UTC m=+317.000228481" watchObservedRunningTime="2025-09-29 09:43:01.149838424 +0000 UTC m=+317.005766462" Sep 29 09:43:01 crc kubenswrapper[4991]: I0929 09:43:01.186559 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" podStartSLOduration=1.544309591 podStartE2EDuration="4.186545993s" podCreationTimestamp="2025-09-29 09:42:57 +0000 UTC" firstStartedPulling="2025-09-29 09:42:58.257298515 +0000 UTC m=+314.113226543" lastFinishedPulling="2025-09-29 09:43:00.899534917 +0000 UTC m=+316.755462945" observedRunningTime="2025-09-29 09:43:01.167759292 +0000 UTC m=+317.023687320" watchObservedRunningTime="2025-09-29 09:43:01.186545993 +0000 UTC m=+317.042474021" Sep 29 09:43:02 crc kubenswrapper[4991]: I0929 09:43:02.143899 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ab92a233-e04e-4547-aaac-1696efa06288","Type":"ContainerStarted","Data":"65827510d626ed6ae25c33d2f31d980a5b525d96d2e4c717b75d9003e81a8eef"} Sep 29 09:43:02 crc kubenswrapper[4991]: I0929 09:43:02.144281 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ab92a233-e04e-4547-aaac-1696efa06288","Type":"ContainerStarted","Data":"2b06e64b47dfd1dccecadf4334e4026dda2b377d92e406a77dd1044d5c1966cf"} Sep 29 09:43:02 crc kubenswrapper[4991]: I0929 09:43:02.144293 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ab92a233-e04e-4547-aaac-1696efa06288","Type":"ContainerStarted","Data":"55f59d90265d38ef58a72e0f4a17534611b7082bef1ebb5cfba4333a6a3b5e19"} Sep 29 09:43:02 crc kubenswrapper[4991]: I0929 09:43:02.144308 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ab92a233-e04e-4547-aaac-1696efa06288","Type":"ContainerStarted","Data":"bf66e7b7d610ab80b0f10062b3e5f0cd95986b31ca475393b39a7bcd377c5f75"} Sep 29 09:43:02 crc kubenswrapper[4991]: I0929 09:43:02.144326 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ab92a233-e04e-4547-aaac-1696efa06288","Type":"ContainerStarted","Data":"66cc7757f402646f003cd19890328e62c9d7abcd12100066a32c019b86ff3ce1"} Sep 29 09:43:02 crc kubenswrapper[4991]: I0929 09:43:02.144334 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ab92a233-e04e-4547-aaac-1696efa06288","Type":"ContainerStarted","Data":"98ae46f19ff7908aa72b5b7dfb934ad2a15ac4aa3ed252089e57a389619e2563"} Sep 29 09:43:02 crc kubenswrapper[4991]: I0929 09:43:02.145992 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-85769fb6d7-6szl6" event={"ID":"07590400-e477-42bd-bc2e-35c82852ce0b","Type":"ContainerStarted","Data":"989b562724f69c58849d2c0d6786c67be9a76bac8815ad82e65e1475df268ed3"} Sep 29 09:43:02 crc kubenswrapper[4991]: I0929 09:43:02.148807 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" event={"ID":"bd5e3f97-ca66-4895-84e1-2e147221371c","Type":"ContainerStarted","Data":"eef4e260435f779f90f8fa233dc32e29767292afca3eeb46e3d1fbab081f493e"} Sep 29 09:43:02 crc kubenswrapper[4991]: I0929 09:43:02.177523 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.849583097 podStartE2EDuration="9.177478189s" podCreationTimestamp="2025-09-29 09:42:53 +0000 UTC" firstStartedPulling="2025-09-29 09:42:54.573635056 +0000 UTC m=+310.429563084" lastFinishedPulling="2025-09-29 09:43:00.901530148 +0000 UTC m=+316.757458176" observedRunningTime="2025-09-29 09:43:02.169188987 +0000 UTC m=+318.025117025" watchObservedRunningTime="2025-09-29 09:43:02.177478189 +0000 UTC m=+318.033406247" Sep 29 09:43:03 crc kubenswrapper[4991]: I0929 09:43:03.030528 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-b2s5r" Sep 29 09:43:03 crc kubenswrapper[4991]: I0929 09:43:03.100201 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6g68t"] Sep 29 09:43:04 crc kubenswrapper[4991]: I0929 09:43:04.168234 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"00121deb-d371-4086-8a93-74d8a3716533","Type":"ContainerStarted","Data":"c2fdaa231fd5981bb7682023b4046930fb14bebc73116235a6ebc6338f93de85"} Sep 29 09:43:04 crc kubenswrapper[4991]: I0929 09:43:04.570380 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-c69dcc589-8n86l" Sep 29 09:43:05 crc kubenswrapper[4991]: I0929 09:43:05.181073 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"00121deb-d371-4086-8a93-74d8a3716533","Type":"ContainerStarted","Data":"b4a87f8b3fb7e0e1a3b63fc470981d1576a6a225d34012eb39f44277c151a04d"} Sep 29 09:43:05 crc kubenswrapper[4991]: I0929 09:43:05.181142 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"00121deb-d371-4086-8a93-74d8a3716533","Type":"ContainerStarted","Data":"58b142b39ef54fc3d3d4b56729d8a2d54c4bb081e573300547ad52d962e56bad"} Sep 29 09:43:05 crc kubenswrapper[4991]: I0929 09:43:05.181158 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"00121deb-d371-4086-8a93-74d8a3716533","Type":"ContainerStarted","Data":"3dffbd3581e27663f681f466d6af765e76c8008449e281ecd2e9e4e16c3c4679"} Sep 29 09:43:05 crc kubenswrapper[4991]: I0929 09:43:05.181171 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"00121deb-d371-4086-8a93-74d8a3716533","Type":"ContainerStarted","Data":"851115a9500e7d92d105a813866fb040b1f843a28353b2d716ed6d50d539eae6"} Sep 29 09:43:05 crc kubenswrapper[4991]: I0929 09:43:05.181182 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"00121deb-d371-4086-8a93-74d8a3716533","Type":"ContainerStarted","Data":"a42be3b54ba6a0299e28d58357ba711b5e1a973f3b12aeaeeba40796127ea547"} Sep 29 09:43:05 crc kubenswrapper[4991]: I0929 09:43:05.227519 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.345094065 podStartE2EDuration="7.227485063s" podCreationTimestamp="2025-09-29 09:42:58 +0000 UTC" firstStartedPulling="2025-09-29 09:43:00.084806223 +0000 UTC m=+315.940734261" lastFinishedPulling="2025-09-29 09:43:03.967197221 +0000 UTC m=+319.823125259" observedRunningTime="2025-09-29 09:43:05.221265854 +0000 UTC m=+321.077193912" watchObservedRunningTime="2025-09-29 09:43:05.227485063 +0000 UTC m=+321.083413131" Sep 29 09:43:07 crc kubenswrapper[4991]: I0929 09:43:07.424905 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:43:07 crc kubenswrapper[4991]: I0929 09:43:07.425365 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:43:07 crc kubenswrapper[4991]: I0929 09:43:07.433098 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:43:08 crc kubenswrapper[4991]: I0929 09:43:08.211938 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:43:08 crc kubenswrapper[4991]: I0929 09:43:08.280597 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-62rs2"] Sep 29 09:43:09 crc kubenswrapper[4991]: I0929 09:43:09.004613 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:43:17 crc kubenswrapper[4991]: I0929 09:43:17.948988 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:43:17 crc kubenswrapper[4991]: I0929 09:43:17.949527 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 
09:43:28.157126 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" podUID="980e8489-6002-47dd-b6c4-1be37ee0bad9" containerName="registry" containerID="cri-o://5c37ce28d04eb77e38300c56c4711d9d49be0666111cb426005cd0bbd6a19549" gracePeriod=30 Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.404262 4991 generic.go:334] "Generic (PLEG): container finished" podID="980e8489-6002-47dd-b6c4-1be37ee0bad9" containerID="5c37ce28d04eb77e38300c56c4711d9d49be0666111cb426005cd0bbd6a19549" exitCode=0 Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.404365 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" event={"ID":"980e8489-6002-47dd-b6c4-1be37ee0bad9","Type":"ContainerDied","Data":"5c37ce28d04eb77e38300c56c4711d9d49be0666111cb426005cd0bbd6a19549"} Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.554877 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.653850 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/980e8489-6002-47dd-b6c4-1be37ee0bad9-registry-tls\") pod \"980e8489-6002-47dd-b6c4-1be37ee0bad9\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.653895 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/980e8489-6002-47dd-b6c4-1be37ee0bad9-trusted-ca\") pod \"980e8489-6002-47dd-b6c4-1be37ee0bad9\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.654178 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"980e8489-6002-47dd-b6c4-1be37ee0bad9\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.654214 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/980e8489-6002-47dd-b6c4-1be37ee0bad9-registry-certificates\") pod \"980e8489-6002-47dd-b6c4-1be37ee0bad9\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.654239 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/980e8489-6002-47dd-b6c4-1be37ee0bad9-ca-trust-extracted\") pod \"980e8489-6002-47dd-b6c4-1be37ee0bad9\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.654270 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/980e8489-6002-47dd-b6c4-1be37ee0bad9-installation-pull-secrets\") pod \"980e8489-6002-47dd-b6c4-1be37ee0bad9\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.654330 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/980e8489-6002-47dd-b6c4-1be37ee0bad9-bound-sa-token\") pod 
\"980e8489-6002-47dd-b6c4-1be37ee0bad9\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.654409 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pklk2\" (UniqueName: \"kubernetes.io/projected/980e8489-6002-47dd-b6c4-1be37ee0bad9-kube-api-access-pklk2\") pod \"980e8489-6002-47dd-b6c4-1be37ee0bad9\" (UID: \"980e8489-6002-47dd-b6c4-1be37ee0bad9\") " Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.655879 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/980e8489-6002-47dd-b6c4-1be37ee0bad9-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "980e8489-6002-47dd-b6c4-1be37ee0bad9" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.656132 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/980e8489-6002-47dd-b6c4-1be37ee0bad9-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "980e8489-6002-47dd-b6c4-1be37ee0bad9" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.663047 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/980e8489-6002-47dd-b6c4-1be37ee0bad9-kube-api-access-pklk2" (OuterVolumeSpecName: "kube-api-access-pklk2") pod "980e8489-6002-47dd-b6c4-1be37ee0bad9" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9"). InnerVolumeSpecName "kube-api-access-pklk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.666535 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/980e8489-6002-47dd-b6c4-1be37ee0bad9-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "980e8489-6002-47dd-b6c4-1be37ee0bad9" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.695046 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/980e8489-6002-47dd-b6c4-1be37ee0bad9-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "980e8489-6002-47dd-b6c4-1be37ee0bad9" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.700339 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/980e8489-6002-47dd-b6c4-1be37ee0bad9-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "980e8489-6002-47dd-b6c4-1be37ee0bad9" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.711463 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "980e8489-6002-47dd-b6c4-1be37ee0bad9" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9"). 
InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.714055 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/980e8489-6002-47dd-b6c4-1be37ee0bad9-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "980e8489-6002-47dd-b6c4-1be37ee0bad9" (UID: "980e8489-6002-47dd-b6c4-1be37ee0bad9"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.755711 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pklk2\" (UniqueName: \"kubernetes.io/projected/980e8489-6002-47dd-b6c4-1be37ee0bad9-kube-api-access-pklk2\") on node \"crc\" DevicePath \"\"" Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.755752 4991 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/980e8489-6002-47dd-b6c4-1be37ee0bad9-registry-tls\") on node \"crc\" DevicePath \"\"" Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.755766 4991 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/980e8489-6002-47dd-b6c4-1be37ee0bad9-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.755774 4991 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/980e8489-6002-47dd-b6c4-1be37ee0bad9-registry-certificates\") on node \"crc\" DevicePath \"\"" Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.755783 4991 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/980e8489-6002-47dd-b6c4-1be37ee0bad9-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.755792 4991 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/980e8489-6002-47dd-b6c4-1be37ee0bad9-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Sep 29 09:43:28 crc kubenswrapper[4991]: I0929 09:43:28.755799 4991 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/980e8489-6002-47dd-b6c4-1be37ee0bad9-bound-sa-token\") on node \"crc\" DevicePath \"\"" Sep 29 09:43:29 crc kubenswrapper[4991]: I0929 09:43:29.414515 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" event={"ID":"980e8489-6002-47dd-b6c4-1be37ee0bad9","Type":"ContainerDied","Data":"1f4403c4f96055b203c15740afa0202c71581ffd2c8e7c42f68116678cb9e810"} Sep 29 09:43:29 crc kubenswrapper[4991]: I0929 09:43:29.414607 4991 scope.go:117] "RemoveContainer" containerID="5c37ce28d04eb77e38300c56c4711d9d49be0666111cb426005cd0bbd6a19549" Sep 29 09:43:29 crc kubenswrapper[4991]: I0929 09:43:29.414657 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6g68t" Sep 29 09:43:29 crc kubenswrapper[4991]: I0929 09:43:29.446246 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6g68t"] Sep 29 09:43:29 crc kubenswrapper[4991]: I0929 09:43:29.457778 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6g68t"] Sep 29 09:43:30 crc kubenswrapper[4991]: I0929 09:43:30.937595 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="980e8489-6002-47dd-b6c4-1be37ee0bad9" path="/var/lib/kubelet/pods/980e8489-6002-47dd-b6c4-1be37ee0bad9/volumes" Sep 29 09:43:33 crc kubenswrapper[4991]: I0929 09:43:33.335651 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-62rs2" podUID="011674e5-30e1-4632-aee7-d0dbb06e6824" containerName="console" containerID="cri-o://caf3208614b11e32b9e2684fb840ea624c4bb80f584eb20f0a2cce8a90513f29" gracePeriod=15 Sep 29 09:43:33 crc kubenswrapper[4991]: I0929 09:43:33.860568 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-62rs2_011674e5-30e1-4632-aee7-d0dbb06e6824/console/0.log" Sep 29 09:43:33 crc kubenswrapper[4991]: I0929 09:43:33.861009 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:43:33 crc kubenswrapper[4991]: I0929 09:43:33.940706 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-trusted-ca-bundle\") pod \"011674e5-30e1-4632-aee7-d0dbb06e6824\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " Sep 29 09:43:33 crc kubenswrapper[4991]: I0929 09:43:33.940807 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-oauth-serving-cert\") pod \"011674e5-30e1-4632-aee7-d0dbb06e6824\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " Sep 29 09:43:33 crc kubenswrapper[4991]: I0929 09:43:33.940869 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb79g\" (UniqueName: \"kubernetes.io/projected/011674e5-30e1-4632-aee7-d0dbb06e6824-kube-api-access-rb79g\") pod \"011674e5-30e1-4632-aee7-d0dbb06e6824\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " Sep 29 09:43:33 crc kubenswrapper[4991]: I0929 09:43:33.941988 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "011674e5-30e1-4632-aee7-d0dbb06e6824" (UID: "011674e5-30e1-4632-aee7-d0dbb06e6824"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:43:33 crc kubenswrapper[4991]: I0929 09:43:33.942132 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "011674e5-30e1-4632-aee7-d0dbb06e6824" (UID: "011674e5-30e1-4632-aee7-d0dbb06e6824"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:43:33 crc kubenswrapper[4991]: I0929 09:43:33.942116 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/011674e5-30e1-4632-aee7-d0dbb06e6824-console-oauth-config\") pod \"011674e5-30e1-4632-aee7-d0dbb06e6824\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " Sep 29 09:43:33 crc kubenswrapper[4991]: I0929 09:43:33.942283 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/011674e5-30e1-4632-aee7-d0dbb06e6824-console-serving-cert\") pod \"011674e5-30e1-4632-aee7-d0dbb06e6824\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " Sep 29 09:43:33 crc kubenswrapper[4991]: I0929 09:43:33.942393 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-console-config\") pod \"011674e5-30e1-4632-aee7-d0dbb06e6824\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " Sep 29 09:43:33 crc kubenswrapper[4991]: I0929 09:43:33.942510 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-service-ca\") pod \"011674e5-30e1-4632-aee7-d0dbb06e6824\" (UID: \"011674e5-30e1-4632-aee7-d0dbb06e6824\") " Sep 29 09:43:33 crc kubenswrapper[4991]: I0929 09:43:33.943359 4991 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:43:33 crc kubenswrapper[4991]: I0929 09:43:33.943420 4991 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:43:33 crc kubenswrapper[4991]: I0929 09:43:33.943355 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-service-ca" (OuterVolumeSpecName: "service-ca") pod "011674e5-30e1-4632-aee7-d0dbb06e6824" (UID: "011674e5-30e1-4632-aee7-d0dbb06e6824"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:43:33 crc kubenswrapper[4991]: I0929 09:43:33.944115 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-console-config" (OuterVolumeSpecName: "console-config") pod "011674e5-30e1-4632-aee7-d0dbb06e6824" (UID: "011674e5-30e1-4632-aee7-d0dbb06e6824"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:43:33 crc kubenswrapper[4991]: I0929 09:43:33.948337 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/011674e5-30e1-4632-aee7-d0dbb06e6824-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "011674e5-30e1-4632-aee7-d0dbb06e6824" (UID: "011674e5-30e1-4632-aee7-d0dbb06e6824"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:43:33 crc kubenswrapper[4991]: I0929 09:43:33.948640 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/011674e5-30e1-4632-aee7-d0dbb06e6824-kube-api-access-rb79g" (OuterVolumeSpecName: "kube-api-access-rb79g") pod "011674e5-30e1-4632-aee7-d0dbb06e6824" (UID: "011674e5-30e1-4632-aee7-d0dbb06e6824"). InnerVolumeSpecName "kube-api-access-rb79g". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:43:33 crc kubenswrapper[4991]: I0929 09:43:33.954448 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/011674e5-30e1-4632-aee7-d0dbb06e6824-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "011674e5-30e1-4632-aee7-d0dbb06e6824" (UID: "011674e5-30e1-4632-aee7-d0dbb06e6824"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:43:34 crc kubenswrapper[4991]: I0929 09:43:34.045671 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb79g\" (UniqueName: \"kubernetes.io/projected/011674e5-30e1-4632-aee7-d0dbb06e6824-kube-api-access-rb79g\") on node \"crc\" DevicePath \"\"" Sep 29 09:43:34 crc kubenswrapper[4991]: I0929 09:43:34.045721 4991 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/011674e5-30e1-4632-aee7-d0dbb06e6824-console-oauth-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:43:34 crc kubenswrapper[4991]: I0929 09:43:34.045742 4991 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/011674e5-30e1-4632-aee7-d0dbb06e6824-console-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:43:34 crc kubenswrapper[4991]: I0929 09:43:34.045761 4991 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-console-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:43:34 crc kubenswrapper[4991]: I0929 09:43:34.045780 4991 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/011674e5-30e1-4632-aee7-d0dbb06e6824-service-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:43:34 crc kubenswrapper[4991]: I0929 09:43:34.463776 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-62rs2_011674e5-30e1-4632-aee7-d0dbb06e6824/console/0.log" Sep 29 09:43:34 crc kubenswrapper[4991]: I0929 09:43:34.463842 4991 generic.go:334] "Generic (PLEG): container finished" podID="011674e5-30e1-4632-aee7-d0dbb06e6824" containerID="caf3208614b11e32b9e2684fb840ea624c4bb80f584eb20f0a2cce8a90513f29" exitCode=2 Sep 29 09:43:34 crc kubenswrapper[4991]: I0929 09:43:34.463878 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-62rs2" event={"ID":"011674e5-30e1-4632-aee7-d0dbb06e6824","Type":"ContainerDied","Data":"caf3208614b11e32b9e2684fb840ea624c4bb80f584eb20f0a2cce8a90513f29"} Sep 29 09:43:34 crc kubenswrapper[4991]: I0929 09:43:34.463986 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-62rs2" event={"ID":"011674e5-30e1-4632-aee7-d0dbb06e6824","Type":"ContainerDied","Data":"d659fb45b37c8e147e481f1da0f937520f3514c071d130486b2c751ea2fbcaf1"} Sep 29 09:43:34 crc kubenswrapper[4991]: I0929 09:43:34.463999 4991 scope.go:117] "RemoveContainer" 
containerID="caf3208614b11e32b9e2684fb840ea624c4bb80f584eb20f0a2cce8a90513f29" Sep 29 09:43:34 crc kubenswrapper[4991]: I0929 09:43:34.464004 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-62rs2" Sep 29 09:43:34 crc kubenswrapper[4991]: I0929 09:43:34.494335 4991 scope.go:117] "RemoveContainer" containerID="caf3208614b11e32b9e2684fb840ea624c4bb80f584eb20f0a2cce8a90513f29" Sep 29 09:43:34 crc kubenswrapper[4991]: E0929 09:43:34.495057 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caf3208614b11e32b9e2684fb840ea624c4bb80f584eb20f0a2cce8a90513f29\": container with ID starting with caf3208614b11e32b9e2684fb840ea624c4bb80f584eb20f0a2cce8a90513f29 not found: ID does not exist" containerID="caf3208614b11e32b9e2684fb840ea624c4bb80f584eb20f0a2cce8a90513f29" Sep 29 09:43:34 crc kubenswrapper[4991]: I0929 09:43:34.495119 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf3208614b11e32b9e2684fb840ea624c4bb80f584eb20f0a2cce8a90513f29"} err="failed to get container status \"caf3208614b11e32b9e2684fb840ea624c4bb80f584eb20f0a2cce8a90513f29\": rpc error: code = NotFound desc = could not find container \"caf3208614b11e32b9e2684fb840ea624c4bb80f584eb20f0a2cce8a90513f29\": container with ID starting with caf3208614b11e32b9e2684fb840ea624c4bb80f584eb20f0a2cce8a90513f29 not found: ID does not exist" Sep 29 09:43:34 crc kubenswrapper[4991]: I0929 09:43:34.530740 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-62rs2"] Sep 29 09:43:34 crc kubenswrapper[4991]: I0929 09:43:34.542255 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-62rs2"] Sep 29 09:43:34 crc kubenswrapper[4991]: I0929 09:43:34.939456 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="011674e5-30e1-4632-aee7-d0dbb06e6824" path="/var/lib/kubelet/pods/011674e5-30e1-4632-aee7-d0dbb06e6824/volumes" Sep 29 09:43:37 crc kubenswrapper[4991]: I0929 09:43:37.946898 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 09:43:37 crc kubenswrapper[4991]: I0929 09:43:37.947016 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 09:43:37 crc kubenswrapper[4991]: I0929 09:43:37.960407 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:43:37 crc kubenswrapper[4991]: I0929 09:43:37.966984 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7bfdd9cc49-snqkt" Sep 29 09:43:59 crc kubenswrapper[4991]: I0929 09:43:59.004803 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:43:59 crc kubenswrapper[4991]: I0929 09:43:59.093369 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
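
The "ContainerStatus from runtime service failed ... NotFound" error followed by "DeleteContainer returned error" above looks alarming but is benign: the container was already removed, so the kubelet treats the failed status lookup as "nothing left to delete". A sketch of that idempotent-removal pattern, with errNotFound as a hypothetical stand-in for the CRI's rpc NotFound code:

package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for "rpc error: code = NotFound" from the runtime.
var errNotFound = errors.New("container not found: ID does not exist")

// removeContainer stands in for the ContainerStatus + RemoveContainer pair;
// alive simulates the runtime's view of which container IDs still exist.
func removeContainer(id string, alive map[string]bool) error {
	if !alive[id] {
		return fmt.Errorf("failed to get container status %q: %w", id, errNotFound)
	}
	delete(alive, id)
	return nil
}

func main() {
	alive := map[string]bool{} // the runtime already forgot this container
	id := "caf3208614b11e32b9e2684fb840ea624c4bb80f584eb20f0a2cce8a90513f29"

	if err := removeContainer(id, alive); err != nil {
		if errors.Is(err, errNotFound) {
			// Already gone: log it, as the kubelet does, and move on.
			fmt.Println("DeleteContainer returned error; treating as already removed:", err)
			return
		}
		fmt.Println("real failure, surface it:", err)
	}
}
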
pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:43:59 crc kubenswrapper[4991]: I0929 09:43:59.704415 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Sep 29 09:44:07 crc kubenswrapper[4991]: I0929 09:44:07.947250 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 09:44:07 crc kubenswrapper[4991]: I0929 09:44:07.947829 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.652930 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-74fb6f4f4b-xpwzn"] Sep 29 09:44:22 crc kubenswrapper[4991]: E0929 09:44:22.656094 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="980e8489-6002-47dd-b6c4-1be37ee0bad9" containerName="registry" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.656358 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="980e8489-6002-47dd-b6c4-1be37ee0bad9" containerName="registry" Sep 29 09:44:22 crc kubenswrapper[4991]: E0929 09:44:22.656561 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="011674e5-30e1-4632-aee7-d0dbb06e6824" containerName="console" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.656747 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="011674e5-30e1-4632-aee7-d0dbb06e6824" containerName="console" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.657253 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="011674e5-30e1-4632-aee7-d0dbb06e6824" containerName="console" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.657478 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="980e8489-6002-47dd-b6c4-1be37ee0bad9" containerName="registry" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.658534 4991 util.go:30] "No sandbox for pod can be found. 
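
The machine-config-daemon entries recurring every thirty seconds here are plain HTTP liveness probes: the kubelet issues a GET against http://127.0.0.1:8798/health and logs a failure when the connection is refused. A minimal sketch of one such check; the URL and the one-second timeout mirror the log output, not the pod's actual probe spec, which is not visible here:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeHTTP performs a single liveness check: any transport error or
// non-2xx status counts as a failure, exactly what the log lines report.
func probeHTTP(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeHTTP("http://127.0.0.1:8798/health", time.Second); err != nil {
		fmt.Printf("Probe failed: probeType=%q output=%q\n", "Liveness", err.Error())
	}
}

Only once the failures exceed the probe's failureThreshold does the kubelet act on them, which is the "failed liveness probe, will be restarted" entry further down.
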
Need to start a new one" pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.670558 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74fb6f4f4b-xpwzn"] Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.800031 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-console-config\") pod \"console-74fb6f4f4b-xpwzn\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.800411 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jdd9\" (UniqueName: \"kubernetes.io/projected/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-kube-api-access-6jdd9\") pod \"console-74fb6f4f4b-xpwzn\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.800573 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-oauth-serving-cert\") pod \"console-74fb6f4f4b-xpwzn\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.800773 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-console-oauth-config\") pod \"console-74fb6f4f4b-xpwzn\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.800947 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-console-serving-cert\") pod \"console-74fb6f4f4b-xpwzn\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.801163 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-trusted-ca-bundle\") pod \"console-74fb6f4f4b-xpwzn\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.801347 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-service-ca\") pod \"console-74fb6f4f4b-xpwzn\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.902690 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-console-serving-cert\") pod \"console-74fb6f4f4b-xpwzn\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:22 crc 
kubenswrapper[4991]: I0929 09:44:22.903071 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-trusted-ca-bundle\") pod \"console-74fb6f4f4b-xpwzn\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.903153 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-service-ca\") pod \"console-74fb6f4f4b-xpwzn\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.903198 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-console-config\") pod \"console-74fb6f4f4b-xpwzn\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.903256 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jdd9\" (UniqueName: \"kubernetes.io/projected/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-kube-api-access-6jdd9\") pod \"console-74fb6f4f4b-xpwzn\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.903293 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-oauth-serving-cert\") pod \"console-74fb6f4f4b-xpwzn\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.903370 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-console-oauth-config\") pod \"console-74fb6f4f4b-xpwzn\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.904542 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-oauth-serving-cert\") pod \"console-74fb6f4f4b-xpwzn\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.904559 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-service-ca\") pod \"console-74fb6f4f4b-xpwzn\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.904808 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-console-config\") pod \"console-74fb6f4f4b-xpwzn\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.904911 4991 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-trusted-ca-bundle\") pod \"console-74fb6f4f4b-xpwzn\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.919128 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-console-oauth-config\") pod \"console-74fb6f4f4b-xpwzn\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.919436 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-console-serving-cert\") pod \"console-74fb6f4f4b-xpwzn\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:22 crc kubenswrapper[4991]: I0929 09:44:22.925444 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jdd9\" (UniqueName: \"kubernetes.io/projected/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-kube-api-access-6jdd9\") pod \"console-74fb6f4f4b-xpwzn\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:23 crc kubenswrapper[4991]: I0929 09:44:23.002887 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:23 crc kubenswrapper[4991]: I0929 09:44:23.275766 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74fb6f4f4b-xpwzn"] Sep 29 09:44:23 crc kubenswrapper[4991]: I0929 09:44:23.838622 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74fb6f4f4b-xpwzn" event={"ID":"c43f31f9-b468-4b09-ad32-ae144ae4cd8e","Type":"ContainerStarted","Data":"a9b1b6201d74d1f7bef6ee718a68321ef101bd3fe4cb394b63c6b5c01e627a2c"} Sep 29 09:44:23 crc kubenswrapper[4991]: I0929 09:44:23.838688 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74fb6f4f4b-xpwzn" event={"ID":"c43f31f9-b468-4b09-ad32-ae144ae4cd8e","Type":"ContainerStarted","Data":"06ca328f79243b453ddb13a769315778d8975696c452dcf4776cf450af05f26b"} Sep 29 09:44:23 crc kubenswrapper[4991]: I0929 09:44:23.864302 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-74fb6f4f4b-xpwzn" podStartSLOduration=1.8642254010000001 podStartE2EDuration="1.864225401s" podCreationTimestamp="2025-09-29 09:44:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:44:23.860252705 +0000 UTC m=+399.716180773" watchObservedRunningTime="2025-09-29 09:44:23.864225401 +0000 UTC m=+399.720153479" Sep 29 09:44:33 crc kubenswrapper[4991]: I0929 09:44:33.003569 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:33 crc kubenswrapper[4991]: I0929 09:44:33.004458 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:33 crc kubenswrapper[4991]: I0929 09:44:33.012438 4991 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:33 crc kubenswrapper[4991]: I0929 09:44:33.918630 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:44:34 crc kubenswrapper[4991]: I0929 09:44:34.022990 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-595bf5d57c-mhhb4"] Sep 29 09:44:37 crc kubenswrapper[4991]: I0929 09:44:37.946471 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 09:44:37 crc kubenswrapper[4991]: I0929 09:44:37.947167 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 09:44:37 crc kubenswrapper[4991]: I0929 09:44:37.947230 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 09:44:37 crc kubenswrapper[4991]: I0929 09:44:37.948141 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"056391fc94cbbece8ac64c5877a12f6599370622ae65f3c7e775dc34b320aeb9"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 09:44:37 crc kubenswrapper[4991]: I0929 09:44:37.948261 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://056391fc94cbbece8ac64c5877a12f6599370622ae65f3c7e775dc34b320aeb9" gracePeriod=600 Sep 29 09:44:38 crc kubenswrapper[4991]: I0929 09:44:38.957248 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="056391fc94cbbece8ac64c5877a12f6599370622ae65f3c7e775dc34b320aeb9" exitCode=0 Sep 29 09:44:38 crc kubenswrapper[4991]: I0929 09:44:38.957349 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"056391fc94cbbece8ac64c5877a12f6599370622ae65f3c7e775dc34b320aeb9"} Sep 29 09:44:38 crc kubenswrapper[4991]: I0929 09:44:38.957622 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"5e90e609f04cdc8c490f65e3c9ca90d77992faea1b259c2cf78d14fefc1dd179"} Sep 29 09:44:38 crc kubenswrapper[4991]: I0929 09:44:38.957648 4991 scope.go:117] "RemoveContainer" containerID="82e56550d803f864ec2b9c2064334cefdc7add16b257122728c69b97e77efd40" Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.074489 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-595bf5d57c-mhhb4" 
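
"Killing container with a grace period" is the standard two-phase stop, visible twice in this stretch: a polite stop request, a wait bounded by the grace period (600s for the machine-config-daemon above, 15s for the console pods), then a forced kill only if the wait expires. A sketch of the waiting logic, with stop and forceKill as hypothetical stand-ins for the runtime calls:

package main

import (
	"fmt"
	"time"
)

// killWithGracePeriod asks the container to exit, waits for it, and
// escalates to a forced kill only if the grace period elapses first.
func killWithGracePeriod(stop func(), exited <-chan struct{}, forceKill func(), grace time.Duration) {
	stop() // polite request first (SIGTERM via the runtime)
	select {
	case <-exited:
		fmt.Println("container exited within the grace period")
	case <-time.After(grace):
		fmt.Println("grace period elapsed; force killing")
		forceKill() // SIGKILL via the runtime
	}
}

func main() {
	exited := make(chan struct{})
	stop := func() {
		// Simulate a container that shuts down shortly after the request,
		// as the machine-config-daemon does here (ContainerDied exitCode=0
		// one second after the kill is issued).
		go func() { time.Sleep(100 * time.Millisecond); close(exited) }()
	}
	killWithGracePeriod(stop, exited, func() {}, 15*time.Second) // gracePeriod=15, as for the console
}
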
podUID="e90cc054-ad78-4ef4-8b02-a3ed14f14d87" containerName="console" containerID="cri-o://3434cca31b08b5441fa44b7fb88e2394f4b53aa3a211b018999cda4f0a34f220" gracePeriod=15 Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.505689 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-595bf5d57c-mhhb4_e90cc054-ad78-4ef4-8b02-a3ed14f14d87/console/0.log" Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.506075 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.667927 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-console-config\") pod \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.668021 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5j5c\" (UniqueName: \"kubernetes.io/projected/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-kube-api-access-p5j5c\") pod \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.668100 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-console-serving-cert\") pod \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.668159 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-trusted-ca-bundle\") pod \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.668201 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-console-oauth-config\") pod \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.668282 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-service-ca\") pod \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.668320 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-oauth-serving-cert\") pod \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\" (UID: \"e90cc054-ad78-4ef4-8b02-a3ed14f14d87\") " Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.669681 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-console-config" (OuterVolumeSpecName: "console-config") pod "e90cc054-ad78-4ef4-8b02-a3ed14f14d87" (UID: "e90cc054-ad78-4ef4-8b02-a3ed14f14d87"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.669748 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e90cc054-ad78-4ef4-8b02-a3ed14f14d87" (UID: "e90cc054-ad78-4ef4-8b02-a3ed14f14d87"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.669904 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e90cc054-ad78-4ef4-8b02-a3ed14f14d87" (UID: "e90cc054-ad78-4ef4-8b02-a3ed14f14d87"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.670102 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-service-ca" (OuterVolumeSpecName: "service-ca") pod "e90cc054-ad78-4ef4-8b02-a3ed14f14d87" (UID: "e90cc054-ad78-4ef4-8b02-a3ed14f14d87"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.680234 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e90cc054-ad78-4ef4-8b02-a3ed14f14d87" (UID: "e90cc054-ad78-4ef4-8b02-a3ed14f14d87"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.680263 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-kube-api-access-p5j5c" (OuterVolumeSpecName: "kube-api-access-p5j5c") pod "e90cc054-ad78-4ef4-8b02-a3ed14f14d87" (UID: "e90cc054-ad78-4ef4-8b02-a3ed14f14d87"). InnerVolumeSpecName "kube-api-access-p5j5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.680266 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e90cc054-ad78-4ef4-8b02-a3ed14f14d87" (UID: "e90cc054-ad78-4ef4-8b02-a3ed14f14d87"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.770688 4991 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-console-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.770727 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5j5c\" (UniqueName: \"kubernetes.io/projected/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-kube-api-access-p5j5c\") on node \"crc\" DevicePath \"\"" Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.770742 4991 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-console-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.770755 4991 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.770766 4991 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-console-oauth-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.770776 4991 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-service-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:44:59 crc kubenswrapper[4991]: I0929 09:44:59.770788 4991 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e90cc054-ad78-4ef4-8b02-a3ed14f14d87-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.149220 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-595bf5d57c-mhhb4_e90cc054-ad78-4ef4-8b02-a3ed14f14d87/console/0.log" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.149328 4991 generic.go:334] "Generic (PLEG): container finished" podID="e90cc054-ad78-4ef4-8b02-a3ed14f14d87" containerID="3434cca31b08b5441fa44b7fb88e2394f4b53aa3a211b018999cda4f0a34f220" exitCode=2 Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.149390 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-595bf5d57c-mhhb4" event={"ID":"e90cc054-ad78-4ef4-8b02-a3ed14f14d87","Type":"ContainerDied","Data":"3434cca31b08b5441fa44b7fb88e2394f4b53aa3a211b018999cda4f0a34f220"} Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.149426 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-595bf5d57c-mhhb4" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.149457 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-595bf5d57c-mhhb4" event={"ID":"e90cc054-ad78-4ef4-8b02-a3ed14f14d87","Type":"ContainerDied","Data":"9715de35e1eab4ad14b87ceff500c70dc5eb6a6afde6c05e94fda46f45eb12f9"} Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.149517 4991 scope.go:117] "RemoveContainer" containerID="3434cca31b08b5441fa44b7fb88e2394f4b53aa3a211b018999cda4f0a34f220" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.156435 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29318985-gwtmc"] Sep 29 09:45:00 crc kubenswrapper[4991]: E0929 09:45:00.157007 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e90cc054-ad78-4ef4-8b02-a3ed14f14d87" containerName="console" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.157031 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="e90cc054-ad78-4ef4-8b02-a3ed14f14d87" containerName="console" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.157199 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="e90cc054-ad78-4ef4-8b02-a3ed14f14d87" containerName="console" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.157654 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29318985-gwtmc" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.160138 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.160391 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.174408 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29318985-gwtmc"] Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.196875 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-595bf5d57c-mhhb4"] Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.201231 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-595bf5d57c-mhhb4"] Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.203217 4991 scope.go:117] "RemoveContainer" containerID="3434cca31b08b5441fa44b7fb88e2394f4b53aa3a211b018999cda4f0a34f220" Sep 29 09:45:00 crc kubenswrapper[4991]: E0929 09:45:00.203588 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3434cca31b08b5441fa44b7fb88e2394f4b53aa3a211b018999cda4f0a34f220\": container with ID starting with 3434cca31b08b5441fa44b7fb88e2394f4b53aa3a211b018999cda4f0a34f220 not found: ID does not exist" containerID="3434cca31b08b5441fa44b7fb88e2394f4b53aa3a211b018999cda4f0a34f220" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.203624 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3434cca31b08b5441fa44b7fb88e2394f4b53aa3a211b018999cda4f0a34f220"} err="failed to get container status \"3434cca31b08b5441fa44b7fb88e2394f4b53aa3a211b018999cda4f0a34f220\": rpc error: code = NotFound desc = could not find container 
\"3434cca31b08b5441fa44b7fb88e2394f4b53aa3a211b018999cda4f0a34f220\": container with ID starting with 3434cca31b08b5441fa44b7fb88e2394f4b53aa3a211b018999cda4f0a34f220 not found: ID does not exist" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.278852 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf87699d-b4ef-43ee-a37c-bdf593a7fd24-secret-volume\") pod \"collect-profiles-29318985-gwtmc\" (UID: \"bf87699d-b4ef-43ee-a37c-bdf593a7fd24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29318985-gwtmc" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.278943 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf87699d-b4ef-43ee-a37c-bdf593a7fd24-config-volume\") pod \"collect-profiles-29318985-gwtmc\" (UID: \"bf87699d-b4ef-43ee-a37c-bdf593a7fd24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29318985-gwtmc" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.279126 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kqxw\" (UniqueName: \"kubernetes.io/projected/bf87699d-b4ef-43ee-a37c-bdf593a7fd24-kube-api-access-8kqxw\") pod \"collect-profiles-29318985-gwtmc\" (UID: \"bf87699d-b4ef-43ee-a37c-bdf593a7fd24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29318985-gwtmc" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.380594 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf87699d-b4ef-43ee-a37c-bdf593a7fd24-secret-volume\") pod \"collect-profiles-29318985-gwtmc\" (UID: \"bf87699d-b4ef-43ee-a37c-bdf593a7fd24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29318985-gwtmc" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.380651 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf87699d-b4ef-43ee-a37c-bdf593a7fd24-config-volume\") pod \"collect-profiles-29318985-gwtmc\" (UID: \"bf87699d-b4ef-43ee-a37c-bdf593a7fd24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29318985-gwtmc" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.380702 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kqxw\" (UniqueName: \"kubernetes.io/projected/bf87699d-b4ef-43ee-a37c-bdf593a7fd24-kube-api-access-8kqxw\") pod \"collect-profiles-29318985-gwtmc\" (UID: \"bf87699d-b4ef-43ee-a37c-bdf593a7fd24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29318985-gwtmc" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.381555 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf87699d-b4ef-43ee-a37c-bdf593a7fd24-config-volume\") pod \"collect-profiles-29318985-gwtmc\" (UID: \"bf87699d-b4ef-43ee-a37c-bdf593a7fd24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29318985-gwtmc" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.385902 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf87699d-b4ef-43ee-a37c-bdf593a7fd24-secret-volume\") pod \"collect-profiles-29318985-gwtmc\" (UID: \"bf87699d-b4ef-43ee-a37c-bdf593a7fd24\") " 
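
Pod startup mirrors teardown in reverse, and the collect-profiles volumes above walk through all three stages per volume: "VerifyControllerAttachedVolume started" (a formality for node-local types like configmap, secret, and projected), "MountVolume started", and "MountVolume.SetUp succeeded". A compressed sketch of that pipeline; setUp is a hypothetical stand-in for the per-plugin mount:

package main

import "fmt"

// setUp stands in for MountVolume.SetUp; each plugin (configmap, secret,
// projected, empty-dir) implements its own version.
func setUp(name string) error {
	fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", name)
	return nil
}

func main() {
	volumes := []string{"secret-volume", "config-volume", "kube-api-access-8kqxw"}

	for _, v := range volumes {
		fmt.Printf("operationExecutor.VerifyControllerAttachedVolume started for volume %q\n", v)
		fmt.Printf("operationExecutor.MountVolume started for volume %q\n", v)
		if err := setUp(v); err != nil {
			fmt.Println("mount failed; the pod stays pending:", err)
			return
		}
	}
	fmt.Println("all volumes mounted; containers may start")
}

A pod's containers are not started until every volume in its spec has cleared this loop, which is why the ContainerStarted event below appears only after the last SetUp line.
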
pod="openshift-operator-lifecycle-manager/collect-profiles-29318985-gwtmc" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.401887 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kqxw\" (UniqueName: \"kubernetes.io/projected/bf87699d-b4ef-43ee-a37c-bdf593a7fd24-kube-api-access-8kqxw\") pod \"collect-profiles-29318985-gwtmc\" (UID: \"bf87699d-b4ef-43ee-a37c-bdf593a7fd24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29318985-gwtmc" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.500779 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29318985-gwtmc" Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.757319 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29318985-gwtmc"] Sep 29 09:45:00 crc kubenswrapper[4991]: I0929 09:45:00.935817 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e90cc054-ad78-4ef4-8b02-a3ed14f14d87" path="/var/lib/kubelet/pods/e90cc054-ad78-4ef4-8b02-a3ed14f14d87/volumes" Sep 29 09:45:01 crc kubenswrapper[4991]: I0929 09:45:01.158640 4991 generic.go:334] "Generic (PLEG): container finished" podID="bf87699d-b4ef-43ee-a37c-bdf593a7fd24" containerID="87345c181673e5a718adfb85afa66e6c1373ee82fd5a9ef4d6927309fffab928" exitCode=0 Sep 29 09:45:01 crc kubenswrapper[4991]: I0929 09:45:01.158735 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29318985-gwtmc" event={"ID":"bf87699d-b4ef-43ee-a37c-bdf593a7fd24","Type":"ContainerDied","Data":"87345c181673e5a718adfb85afa66e6c1373ee82fd5a9ef4d6927309fffab928"} Sep 29 09:45:01 crc kubenswrapper[4991]: I0929 09:45:01.158766 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29318985-gwtmc" event={"ID":"bf87699d-b4ef-43ee-a37c-bdf593a7fd24","Type":"ContainerStarted","Data":"cb94e67ba5f12db348ea6abacf0f9269e8e6343d7683baa151e33be094abe222"} Sep 29 09:45:02 crc kubenswrapper[4991]: I0929 09:45:02.476236 4991 util.go:48] "No ready sandbox for pod can be found. 
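
The "Generic (PLEG)" and "SyncLoop (PLEG)" pairs above come from the pod lifecycle event generator: it periodically relists containers from the runtime, diffs the snapshot against the previous one, and turns each transition into a ContainerStarted or ContainerDied event for the sync loop to consume. A toy version of that diff, with short hypothetical IDs standing in for the full hashes logged above:

package main

import "fmt"

type event struct {
	Type string // "ContainerStarted" or "ContainerDied"
	ID   string
}

// diff compares two relist snapshots (container ID -> running?) and emits
// one event per observed transition, the way the PLEG entries pair up here.
func diff(prev, curr map[string]bool) []event {
	var events []event
	for id, running := range curr {
		switch {
		case running && !prev[id]:
			events = append(events, event{"ContainerStarted", id})
		case !running && prev[id]:
			events = append(events, event{"ContainerDied", id})
		}
	}
	return events
}

func main() {
	prev := map[string]bool{"87345c18": true, "cb94e67b": true}
	curr := map[string]bool{"87345c18": false, "cb94e67b": true} // collect-profiles exited 0

	for _, e := range diff(prev, curr) {
		fmt.Printf("SyncLoop (PLEG): event for pod Type=%q Data=%q\n", e.Type, e.ID)
	}
}
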
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29318985-gwtmc" Sep 29 09:45:02 crc kubenswrapper[4991]: I0929 09:45:02.620240 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf87699d-b4ef-43ee-a37c-bdf593a7fd24-config-volume\") pod \"bf87699d-b4ef-43ee-a37c-bdf593a7fd24\" (UID: \"bf87699d-b4ef-43ee-a37c-bdf593a7fd24\") " Sep 29 09:45:02 crc kubenswrapper[4991]: I0929 09:45:02.620383 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kqxw\" (UniqueName: \"kubernetes.io/projected/bf87699d-b4ef-43ee-a37c-bdf593a7fd24-kube-api-access-8kqxw\") pod \"bf87699d-b4ef-43ee-a37c-bdf593a7fd24\" (UID: \"bf87699d-b4ef-43ee-a37c-bdf593a7fd24\") " Sep 29 09:45:02 crc kubenswrapper[4991]: I0929 09:45:02.620461 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf87699d-b4ef-43ee-a37c-bdf593a7fd24-secret-volume\") pod \"bf87699d-b4ef-43ee-a37c-bdf593a7fd24\" (UID: \"bf87699d-b4ef-43ee-a37c-bdf593a7fd24\") " Sep 29 09:45:02 crc kubenswrapper[4991]: I0929 09:45:02.622703 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf87699d-b4ef-43ee-a37c-bdf593a7fd24-config-volume" (OuterVolumeSpecName: "config-volume") pod "bf87699d-b4ef-43ee-a37c-bdf593a7fd24" (UID: "bf87699d-b4ef-43ee-a37c-bdf593a7fd24"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:45:02 crc kubenswrapper[4991]: I0929 09:45:02.634168 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf87699d-b4ef-43ee-a37c-bdf593a7fd24-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bf87699d-b4ef-43ee-a37c-bdf593a7fd24" (UID: "bf87699d-b4ef-43ee-a37c-bdf593a7fd24"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:45:02 crc kubenswrapper[4991]: I0929 09:45:02.636888 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf87699d-b4ef-43ee-a37c-bdf593a7fd24-kube-api-access-8kqxw" (OuterVolumeSpecName: "kube-api-access-8kqxw") pod "bf87699d-b4ef-43ee-a37c-bdf593a7fd24" (UID: "bf87699d-b4ef-43ee-a37c-bdf593a7fd24"). InnerVolumeSpecName "kube-api-access-8kqxw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:45:02 crc kubenswrapper[4991]: I0929 09:45:02.721885 4991 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf87699d-b4ef-43ee-a37c-bdf593a7fd24-config-volume\") on node \"crc\" DevicePath \"\"" Sep 29 09:45:02 crc kubenswrapper[4991]: I0929 09:45:02.721941 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kqxw\" (UniqueName: \"kubernetes.io/projected/bf87699d-b4ef-43ee-a37c-bdf593a7fd24-kube-api-access-8kqxw\") on node \"crc\" DevicePath \"\"" Sep 29 09:45:02 crc kubenswrapper[4991]: I0929 09:45:02.721977 4991 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf87699d-b4ef-43ee-a37c-bdf593a7fd24-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 29 09:45:03 crc kubenswrapper[4991]: I0929 09:45:03.178012 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29318985-gwtmc" event={"ID":"bf87699d-b4ef-43ee-a37c-bdf593a7fd24","Type":"ContainerDied","Data":"cb94e67ba5f12db348ea6abacf0f9269e8e6343d7683baa151e33be094abe222"} Sep 29 09:45:03 crc kubenswrapper[4991]: I0929 09:45:03.178071 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb94e67ba5f12db348ea6abacf0f9269e8e6343d7683baa151e33be094abe222" Sep 29 09:45:03 crc kubenswrapper[4991]: I0929 09:45:03.178187 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29318985-gwtmc" Sep 29 09:47:07 crc kubenswrapper[4991]: I0929 09:47:07.947476 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 09:47:07 crc kubenswrapper[4991]: I0929 09:47:07.948075 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 09:47:30 crc kubenswrapper[4991]: I0929 09:47:30.505736 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6"] Sep 29 09:47:30 crc kubenswrapper[4991]: E0929 09:47:30.506903 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf87699d-b4ef-43ee-a37c-bdf593a7fd24" containerName="collect-profiles" Sep 29 09:47:30 crc kubenswrapper[4991]: I0929 09:47:30.506919 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf87699d-b4ef-43ee-a37c-bdf593a7fd24" containerName="collect-profiles" Sep 29 09:47:30 crc kubenswrapper[4991]: I0929 09:47:30.507103 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf87699d-b4ef-43ee-a37c-bdf593a7fd24" containerName="collect-profiles" Sep 29 09:47:30 crc kubenswrapper[4991]: I0929 09:47:30.508278 4991 util.go:30] "No sandbox for pod can be found. 
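
The RemoveStaleState trio above (cpu_manager, state_mem, memory_manager) fires when a new pod is admitted: each resource manager drops checkpointed assignments belonging to containers no longer in the active set, so state left behind by the finished collect-profiles job cannot leak into the incoming marketplace pod. A sketch of that sweep, under the assumption (mine, not the source's) that the checkpoint is a simple map keyed by pod UID and container name:

package main

import "fmt"

type key struct {
	podUID    string
	container string
}

func main() {
	// Checkpointed assignments; the collect-profiles pod is already deleted.
	assignments := map[key]string{
		{podUID: "bf87699d-b4ef-43ee-a37c-bdf593a7fd24", container: "collect-profiles"}: "cpus 0-1",
	}
	// Active set: the pod being admitted now.
	active := map[string]bool{"f07ef33b-dcdb-4706-ab42-e881185cea81": true}

	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
			delete(assignments, k) // deleting during range is safe in Go
		}
	}
}
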
Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6" Sep 29 09:47:30 crc kubenswrapper[4991]: I0929 09:47:30.510851 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Sep 29 09:47:30 crc kubenswrapper[4991]: I0929 09:47:30.521857 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6"] Sep 29 09:47:30 crc kubenswrapper[4991]: I0929 09:47:30.611680 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f07ef33b-dcdb-4706-ab42-e881185cea81-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6\" (UID: \"f07ef33b-dcdb-4706-ab42-e881185cea81\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6" Sep 29 09:47:30 crc kubenswrapper[4991]: I0929 09:47:30.612339 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztgvv\" (UniqueName: \"kubernetes.io/projected/f07ef33b-dcdb-4706-ab42-e881185cea81-kube-api-access-ztgvv\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6\" (UID: \"f07ef33b-dcdb-4706-ab42-e881185cea81\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6" Sep 29 09:47:30 crc kubenswrapper[4991]: I0929 09:47:30.612633 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f07ef33b-dcdb-4706-ab42-e881185cea81-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6\" (UID: \"f07ef33b-dcdb-4706-ab42-e881185cea81\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6" Sep 29 09:47:30 crc kubenswrapper[4991]: I0929 09:47:30.715476 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f07ef33b-dcdb-4706-ab42-e881185cea81-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6\" (UID: \"f07ef33b-dcdb-4706-ab42-e881185cea81\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6" Sep 29 09:47:30 crc kubenswrapper[4991]: I0929 09:47:30.715608 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f07ef33b-dcdb-4706-ab42-e881185cea81-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6\" (UID: \"f07ef33b-dcdb-4706-ab42-e881185cea81\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6" Sep 29 09:47:30 crc kubenswrapper[4991]: I0929 09:47:30.715699 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztgvv\" (UniqueName: \"kubernetes.io/projected/f07ef33b-dcdb-4706-ab42-e881185cea81-kube-api-access-ztgvv\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6\" (UID: \"f07ef33b-dcdb-4706-ab42-e881185cea81\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6" Sep 29 09:47:30 crc kubenswrapper[4991]: I0929 09:47:30.716609 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f07ef33b-dcdb-4706-ab42-e881185cea81-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6\" (UID: \"f07ef33b-dcdb-4706-ab42-e881185cea81\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6" Sep 29 09:47:30 crc kubenswrapper[4991]: I0929 09:47:30.716666 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f07ef33b-dcdb-4706-ab42-e881185cea81-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6\" (UID: \"f07ef33b-dcdb-4706-ab42-e881185cea81\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6" Sep 29 09:47:30 crc kubenswrapper[4991]: I0929 09:47:30.733504 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztgvv\" (UniqueName: \"kubernetes.io/projected/f07ef33b-dcdb-4706-ab42-e881185cea81-kube-api-access-ztgvv\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6\" (UID: \"f07ef33b-dcdb-4706-ab42-e881185cea81\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6" Sep 29 09:47:30 crc kubenswrapper[4991]: I0929 09:47:30.885484 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6" Sep 29 09:47:31 crc kubenswrapper[4991]: I0929 09:47:31.176183 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6"] Sep 29 09:47:31 crc kubenswrapper[4991]: I0929 09:47:31.286473 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6" event={"ID":"f07ef33b-dcdb-4706-ab42-e881185cea81","Type":"ContainerStarted","Data":"868e6176cf9a279f0fca841e9cc041831da902c1f58a4545a4f9a82685b99776"} Sep 29 09:47:31 crc kubenswrapper[4991]: E0929 09:47:31.767423 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf07ef33b_dcdb_4706_ab42_e881185cea81.slice/crio-conmon-222bbfeb9296582b122e8f3c9b2663f71f74147ac0cbfeff2c33e1f516e105ab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf07ef33b_dcdb_4706_ab42_e881185cea81.slice/crio-222bbfeb9296582b122e8f3c9b2663f71f74147ac0cbfeff2c33e1f516e105ab.scope\": RecentStats: unable to find data in memory cache]" Sep 29 09:47:32 crc kubenswrapper[4991]: I0929 09:47:32.295521 4991 generic.go:334] "Generic (PLEG): container finished" podID="f07ef33b-dcdb-4706-ab42-e881185cea81" containerID="222bbfeb9296582b122e8f3c9b2663f71f74147ac0cbfeff2c33e1f516e105ab" exitCode=0 Sep 29 09:47:32 crc kubenswrapper[4991]: I0929 09:47:32.295589 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6" event={"ID":"f07ef33b-dcdb-4706-ab42-e881185cea81","Type":"ContainerDied","Data":"222bbfeb9296582b122e8f3c9b2663f71f74147ac0cbfeff2c33e1f516e105ab"} Sep 29 09:47:32 crc kubenswrapper[4991]: I0929 09:47:32.297890 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 29 09:47:35 crc kubenswrapper[4991]: I0929 09:47:35.317708 4991 generic.go:334] "Generic (PLEG): 
container finished" podID="f07ef33b-dcdb-4706-ab42-e881185cea81" containerID="58122ca30342746198668b1c34b7638b606b1113d2433f4da8a350eaee5bd688" exitCode=0 Sep 29 09:47:35 crc kubenswrapper[4991]: I0929 09:47:35.318219 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6" event={"ID":"f07ef33b-dcdb-4706-ab42-e881185cea81","Type":"ContainerDied","Data":"58122ca30342746198668b1c34b7638b606b1113d2433f4da8a350eaee5bd688"} Sep 29 09:47:36 crc kubenswrapper[4991]: I0929 09:47:36.328577 4991 generic.go:334] "Generic (PLEG): container finished" podID="f07ef33b-dcdb-4706-ab42-e881185cea81" containerID="50f5a525859dcd597d4670ded94ce5e409417fb07ac283d048d58f811e443805" exitCode=0 Sep 29 09:47:36 crc kubenswrapper[4991]: I0929 09:47:36.328698 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6" event={"ID":"f07ef33b-dcdb-4706-ab42-e881185cea81","Type":"ContainerDied","Data":"50f5a525859dcd597d4670ded94ce5e409417fb07ac283d048d58f811e443805"} Sep 29 09:47:37 crc kubenswrapper[4991]: I0929 09:47:37.662429 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6" Sep 29 09:47:37 crc kubenswrapper[4991]: I0929 09:47:37.738493 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f07ef33b-dcdb-4706-ab42-e881185cea81-util\") pod \"f07ef33b-dcdb-4706-ab42-e881185cea81\" (UID: \"f07ef33b-dcdb-4706-ab42-e881185cea81\") " Sep 29 09:47:37 crc kubenswrapper[4991]: I0929 09:47:37.738564 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f07ef33b-dcdb-4706-ab42-e881185cea81-bundle\") pod \"f07ef33b-dcdb-4706-ab42-e881185cea81\" (UID: \"f07ef33b-dcdb-4706-ab42-e881185cea81\") " Sep 29 09:47:37 crc kubenswrapper[4991]: I0929 09:47:37.738638 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztgvv\" (UniqueName: \"kubernetes.io/projected/f07ef33b-dcdb-4706-ab42-e881185cea81-kube-api-access-ztgvv\") pod \"f07ef33b-dcdb-4706-ab42-e881185cea81\" (UID: \"f07ef33b-dcdb-4706-ab42-e881185cea81\") " Sep 29 09:47:37 crc kubenswrapper[4991]: I0929 09:47:37.741210 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f07ef33b-dcdb-4706-ab42-e881185cea81-bundle" (OuterVolumeSpecName: "bundle") pod "f07ef33b-dcdb-4706-ab42-e881185cea81" (UID: "f07ef33b-dcdb-4706-ab42-e881185cea81"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:47:37 crc kubenswrapper[4991]: I0929 09:47:37.748503 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f07ef33b-dcdb-4706-ab42-e881185cea81-kube-api-access-ztgvv" (OuterVolumeSpecName: "kube-api-access-ztgvv") pod "f07ef33b-dcdb-4706-ab42-e881185cea81" (UID: "f07ef33b-dcdb-4706-ab42-e881185cea81"). InnerVolumeSpecName "kube-api-access-ztgvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:47:37 crc kubenswrapper[4991]: I0929 09:47:37.758750 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f07ef33b-dcdb-4706-ab42-e881185cea81-util" (OuterVolumeSpecName: "util") pod "f07ef33b-dcdb-4706-ab42-e881185cea81" (UID: "f07ef33b-dcdb-4706-ab42-e881185cea81"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:47:37 crc kubenswrapper[4991]: I0929 09:47:37.840657 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztgvv\" (UniqueName: \"kubernetes.io/projected/f07ef33b-dcdb-4706-ab42-e881185cea81-kube-api-access-ztgvv\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:37 crc kubenswrapper[4991]: I0929 09:47:37.840690 4991 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f07ef33b-dcdb-4706-ab42-e881185cea81-util\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:37 crc kubenswrapper[4991]: I0929 09:47:37.840702 4991 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f07ef33b-dcdb-4706-ab42-e881185cea81-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:37 crc kubenswrapper[4991]: I0929 09:47:37.947521 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 09:47:37 crc kubenswrapper[4991]: I0929 09:47:37.947686 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 09:47:38 crc kubenswrapper[4991]: I0929 09:47:38.345915 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6" event={"ID":"f07ef33b-dcdb-4706-ab42-e881185cea81","Type":"ContainerDied","Data":"868e6176cf9a279f0fca841e9cc041831da902c1f58a4545a4f9a82685b99776"} Sep 29 09:47:38 crc kubenswrapper[4991]: I0929 09:47:38.345976 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="868e6176cf9a279f0fca841e9cc041831da902c1f58a4545a4f9a82685b99776" Sep 29 09:47:38 crc kubenswrapper[4991]: I0929 09:47:38.346035 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6" Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.152421 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h4hm4"] Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.153170 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovn-controller" containerID="cri-o://abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77" gracePeriod=30 Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.153343 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="sbdb" containerID="cri-o://210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7" gracePeriod=30 Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.153411 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="nbdb" containerID="cri-o://45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e" gracePeriod=30 Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.153440 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="kube-rbac-proxy-node" containerID="cri-o://ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd" gracePeriod=30 Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.153462 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovn-acl-logging" containerID="cri-o://37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188" gracePeriod=30 Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.153604 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="northd" containerID="cri-o://95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc" gracePeriod=30 Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.153641 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc" gracePeriod=30 Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.211594 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovnkube-controller" containerID="cri-o://156a735b6aa5d10b99e6e4177ae054bed904a9b1c86784ae54a887fa808701b2" gracePeriod=30 Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.373548 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovnkube-controller/3.log" Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.375369 4991 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovn-acl-logging/0.log" Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.375770 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovn-controller/0.log" Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.376068 4991 generic.go:334] "Generic (PLEG): container finished" podID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerID="156a735b6aa5d10b99e6e4177ae054bed904a9b1c86784ae54a887fa808701b2" exitCode=0 Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.376087 4991 generic.go:334] "Generic (PLEG): container finished" podID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerID="37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188" exitCode=143 Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.376096 4991 generic.go:334] "Generic (PLEG): container finished" podID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerID="abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77" exitCode=143 Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.376138 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerDied","Data":"156a735b6aa5d10b99e6e4177ae054bed904a9b1c86784ae54a887fa808701b2"} Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.376165 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerDied","Data":"37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188"} Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.376175 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerDied","Data":"abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77"} Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.376197 4991 scope.go:117] "RemoveContainer" containerID="ce2f8d15feab9327b16d994783fdfafd8a8eb66e985bcf1aa1306160b78390fa" Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.378153 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mm67g_f36a89bf-ee7b-4bf7-bc61-9ea099661bd1/kube-multus/2.log" Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.385278 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mm67g_f36a89bf-ee7b-4bf7-bc61-9ea099661bd1/kube-multus/1.log" Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.385327 4991 generic.go:334] "Generic (PLEG): container finished" podID="f36a89bf-ee7b-4bf7-bc61-9ea099661bd1" containerID="216d8c67e1329d41aef834e8edeaec99ed60a5fa420c9046d51851a12bf5080f" exitCode=2 Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.385355 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mm67g" event={"ID":"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1","Type":"ContainerDied","Data":"216d8c67e1329d41aef834e8edeaec99ed60a5fa420c9046d51851a12bf5080f"} Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.386030 4991 scope.go:117] "RemoveContainer" containerID="216d8c67e1329d41aef834e8edeaec99ed60a5fa420c9046d51851a12bf5080f" Sep 29 09:47:42 crc kubenswrapper[4991]: E0929 09:47:42.386251 4991 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-mm67g_openshift-multus(f36a89bf-ee7b-4bf7-bc61-9ea099661bd1)\"" pod="openshift-multus/multus-mm67g" podUID="f36a89bf-ee7b-4bf7-bc61-9ea099661bd1" Sep 29 09:47:42 crc kubenswrapper[4991]: I0929 09:47:42.416684 4991 scope.go:117] "RemoveContainer" containerID="93b598d670e53dad1dc4fbce312b96e5b3135101a1dd6b1b7440d8e70da068f0" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.393283 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mm67g_f36a89bf-ee7b-4bf7-bc61-9ea099661bd1/kube-multus/2.log" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.397569 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovn-acl-logging/0.log" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.398003 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovn-controller/0.log" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.398319 4991 generic.go:334] "Generic (PLEG): container finished" podID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerID="210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7" exitCode=0 Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.398344 4991 generic.go:334] "Generic (PLEG): container finished" podID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerID="45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e" exitCode=0 Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.398353 4991 generic.go:334] "Generic (PLEG): container finished" podID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerID="95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc" exitCode=0 Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.398361 4991 generic.go:334] "Generic (PLEG): container finished" podID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerID="52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc" exitCode=0 Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.398369 4991 generic.go:334] "Generic (PLEG): container finished" podID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerID="ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd" exitCode=0 Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.398388 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerDied","Data":"210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7"} Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.398438 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerDied","Data":"45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e"} Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.398452 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerDied","Data":"95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc"} Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.398465 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" 
event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerDied","Data":"52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc"} Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.398476 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerDied","Data":"ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd"} Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.852375 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovn-acl-logging/0.log" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.852748 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovn-controller/0.log" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.853072 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.912467 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mcw5r"] Sep 29 09:47:43 crc kubenswrapper[4991]: E0929 09:47:43.912738 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovn-acl-logging" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.912759 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovn-acl-logging" Sep 29 09:47:43 crc kubenswrapper[4991]: E0929 09:47:43.912771 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovnkube-controller" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.912779 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovnkube-controller" Sep 29 09:47:43 crc kubenswrapper[4991]: E0929 09:47:43.912787 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovnkube-controller" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.912794 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovnkube-controller" Sep 29 09:47:43 crc kubenswrapper[4991]: E0929 09:47:43.912807 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f07ef33b-dcdb-4706-ab42-e881185cea81" containerName="extract" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.912814 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="f07ef33b-dcdb-4706-ab42-e881185cea81" containerName="extract" Sep 29 09:47:43 crc kubenswrapper[4991]: E0929 09:47:43.912826 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="kube-rbac-proxy-node" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.912834 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="kube-rbac-proxy-node" Sep 29 09:47:43 crc kubenswrapper[4991]: E0929 09:47:43.912844 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="nbdb" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.912852 4991 
state_mem.go:107] "Deleted CPUSet assignment" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="nbdb" Sep 29 09:47:43 crc kubenswrapper[4991]: E0929 09:47:43.912862 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovnkube-controller" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.912868 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovnkube-controller" Sep 29 09:47:43 crc kubenswrapper[4991]: E0929 09:47:43.912877 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="northd" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.912884 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="northd" Sep 29 09:47:43 crc kubenswrapper[4991]: E0929 09:47:43.912895 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovn-controller" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.912902 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovn-controller" Sep 29 09:47:43 crc kubenswrapper[4991]: E0929 09:47:43.912911 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f07ef33b-dcdb-4706-ab42-e881185cea81" containerName="util" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.912918 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="f07ef33b-dcdb-4706-ab42-e881185cea81" containerName="util" Sep 29 09:47:43 crc kubenswrapper[4991]: E0929 09:47:43.912929 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="kubecfg-setup" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.912936 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="kubecfg-setup" Sep 29 09:47:43 crc kubenswrapper[4991]: E0929 09:47:43.912968 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="kube-rbac-proxy-ovn-metrics" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.912977 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="kube-rbac-proxy-ovn-metrics" Sep 29 09:47:43 crc kubenswrapper[4991]: E0929 09:47:43.912985 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f07ef33b-dcdb-4706-ab42-e881185cea81" containerName="pull" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.912991 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="f07ef33b-dcdb-4706-ab42-e881185cea81" containerName="pull" Sep 29 09:47:43 crc kubenswrapper[4991]: E0929 09:47:43.913000 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="sbdb" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.913007 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="sbdb" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.913119 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovnkube-controller" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.913130 4991 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="kube-rbac-proxy-ovn-metrics" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.913140 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="sbdb" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.913149 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovn-acl-logging" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.913159 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovnkube-controller" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.913167 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="f07ef33b-dcdb-4706-ab42-e881185cea81" containerName="extract" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.913178 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovnkube-controller" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.913188 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovnkube-controller" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.913199 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="nbdb" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.913206 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="kube-rbac-proxy-node" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.913219 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="northd" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.913227 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovn-controller" Sep 29 09:47:43 crc kubenswrapper[4991]: E0929 09:47:43.913341 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovnkube-controller" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.913350 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovnkube-controller" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.913463 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovnkube-controller" Sep 29 09:47:43 crc kubenswrapper[4991]: E0929 09:47:43.913615 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovnkube-controller" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.913625 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" containerName="ovnkube-controller" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.915637 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.932779 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/14c96fef-6218-4f25-8f81-f7adc934b0d5-ovnkube-config\") pod \"14c96fef-6218-4f25-8f81-f7adc934b0d5\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.932831 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/14c96fef-6218-4f25-8f81-f7adc934b0d5-ovn-node-metrics-cert\") pod \"14c96fef-6218-4f25-8f81-f7adc934b0d5\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.932859 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-run-systemd\") pod \"14c96fef-6218-4f25-8f81-f7adc934b0d5\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.932891 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-run-ovn\") pod \"14c96fef-6218-4f25-8f81-f7adc934b0d5\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.932922 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/14c96fef-6218-4f25-8f81-f7adc934b0d5-ovnkube-script-lib\") pod \"14c96fef-6218-4f25-8f81-f7adc934b0d5\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.932939 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-node-log\") pod \"14c96fef-6218-4f25-8f81-f7adc934b0d5\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.933046 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-etc-openvswitch\") pod \"14c96fef-6218-4f25-8f81-f7adc934b0d5\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.933077 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-run-netns\") pod \"14c96fef-6218-4f25-8f81-f7adc934b0d5\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.933097 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-run-openvswitch\") pod \"14c96fef-6218-4f25-8f81-f7adc934b0d5\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.933129 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78tp2\" (UniqueName: \"kubernetes.io/projected/14c96fef-6218-4f25-8f81-f7adc934b0d5-kube-api-access-78tp2\") pod 
\"14c96fef-6218-4f25-8f81-f7adc934b0d5\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.933148 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-run-ovn-kubernetes\") pod \"14c96fef-6218-4f25-8f81-f7adc934b0d5\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.933174 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-kubelet\") pod \"14c96fef-6218-4f25-8f81-f7adc934b0d5\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.933194 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-log-socket\") pod \"14c96fef-6218-4f25-8f81-f7adc934b0d5\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.933224 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-cni-bin\") pod \"14c96fef-6218-4f25-8f81-f7adc934b0d5\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.933242 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"14c96fef-6218-4f25-8f81-f7adc934b0d5\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.933258 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-systemd-units\") pod \"14c96fef-6218-4f25-8f81-f7adc934b0d5\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.933277 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-slash\") pod \"14c96fef-6218-4f25-8f81-f7adc934b0d5\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.933297 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-var-lib-openvswitch\") pod \"14c96fef-6218-4f25-8f81-f7adc934b0d5\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.933315 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-cni-netd\") pod \"14c96fef-6218-4f25-8f81-f7adc934b0d5\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.933332 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/14c96fef-6218-4f25-8f81-f7adc934b0d5-env-overrides\") pod \"14c96fef-6218-4f25-8f81-f7adc934b0d5\" (UID: \"14c96fef-6218-4f25-8f81-f7adc934b0d5\") " Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.933917 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14c96fef-6218-4f25-8f81-f7adc934b0d5-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "14c96fef-6218-4f25-8f81-f7adc934b0d5" (UID: "14c96fef-6218-4f25-8f81-f7adc934b0d5"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.934180 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "14c96fef-6218-4f25-8f81-f7adc934b0d5" (UID: "14c96fef-6218-4f25-8f81-f7adc934b0d5"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.934215 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14c96fef-6218-4f25-8f81-f7adc934b0d5-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "14c96fef-6218-4f25-8f81-f7adc934b0d5" (UID: "14c96fef-6218-4f25-8f81-f7adc934b0d5"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.934229 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "14c96fef-6218-4f25-8f81-f7adc934b0d5" (UID: "14c96fef-6218-4f25-8f81-f7adc934b0d5"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.934267 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-slash" (OuterVolumeSpecName: "host-slash") pod "14c96fef-6218-4f25-8f81-f7adc934b0d5" (UID: "14c96fef-6218-4f25-8f81-f7adc934b0d5"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.934272 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "14c96fef-6218-4f25-8f81-f7adc934b0d5" (UID: "14c96fef-6218-4f25-8f81-f7adc934b0d5"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.934249 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "14c96fef-6218-4f25-8f81-f7adc934b0d5" (UID: "14c96fef-6218-4f25-8f81-f7adc934b0d5"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.934252 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-log-socket" (OuterVolumeSpecName: "log-socket") pod "14c96fef-6218-4f25-8f81-f7adc934b0d5" (UID: "14c96fef-6218-4f25-8f81-f7adc934b0d5"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.934294 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "14c96fef-6218-4f25-8f81-f7adc934b0d5" (UID: "14c96fef-6218-4f25-8f81-f7adc934b0d5"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.934299 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "14c96fef-6218-4f25-8f81-f7adc934b0d5" (UID: "14c96fef-6218-4f25-8f81-f7adc934b0d5"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.934316 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "14c96fef-6218-4f25-8f81-f7adc934b0d5" (UID: "14c96fef-6218-4f25-8f81-f7adc934b0d5"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.934321 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "14c96fef-6218-4f25-8f81-f7adc934b0d5" (UID: "14c96fef-6218-4f25-8f81-f7adc934b0d5"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.934328 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "14c96fef-6218-4f25-8f81-f7adc934b0d5" (UID: "14c96fef-6218-4f25-8f81-f7adc934b0d5"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.934344 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-node-log" (OuterVolumeSpecName: "node-log") pod "14c96fef-6218-4f25-8f81-f7adc934b0d5" (UID: "14c96fef-6218-4f25-8f81-f7adc934b0d5"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.934351 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "14c96fef-6218-4f25-8f81-f7adc934b0d5" (UID: "14c96fef-6218-4f25-8f81-f7adc934b0d5"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.934361 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "14c96fef-6218-4f25-8f81-f7adc934b0d5" (UID: "14c96fef-6218-4f25-8f81-f7adc934b0d5"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.934513 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14c96fef-6218-4f25-8f81-f7adc934b0d5-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "14c96fef-6218-4f25-8f81-f7adc934b0d5" (UID: "14c96fef-6218-4f25-8f81-f7adc934b0d5"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.939683 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14c96fef-6218-4f25-8f81-f7adc934b0d5-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "14c96fef-6218-4f25-8f81-f7adc934b0d5" (UID: "14c96fef-6218-4f25-8f81-f7adc934b0d5"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.939776 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c96fef-6218-4f25-8f81-f7adc934b0d5-kube-api-access-78tp2" (OuterVolumeSpecName: "kube-api-access-78tp2") pod "14c96fef-6218-4f25-8f81-f7adc934b0d5" (UID: "14c96fef-6218-4f25-8f81-f7adc934b0d5"). InnerVolumeSpecName "kube-api-access-78tp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:47:43 crc kubenswrapper[4991]: I0929 09:47:43.955431 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "14c96fef-6218-4f25-8f81-f7adc934b0d5" (UID: "14c96fef-6218-4f25-8f81-f7adc934b0d5"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.035241 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-etc-openvswitch\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.035303 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-host-slash\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.035369 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.035901 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-ovnkube-config\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.035947 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-systemd-units\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.036005 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-var-lib-openvswitch\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.036025 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-node-log\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.036053 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-host-cni-netd\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.036116 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-run-openvswitch\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.036134 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4ks4\" (UniqueName: \"kubernetes.io/projected/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-kube-api-access-t4ks4\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.036166 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-run-ovn\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.036197 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-host-cni-bin\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.036216 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-ovn-node-metrics-cert\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.036264 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-run-systemd\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.036300 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-env-overrides\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.036760 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-host-kubelet\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.036938 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-ovnkube-script-lib\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037078 4991 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-log-socket\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037164 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-host-run-ovn-kubernetes\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037240 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-host-run-netns\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037356 4991 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-log-socket\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037375 4991 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-cni-bin\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037389 4991 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037404 4991 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-systemd-units\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037418 4991 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-slash\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037428 4991 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037440 4991 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-cni-netd\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037452 4991 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/14c96fef-6218-4f25-8f81-f7adc934b0d5-env-overrides\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037463 4991 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/14c96fef-6218-4f25-8f81-f7adc934b0d5-ovnkube-config\") on node 
\"crc\" DevicePath \"\"" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037474 4991 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/14c96fef-6218-4f25-8f81-f7adc934b0d5-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037755 4991 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-run-systemd\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037774 4991 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-run-ovn\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037784 4991 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/14c96fef-6218-4f25-8f81-f7adc934b0d5-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037797 4991 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-node-log\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037809 4991 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037818 4991 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-run-netns\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037826 4991 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-run-openvswitch\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037838 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78tp2\" (UniqueName: \"kubernetes.io/projected/14c96fef-6218-4f25-8f81-f7adc934b0d5-kube-api-access-78tp2\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037848 4991 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.037856 4991 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/14c96fef-6218-4f25-8f81-f7adc934b0d5-host-kubelet\") on node \"crc\" DevicePath \"\"" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.138486 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-host-cni-netd\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.138537 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-run-openvswitch\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.138556 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4ks4\" (UniqueName: \"kubernetes.io/projected/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-kube-api-access-t4ks4\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.138579 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-run-ovn\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.138595 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-host-cni-bin\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.138610 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-ovn-node-metrics-cert\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.138621 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-host-cni-netd\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.138632 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-env-overrides\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.138702 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-run-systemd\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.138731 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-host-kubelet\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.138779 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-ovnkube-script-lib\") pod 
\"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.138823 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-log-socket\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.138864 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-host-run-ovn-kubernetes\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.138903 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-host-run-netns\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.138937 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-etc-openvswitch\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.138970 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-host-slash\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.138998 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.139021 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-ovnkube-config\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.139045 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-systemd-units\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.139065 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-var-lib-openvswitch\") pod \"ovnkube-node-mcw5r\" (UID: 
\"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.139083 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-node-log\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.139154 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-node-log\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.139171 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-env-overrides\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.139192 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-host-kubelet\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.139175 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-run-systemd\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.139230 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-run-openvswitch\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.139675 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-run-ovn\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.139705 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-host-cni-bin\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.139738 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-ovnkube-script-lib\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.139774 4991 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-log-socket\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.139796 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-host-run-ovn-kubernetes\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.139817 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-host-run-netns\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.139837 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-etc-openvswitch\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.139858 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-host-slash\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.139878 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.140017 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-systemd-units\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.140058 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-var-lib-openvswitch\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.140297 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-ovnkube-config\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.142767 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-ovn-node-metrics-cert\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.161464 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4ks4\" (UniqueName: \"kubernetes.io/projected/685bfbdd-1b8f-4568-8e56-0eb4e8612eee-kube-api-access-t4ks4\") pod \"ovnkube-node-mcw5r\" (UID: \"685bfbdd-1b8f-4568-8e56-0eb4e8612eee\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.229876 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.410655 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovn-acl-logging/0.log" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.411427 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h4hm4_14c96fef-6218-4f25-8f81-f7adc934b0d5/ovn-controller/0.log" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.411785 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" event={"ID":"14c96fef-6218-4f25-8f81-f7adc934b0d5","Type":"ContainerDied","Data":"a3a70f4e267b6417cbbcadce1c6a83e12aa013bc0d4b6b42815c06bcc3f06dc0"} Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.411877 4991 scope.go:117] "RemoveContainer" containerID="156a735b6aa5d10b99e6e4177ae054bed904a9b1c86784ae54a887fa808701b2" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.411820 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h4hm4" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.412870 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" event={"ID":"685bfbdd-1b8f-4568-8e56-0eb4e8612eee","Type":"ContainerStarted","Data":"93798dca39cbb8bd2ee4b4b652727d3a79f5466568426416ea454ae8263cc8dc"} Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.425306 4991 scope.go:117] "RemoveContainer" containerID="210ea1e22527d06b8b226105931d16824c00a910c097f6371f8278b0fb0045d7" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.492969 4991 scope.go:117] "RemoveContainer" containerID="45dec787cf69fe0ca00fc322b989c8d32ddcc386ad387409e1311ad692405c9e" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.505776 4991 scope.go:117] "RemoveContainer" containerID="95f04fae4d0f078db0e355b135dc6fcb55b5fa641bac3736a9f3a66dc5b5c5bc" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.512004 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h4hm4"] Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.517856 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h4hm4"] Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.519399 4991 scope.go:117] "RemoveContainer" containerID="52a1e0afbc3bd9b36c01e80b8d3f57039fbe8e6a7ce1f9c8b16183ab048098dc" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.532022 4991 scope.go:117] "RemoveContainer" containerID="ca4d268d73a895612d6a23c22d0dc65ca428c9ec55a9aea8a67b506016f12abd" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.563479 4991 scope.go:117] "RemoveContainer" containerID="37e78776d4e866a0ddc5a872fcb2e58715914d23a9a80823a7891bb32238e188" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.575633 4991 scope.go:117] "RemoveContainer" containerID="abf8e8a2bd13340899f8e92030d66ddaf60e727b27615c1ccc6dfa2f659d3e77" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.608562 4991 scope.go:117] "RemoveContainer" containerID="4d304e231c351333c8cf7b577552ca0a18a53abda19696afc405f852d0d34880" Sep 29 09:47:44 crc kubenswrapper[4991]: I0929 09:47:44.933982 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c96fef-6218-4f25-8f81-f7adc934b0d5" path="/var/lib/kubelet/pods/14c96fef-6218-4f25-8f81-f7adc934b0d5/volumes" Sep 29 09:47:45 crc kubenswrapper[4991]: I0929 09:47:45.418479 4991 generic.go:334] "Generic (PLEG): container finished" podID="685bfbdd-1b8f-4568-8e56-0eb4e8612eee" containerID="bd6255888ad63f4f461df03568f50c7c77c19c92c28854856c027d51a622c3f4" exitCode=0 Sep 29 09:47:45 crc kubenswrapper[4991]: I0929 09:47:45.418532 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" event={"ID":"685bfbdd-1b8f-4568-8e56-0eb4e8612eee","Type":"ContainerDied","Data":"bd6255888ad63f4f461df03568f50c7c77c19c92c28854856c027d51a622c3f4"} Sep 29 09:47:46 crc kubenswrapper[4991]: I0929 09:47:46.429020 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" event={"ID":"685bfbdd-1b8f-4568-8e56-0eb4e8612eee","Type":"ContainerStarted","Data":"d9e2d8bb98abf2ee7b936f309d83d1000a0cf0b5acead06c5f8168a0cf833be9"} Sep 29 09:47:46 crc kubenswrapper[4991]: I0929 09:47:46.429352 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" 
event={"ID":"685bfbdd-1b8f-4568-8e56-0eb4e8612eee","Type":"ContainerStarted","Data":"362b8a458c485a79dd29d8bbf10ae0d759652be1c5306f2e4d1c21a36879e91f"} Sep 29 09:47:46 crc kubenswrapper[4991]: I0929 09:47:46.429365 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" event={"ID":"685bfbdd-1b8f-4568-8e56-0eb4e8612eee","Type":"ContainerStarted","Data":"27dc35f45b3ee0a39dd6bcd72d8bc7aa1c40d702eff6a809f0cba3e28a796a13"} Sep 29 09:47:46 crc kubenswrapper[4991]: I0929 09:47:46.429375 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" event={"ID":"685bfbdd-1b8f-4568-8e56-0eb4e8612eee","Type":"ContainerStarted","Data":"c663908aa79f232e0176f7fc314ec2ac1d1afeaa8e1f67ce006d7ecbe1acd4e4"} Sep 29 09:47:46 crc kubenswrapper[4991]: I0929 09:47:46.429385 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" event={"ID":"685bfbdd-1b8f-4568-8e56-0eb4e8612eee","Type":"ContainerStarted","Data":"955353b34e45427ac0d1be0fc192a8709cc851d16cceeb235e820c1161300f65"} Sep 29 09:47:46 crc kubenswrapper[4991]: I0929 09:47:46.429397 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" event={"ID":"685bfbdd-1b8f-4568-8e56-0eb4e8612eee","Type":"ContainerStarted","Data":"34273f026f91856585ac3de621656e901953a817a17169dcab374c5dd3815982"} Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.654679 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9"] Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.656171 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.658047 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.662300 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-54pfw" Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.662645 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.787600 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz"] Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.788491 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.790736 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.798234 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-rztzv" Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.802677 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt"] Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.803526 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.806796 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st2f6\" (UniqueName: \"kubernetes.io/projected/ed45932c-01e6-4a92-be25-ab5f613eb208-kube-api-access-st2f6\") pod \"obo-prometheus-operator-7c8cf85677-bscm9\" (UID: \"ed45932c-01e6-4a92-be25-ab5f613eb208\") " pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.908417 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st2f6\" (UniqueName: \"kubernetes.io/projected/ed45932c-01e6-4a92-be25-ab5f613eb208-kube-api-access-st2f6\") pod \"obo-prometheus-operator-7c8cf85677-bscm9\" (UID: \"ed45932c-01e6-4a92-be25-ab5f613eb208\") " pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.908463 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f46f29d1-1ea0-498a-9724-466a64601373-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz\" (UID: \"f46f29d1-1ea0-498a-9724-466a64601373\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.908492 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f46f29d1-1ea0-498a-9724-466a64601373-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz\" (UID: \"f46f29d1-1ea0-498a-9724-466a64601373\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.908510 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/38bf7687-8e95-4b5c-b346-381d71189652-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt\" (UID: \"38bf7687-8e95-4b5c-b346-381d71189652\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.908524 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/38bf7687-8e95-4b5c-b346-381d71189652-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt\" (UID: \"38bf7687-8e95-4b5c-b346-381d71189652\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.927709 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st2f6\" (UniqueName: \"kubernetes.io/projected/ed45932c-01e6-4a92-be25-ab5f613eb208-kube-api-access-st2f6\") pod \"obo-prometheus-operator-7c8cf85677-bscm9\" (UID: \"ed45932c-01e6-4a92-be25-ab5f613eb208\") " pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.973827 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.975391 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-cc5f78dfc-l6njq"] Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.976397 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.982760 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Sep 29 09:47:47 crc kubenswrapper[4991]: I0929 09:47:47.982979 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-425s5" Sep 29 09:47:48 crc kubenswrapper[4991]: E0929 09:47:48.001635 4991 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-bscm9_openshift-operators_ed45932c-01e6-4a92-be25-ab5f613eb208_0(36bba52b59fa269fbbd70922b14ece9ed1d7ffce0d5fc90fc6269dad4a276462): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Sep 29 09:47:48 crc kubenswrapper[4991]: E0929 09:47:48.001701 4991 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-bscm9_openshift-operators_ed45932c-01e6-4a92-be25-ab5f613eb208_0(36bba52b59fa269fbbd70922b14ece9ed1d7ffce0d5fc90fc6269dad4a276462): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" Sep 29 09:47:48 crc kubenswrapper[4991]: E0929 09:47:48.001725 4991 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-bscm9_openshift-operators_ed45932c-01e6-4a92-be25-ab5f613eb208_0(36bba52b59fa269fbbd70922b14ece9ed1d7ffce0d5fc90fc6269dad4a276462): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" Sep 29 09:47:48 crc kubenswrapper[4991]: E0929 09:47:48.001770 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-7c8cf85677-bscm9_openshift-operators(ed45932c-01e6-4a92-be25-ab5f613eb208)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-7c8cf85677-bscm9_openshift-operators(ed45932c-01e6-4a92-be25-ab5f613eb208)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-bscm9_openshift-operators_ed45932c-01e6-4a92-be25-ab5f613eb208_0(36bba52b59fa269fbbd70922b14ece9ed1d7ffce0d5fc90fc6269dad4a276462): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" podUID="ed45932c-01e6-4a92-be25-ab5f613eb208" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.009668 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f46f29d1-1ea0-498a-9724-466a64601373-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz\" (UID: \"f46f29d1-1ea0-498a-9724-466a64601373\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.010440 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f46f29d1-1ea0-498a-9724-466a64601373-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz\" (UID: \"f46f29d1-1ea0-498a-9724-466a64601373\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.010500 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/38bf7687-8e95-4b5c-b346-381d71189652-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt\" (UID: \"38bf7687-8e95-4b5c-b346-381d71189652\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.010529 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/38bf7687-8e95-4b5c-b346-381d71189652-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt\" (UID: \"38bf7687-8e95-4b5c-b346-381d71189652\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.016486 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/38bf7687-8e95-4b5c-b346-381d71189652-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt\" (UID: \"38bf7687-8e95-4b5c-b346-381d71189652\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.026162 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/38bf7687-8e95-4b5c-b346-381d71189652-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt\" (UID: \"38bf7687-8e95-4b5c-b346-381d71189652\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.030370 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f46f29d1-1ea0-498a-9724-466a64601373-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz\" (UID: \"f46f29d1-1ea0-498a-9724-466a64601373\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.030728 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/f46f29d1-1ea0-498a-9724-466a64601373-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz\" (UID: \"f46f29d1-1ea0-498a-9724-466a64601373\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.101825 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.111668 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/58e0e494-e32c-4e1a-87b6-0d0dfafed095-observability-operator-tls\") pod \"observability-operator-cc5f78dfc-l6njq\" (UID: \"58e0e494-e32c-4e1a-87b6-0d0dfafed095\") " pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.111723 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s56bj\" (UniqueName: \"kubernetes.io/projected/58e0e494-e32c-4e1a-87b6-0d0dfafed095-kube-api-access-s56bj\") pod \"observability-operator-cc5f78dfc-l6njq\" (UID: \"58e0e494-e32c-4e1a-87b6-0d0dfafed095\") " pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.116318 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" Sep 29 09:47:48 crc kubenswrapper[4991]: E0929 09:47:48.137782 4991 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz_openshift-operators_f46f29d1-1ea0-498a-9724-466a64601373_0(6f5d96d4533b2d86a63f255cb9b1338f1ccab85f6f8f84a5e15fefaef7ec03dd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Sep 29 09:47:48 crc kubenswrapper[4991]: E0929 09:47:48.137860 4991 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz_openshift-operators_f46f29d1-1ea0-498a-9724-466a64601373_0(6f5d96d4533b2d86a63f255cb9b1338f1ccab85f6f8f84a5e15fefaef7ec03dd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" Sep 29 09:47:48 crc kubenswrapper[4991]: E0929 09:47:48.137883 4991 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz_openshift-operators_f46f29d1-1ea0-498a-9724-466a64601373_0(6f5d96d4533b2d86a63f255cb9b1338f1ccab85f6f8f84a5e15fefaef7ec03dd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" Sep 29 09:47:48 crc kubenswrapper[4991]: E0929 09:47:48.137938 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz_openshift-operators(f46f29d1-1ea0-498a-9724-466a64601373)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz_openshift-operators(f46f29d1-1ea0-498a-9724-466a64601373)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz_openshift-operators_f46f29d1-1ea0-498a-9724-466a64601373_0(6f5d96d4533b2d86a63f255cb9b1338f1ccab85f6f8f84a5e15fefaef7ec03dd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" podUID="f46f29d1-1ea0-498a-9724-466a64601373" Sep 29 09:47:48 crc kubenswrapper[4991]: E0929 09:47:48.159550 4991 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt_openshift-operators_38bf7687-8e95-4b5c-b346-381d71189652_0(83091aeb1c7aee18685b41c2bcfb25c06db2551d4129c677c7b9b5b05b0d70e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Sep 29 09:47:48 crc kubenswrapper[4991]: E0929 09:47:48.159627 4991 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt_openshift-operators_38bf7687-8e95-4b5c-b346-381d71189652_0(83091aeb1c7aee18685b41c2bcfb25c06db2551d4129c677c7b9b5b05b0d70e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" Sep 29 09:47:48 crc kubenswrapper[4991]: E0929 09:47:48.159646 4991 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt_openshift-operators_38bf7687-8e95-4b5c-b346-381d71189652_0(83091aeb1c7aee18685b41c2bcfb25c06db2551d4129c677c7b9b5b05b0d70e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" Sep 29 09:47:48 crc kubenswrapper[4991]: E0929 09:47:48.159688 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt_openshift-operators(38bf7687-8e95-4b5c-b346-381d71189652)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt_openshift-operators(38bf7687-8e95-4b5c-b346-381d71189652)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt_openshift-operators_38bf7687-8e95-4b5c-b346-381d71189652_0(83091aeb1c7aee18685b41c2bcfb25c06db2551d4129c677c7b9b5b05b0d70e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" podUID="38bf7687-8e95-4b5c-b346-381d71189652" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.193515 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-54bc95c9fb-bpxmw"] Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.196563 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.199481 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-766mb" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.212991 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s56bj\" (UniqueName: \"kubernetes.io/projected/58e0e494-e32c-4e1a-87b6-0d0dfafed095-kube-api-access-s56bj\") pod \"observability-operator-cc5f78dfc-l6njq\" (UID: \"58e0e494-e32c-4e1a-87b6-0d0dfafed095\") " pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.213030 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/58e0e494-e32c-4e1a-87b6-0d0dfafed095-observability-operator-tls\") pod \"observability-operator-cc5f78dfc-l6njq\" (UID: \"58e0e494-e32c-4e1a-87b6-0d0dfafed095\") " pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.217182 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/58e0e494-e32c-4e1a-87b6-0d0dfafed095-observability-operator-tls\") pod \"observability-operator-cc5f78dfc-l6njq\" (UID: \"58e0e494-e32c-4e1a-87b6-0d0dfafed095\") " pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.244729 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s56bj\" (UniqueName: \"kubernetes.io/projected/58e0e494-e32c-4e1a-87b6-0d0dfafed095-kube-api-access-s56bj\") pod \"observability-operator-cc5f78dfc-l6njq\" (UID: \"58e0e494-e32c-4e1a-87b6-0d0dfafed095\") " pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.314539 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zkpl\" (UniqueName: \"kubernetes.io/projected/801d6262-1556-4622-bc1e-16dc54de57ca-kube-api-access-7zkpl\") pod \"perses-operator-54bc95c9fb-bpxmw\" (UID: \"801d6262-1556-4622-bc1e-16dc54de57ca\") " pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.314845 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/801d6262-1556-4622-bc1e-16dc54de57ca-openshift-service-ca\") pod \"perses-operator-54bc95c9fb-bpxmw\" (UID: \"801d6262-1556-4622-bc1e-16dc54de57ca\") " pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.370773 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:47:48 crc kubenswrapper[4991]: E0929 09:47:48.398708 4991 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-cc5f78dfc-l6njq_openshift-operators_58e0e494-e32c-4e1a-87b6-0d0dfafed095_0(f746a533a759d5187f202c7696194461d9bfd089f04366f831333d290c0b7b00): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Sep 29 09:47:48 crc kubenswrapper[4991]: E0929 09:47:48.398772 4991 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-cc5f78dfc-l6njq_openshift-operators_58e0e494-e32c-4e1a-87b6-0d0dfafed095_0(f746a533a759d5187f202c7696194461d9bfd089f04366f831333d290c0b7b00): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:47:48 crc kubenswrapper[4991]: E0929 09:47:48.398793 4991 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-cc5f78dfc-l6njq_openshift-operators_58e0e494-e32c-4e1a-87b6-0d0dfafed095_0(f746a533a759d5187f202c7696194461d9bfd089f04366f831333d290c0b7b00): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:47:48 crc kubenswrapper[4991]: E0929 09:47:48.398838 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-cc5f78dfc-l6njq_openshift-operators(58e0e494-e32c-4e1a-87b6-0d0dfafed095)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-cc5f78dfc-l6njq_openshift-operators(58e0e494-e32c-4e1a-87b6-0d0dfafed095)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-cc5f78dfc-l6njq_openshift-operators_58e0e494-e32c-4e1a-87b6-0d0dfafed095_0(f746a533a759d5187f202c7696194461d9bfd089f04366f831333d290c0b7b00): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" podUID="58e0e494-e32c-4e1a-87b6-0d0dfafed095" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.416403 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zkpl\" (UniqueName: \"kubernetes.io/projected/801d6262-1556-4622-bc1e-16dc54de57ca-kube-api-access-7zkpl\") pod \"perses-operator-54bc95c9fb-bpxmw\" (UID: \"801d6262-1556-4622-bc1e-16dc54de57ca\") " pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.416467 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/801d6262-1556-4622-bc1e-16dc54de57ca-openshift-service-ca\") pod \"perses-operator-54bc95c9fb-bpxmw\" (UID: \"801d6262-1556-4622-bc1e-16dc54de57ca\") " pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.417381 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/801d6262-1556-4622-bc1e-16dc54de57ca-openshift-service-ca\") pod \"perses-operator-54bc95c9fb-bpxmw\" (UID: \"801d6262-1556-4622-bc1e-16dc54de57ca\") " pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.440641 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zkpl\" (UniqueName: \"kubernetes.io/projected/801d6262-1556-4622-bc1e-16dc54de57ca-kube-api-access-7zkpl\") pod \"perses-operator-54bc95c9fb-bpxmw\" (UID: \"801d6262-1556-4622-bc1e-16dc54de57ca\") " pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.444895 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" event={"ID":"685bfbdd-1b8f-4568-8e56-0eb4e8612eee","Type":"ContainerStarted","Data":"712bf77793ae19a475f0e1b898dcf55ee0601d57d187ff7cd1c6ee1576b980ee"} Sep 29 09:47:48 crc kubenswrapper[4991]: I0929 09:47:48.525170 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:47:48 crc kubenswrapper[4991]: E0929 09:47:48.547740 4991 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-bpxmw_openshift-operators_801d6262-1556-4622-bc1e-16dc54de57ca_0(431404b99f081eb4983e216a070f91cfc9245fbad59a1acb478880f5ff03f39c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Sep 29 09:47:48 crc kubenswrapper[4991]: E0929 09:47:48.547810 4991 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-bpxmw_openshift-operators_801d6262-1556-4622-bc1e-16dc54de57ca_0(431404b99f081eb4983e216a070f91cfc9245fbad59a1acb478880f5ff03f39c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:47:48 crc kubenswrapper[4991]: E0929 09:47:48.547834 4991 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-bpxmw_openshift-operators_801d6262-1556-4622-bc1e-16dc54de57ca_0(431404b99f081eb4983e216a070f91cfc9245fbad59a1acb478880f5ff03f39c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:47:48 crc kubenswrapper[4991]: E0929 09:47:48.547879 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-54bc95c9fb-bpxmw_openshift-operators(801d6262-1556-4622-bc1e-16dc54de57ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-54bc95c9fb-bpxmw_openshift-operators(801d6262-1556-4622-bc1e-16dc54de57ca)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-bpxmw_openshift-operators_801d6262-1556-4622-bc1e-16dc54de57ca_0(431404b99f081eb4983e216a070f91cfc9245fbad59a1acb478880f5ff03f39c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" podUID="801d6262-1556-4622-bc1e-16dc54de57ca" Sep 29 09:47:51 crc kubenswrapper[4991]: I0929 09:47:51.467594 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" event={"ID":"685bfbdd-1b8f-4568-8e56-0eb4e8612eee","Type":"ContainerStarted","Data":"b64fce7c8dcad26e549c63688e62887233dbbb7b9f5e243bdad73ca89b9d9c0c"} Sep 29 09:47:51 crc kubenswrapper[4991]: I0929 09:47:51.468169 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:51 crc kubenswrapper[4991]: I0929 09:47:51.468186 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:51 crc kubenswrapper[4991]: I0929 09:47:51.495627 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:51 crc kubenswrapper[4991]: I0929 09:47:51.498039 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" podStartSLOduration=8.498028129 podStartE2EDuration="8.498028129s" podCreationTimestamp="2025-09-29 09:47:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:47:51.49655146 +0000 UTC m=+607.352479488" watchObservedRunningTime="2025-09-29 09:47:51.498028129 +0000 UTC m=+607.353956157" Sep 29 09:47:51 crc kubenswrapper[4991]: I0929 09:47:51.602443 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9"] Sep 29 09:47:51 crc kubenswrapper[4991]: I0929 09:47:51.602558 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" Sep 29 09:47:51 crc kubenswrapper[4991]: I0929 09:47:51.603032 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" Sep 29 09:47:51 crc kubenswrapper[4991]: I0929 09:47:51.607675 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-cc5f78dfc-l6njq"] Sep 29 09:47:51 crc kubenswrapper[4991]: I0929 09:47:51.607756 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:47:51 crc kubenswrapper[4991]: I0929 09:47:51.608079 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:47:51 crc kubenswrapper[4991]: I0929 09:47:51.636138 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt"] Sep 29 09:47:51 crc kubenswrapper[4991]: I0929 09:47:51.636236 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" Sep 29 09:47:51 crc kubenswrapper[4991]: I0929 09:47:51.636634 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" Sep 29 09:47:51 crc kubenswrapper[4991]: E0929 09:47:51.656890 4991 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-bscm9_openshift-operators_ed45932c-01e6-4a92-be25-ab5f613eb208_0(f6f32798ac43205aedf9a32f46a44a53836737b2f5651793f2f57ec65f7216b5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Sep 29 09:47:51 crc kubenswrapper[4991]: E0929 09:47:51.656987 4991 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-bscm9_openshift-operators_ed45932c-01e6-4a92-be25-ab5f613eb208_0(f6f32798ac43205aedf9a32f46a44a53836737b2f5651793f2f57ec65f7216b5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" Sep 29 09:47:51 crc kubenswrapper[4991]: E0929 09:47:51.657015 4991 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-bscm9_openshift-operators_ed45932c-01e6-4a92-be25-ab5f613eb208_0(f6f32798ac43205aedf9a32f46a44a53836737b2f5651793f2f57ec65f7216b5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" Sep 29 09:47:51 crc kubenswrapper[4991]: E0929 09:47:51.657073 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-7c8cf85677-bscm9_openshift-operators(ed45932c-01e6-4a92-be25-ab5f613eb208)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-7c8cf85677-bscm9_openshift-operators(ed45932c-01e6-4a92-be25-ab5f613eb208)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-bscm9_openshift-operators_ed45932c-01e6-4a92-be25-ab5f613eb208_0(f6f32798ac43205aedf9a32f46a44a53836737b2f5651793f2f57ec65f7216b5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" podUID="ed45932c-01e6-4a92-be25-ab5f613eb208" Sep 29 09:47:51 crc kubenswrapper[4991]: E0929 09:47:51.664845 4991 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-cc5f78dfc-l6njq_openshift-operators_58e0e494-e32c-4e1a-87b6-0d0dfafed095_0(8d61a24dc560f620d7c1da93804f5ac284677afc966e5006056b73791c813931): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Sep 29 09:47:51 crc kubenswrapper[4991]: E0929 09:47:51.664933 4991 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-cc5f78dfc-l6njq_openshift-operators_58e0e494-e32c-4e1a-87b6-0d0dfafed095_0(8d61a24dc560f620d7c1da93804f5ac284677afc966e5006056b73791c813931): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:47:51 crc kubenswrapper[4991]: E0929 09:47:51.665025 4991 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-cc5f78dfc-l6njq_openshift-operators_58e0e494-e32c-4e1a-87b6-0d0dfafed095_0(8d61a24dc560f620d7c1da93804f5ac284677afc966e5006056b73791c813931): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:47:51 crc kubenswrapper[4991]: E0929 09:47:51.665196 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-cc5f78dfc-l6njq_openshift-operators(58e0e494-e32c-4e1a-87b6-0d0dfafed095)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-cc5f78dfc-l6njq_openshift-operators(58e0e494-e32c-4e1a-87b6-0d0dfafed095)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-cc5f78dfc-l6njq_openshift-operators_58e0e494-e32c-4e1a-87b6-0d0dfafed095_0(8d61a24dc560f620d7c1da93804f5ac284677afc966e5006056b73791c813931): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" podUID="58e0e494-e32c-4e1a-87b6-0d0dfafed095" Sep 29 09:47:51 crc kubenswrapper[4991]: E0929 09:47:51.667601 4991 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt_openshift-operators_38bf7687-8e95-4b5c-b346-381d71189652_0(d2ab2fe96bf4a379aada02a1fd8b4a17fe2eeb6425e2aeb203f16e9d6d36a57e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Sep 29 09:47:51 crc kubenswrapper[4991]: E0929 09:47:51.667667 4991 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt_openshift-operators_38bf7687-8e95-4b5c-b346-381d71189652_0(d2ab2fe96bf4a379aada02a1fd8b4a17fe2eeb6425e2aeb203f16e9d6d36a57e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" Sep 29 09:47:51 crc kubenswrapper[4991]: E0929 09:47:51.667687 4991 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt_openshift-operators_38bf7687-8e95-4b5c-b346-381d71189652_0(d2ab2fe96bf4a379aada02a1fd8b4a17fe2eeb6425e2aeb203f16e9d6d36a57e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" Sep 29 09:47:51 crc kubenswrapper[4991]: E0929 09:47:51.667734 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt_openshift-operators(38bf7687-8e95-4b5c-b346-381d71189652)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt_openshift-operators(38bf7687-8e95-4b5c-b346-381d71189652)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt_openshift-operators_38bf7687-8e95-4b5c-b346-381d71189652_0(d2ab2fe96bf4a379aada02a1fd8b4a17fe2eeb6425e2aeb203f16e9d6d36a57e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" podUID="38bf7687-8e95-4b5c-b346-381d71189652" Sep 29 09:47:51 crc kubenswrapper[4991]: I0929 09:47:51.690525 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz"] Sep 29 09:47:51 crc kubenswrapper[4991]: I0929 09:47:51.690644 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" Sep 29 09:47:51 crc kubenswrapper[4991]: I0929 09:47:51.691092 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" Sep 29 09:47:51 crc kubenswrapper[4991]: I0929 09:47:51.710898 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-54bc95c9fb-bpxmw"] Sep 29 09:47:51 crc kubenswrapper[4991]: I0929 09:47:51.711020 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:47:51 crc kubenswrapper[4991]: I0929 09:47:51.711420 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:47:51 crc kubenswrapper[4991]: E0929 09:47:51.712357 4991 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz_openshift-operators_f46f29d1-1ea0-498a-9724-466a64601373_0(20d90d17f5e1c4cb565e677fd6d1b81253c349f19732c25edaa9e00529a3fdfe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Sep 29 09:47:51 crc kubenswrapper[4991]: E0929 09:47:51.712439 4991 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz_openshift-operators_f46f29d1-1ea0-498a-9724-466a64601373_0(20d90d17f5e1c4cb565e677fd6d1b81253c349f19732c25edaa9e00529a3fdfe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" Sep 29 09:47:51 crc kubenswrapper[4991]: E0929 09:47:51.712548 4991 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz_openshift-operators_f46f29d1-1ea0-498a-9724-466a64601373_0(20d90d17f5e1c4cb565e677fd6d1b81253c349f19732c25edaa9e00529a3fdfe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" Sep 29 09:47:51 crc kubenswrapper[4991]: E0929 09:47:51.712658 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz_openshift-operators(f46f29d1-1ea0-498a-9724-466a64601373)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz_openshift-operators(f46f29d1-1ea0-498a-9724-466a64601373)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz_openshift-operators_f46f29d1-1ea0-498a-9724-466a64601373_0(20d90d17f5e1c4cb565e677fd6d1b81253c349f19732c25edaa9e00529a3fdfe): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" podUID="f46f29d1-1ea0-498a-9724-466a64601373" Sep 29 09:47:51 crc kubenswrapper[4991]: E0929 09:47:51.743813 4991 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-bpxmw_openshift-operators_801d6262-1556-4622-bc1e-16dc54de57ca_0(64bc245eee21908f3704d98a09eee89a76b7f8b013a43233c9e7257b243f2487): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Sep 29 09:47:51 crc kubenswrapper[4991]: E0929 09:47:51.743892 4991 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-bpxmw_openshift-operators_801d6262-1556-4622-bc1e-16dc54de57ca_0(64bc245eee21908f3704d98a09eee89a76b7f8b013a43233c9e7257b243f2487): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:47:51 crc kubenswrapper[4991]: E0929 09:47:51.743925 4991 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-bpxmw_openshift-operators_801d6262-1556-4622-bc1e-16dc54de57ca_0(64bc245eee21908f3704d98a09eee89a76b7f8b013a43233c9e7257b243f2487): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:47:51 crc kubenswrapper[4991]: E0929 09:47:51.743987 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-54bc95c9fb-bpxmw_openshift-operators(801d6262-1556-4622-bc1e-16dc54de57ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-54bc95c9fb-bpxmw_openshift-operators(801d6262-1556-4622-bc1e-16dc54de57ca)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-bpxmw_openshift-operators_801d6262-1556-4622-bc1e-16dc54de57ca_0(64bc245eee21908f3704d98a09eee89a76b7f8b013a43233c9e7257b243f2487): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" podUID="801d6262-1556-4622-bc1e-16dc54de57ca" Sep 29 09:47:52 crc kubenswrapper[4991]: I0929 09:47:52.480801 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:52 crc kubenswrapper[4991]: I0929 09:47:52.515763 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:47:57 crc kubenswrapper[4991]: I0929 09:47:57.927041 4991 scope.go:117] "RemoveContainer" containerID="216d8c67e1329d41aef834e8edeaec99ed60a5fa420c9046d51851a12bf5080f" Sep 29 09:47:57 crc kubenswrapper[4991]: E0929 09:47:57.927671 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-mm67g_openshift-multus(f36a89bf-ee7b-4bf7-bc61-9ea099661bd1)\"" pod="openshift-multus/multus-mm67g" podUID="f36a89bf-ee7b-4bf7-bc61-9ea099661bd1" Sep 29 09:48:01 crc kubenswrapper[4991]: I0929 09:48:01.925144 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" Sep 29 09:48:01 crc kubenswrapper[4991]: I0929 09:48:01.925893 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" Sep 29 09:48:01 crc kubenswrapper[4991]: E0929 09:48:01.954148 4991 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz_openshift-operators_f46f29d1-1ea0-498a-9724-466a64601373_0(c3b66110f26834058e0a24dc429a2795e9c39c93d509df0f3b4d0a9d08e26836): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Sep 29 09:48:01 crc kubenswrapper[4991]: E0929 09:48:01.954215 4991 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz_openshift-operators_f46f29d1-1ea0-498a-9724-466a64601373_0(c3b66110f26834058e0a24dc429a2795e9c39c93d509df0f3b4d0a9d08e26836): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" Sep 29 09:48:01 crc kubenswrapper[4991]: E0929 09:48:01.954233 4991 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz_openshift-operators_f46f29d1-1ea0-498a-9724-466a64601373_0(c3b66110f26834058e0a24dc429a2795e9c39c93d509df0f3b4d0a9d08e26836): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" Sep 29 09:48:01 crc kubenswrapper[4991]: E0929 09:48:01.954280 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz_openshift-operators(f46f29d1-1ea0-498a-9724-466a64601373)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz_openshift-operators(f46f29d1-1ea0-498a-9724-466a64601373)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz_openshift-operators_f46f29d1-1ea0-498a-9724-466a64601373_0(c3b66110f26834058e0a24dc429a2795e9c39c93d509df0f3b4d0a9d08e26836): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" podUID="f46f29d1-1ea0-498a-9724-466a64601373" Sep 29 09:48:04 crc kubenswrapper[4991]: I0929 09:48:04.932603 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" Sep 29 09:48:04 crc kubenswrapper[4991]: I0929 09:48:04.934026 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" Sep 29 09:48:04 crc kubenswrapper[4991]: E0929 09:48:04.969856 4991 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt_openshift-operators_38bf7687-8e95-4b5c-b346-381d71189652_0(8ea540f006d9f38dccdcf8599e1ea7b80d86f8c836ae4892827ad69ca6482e33): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Sep 29 09:48:04 crc kubenswrapper[4991]: E0929 09:48:04.970292 4991 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt_openshift-operators_38bf7687-8e95-4b5c-b346-381d71189652_0(8ea540f006d9f38dccdcf8599e1ea7b80d86f8c836ae4892827ad69ca6482e33): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" Sep 29 09:48:04 crc kubenswrapper[4991]: E0929 09:48:04.970338 4991 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt_openshift-operators_38bf7687-8e95-4b5c-b346-381d71189652_0(8ea540f006d9f38dccdcf8599e1ea7b80d86f8c836ae4892827ad69ca6482e33): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" Sep 29 09:48:04 crc kubenswrapper[4991]: E0929 09:48:04.970405 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt_openshift-operators(38bf7687-8e95-4b5c-b346-381d71189652)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt_openshift-operators(38bf7687-8e95-4b5c-b346-381d71189652)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt_openshift-operators_38bf7687-8e95-4b5c-b346-381d71189652_0(8ea540f006d9f38dccdcf8599e1ea7b80d86f8c836ae4892827ad69ca6482e33): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" podUID="38bf7687-8e95-4b5c-b346-381d71189652" Sep 29 09:48:05 crc kubenswrapper[4991]: I0929 09:48:05.925612 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" Sep 29 09:48:05 crc kubenswrapper[4991]: I0929 09:48:05.926069 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" Sep 29 09:48:05 crc kubenswrapper[4991]: E0929 09:48:05.955219 4991 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-bscm9_openshift-operators_ed45932c-01e6-4a92-be25-ab5f613eb208_0(c1b8343d9f505eac0225b6944b5b1639df5013d5972e45826e2cae303085a741): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Sep 29 09:48:05 crc kubenswrapper[4991]: E0929 09:48:05.955295 4991 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-bscm9_openshift-operators_ed45932c-01e6-4a92-be25-ab5f613eb208_0(c1b8343d9f505eac0225b6944b5b1639df5013d5972e45826e2cae303085a741): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" Sep 29 09:48:05 crc kubenswrapper[4991]: E0929 09:48:05.955316 4991 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-bscm9_openshift-operators_ed45932c-01e6-4a92-be25-ab5f613eb208_0(c1b8343d9f505eac0225b6944b5b1639df5013d5972e45826e2cae303085a741): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" Sep 29 09:48:05 crc kubenswrapper[4991]: E0929 09:48:05.955364 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-7c8cf85677-bscm9_openshift-operators(ed45932c-01e6-4a92-be25-ab5f613eb208)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-7c8cf85677-bscm9_openshift-operators(ed45932c-01e6-4a92-be25-ab5f613eb208)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-bscm9_openshift-operators_ed45932c-01e6-4a92-be25-ab5f613eb208_0(c1b8343d9f505eac0225b6944b5b1639df5013d5972e45826e2cae303085a741): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" podUID="ed45932c-01e6-4a92-be25-ab5f613eb208" Sep 29 09:48:06 crc kubenswrapper[4991]: I0929 09:48:06.925911 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:48:06 crc kubenswrapper[4991]: I0929 09:48:06.926533 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:48:06 crc kubenswrapper[4991]: I0929 09:48:06.927118 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:48:06 crc kubenswrapper[4991]: I0929 09:48:06.927117 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:48:06 crc kubenswrapper[4991]: E0929 09:48:06.970666 4991 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-cc5f78dfc-l6njq_openshift-operators_58e0e494-e32c-4e1a-87b6-0d0dfafed095_0(029bec9d96a285d1f82f9143058f194b2460cd289db355aa63dc6b15d6cb153e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Sep 29 09:48:06 crc kubenswrapper[4991]: E0929 09:48:06.970741 4991 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-cc5f78dfc-l6njq_openshift-operators_58e0e494-e32c-4e1a-87b6-0d0dfafed095_0(029bec9d96a285d1f82f9143058f194b2460cd289db355aa63dc6b15d6cb153e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:48:06 crc kubenswrapper[4991]: E0929 09:48:06.970768 4991 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-cc5f78dfc-l6njq_openshift-operators_58e0e494-e32c-4e1a-87b6-0d0dfafed095_0(029bec9d96a285d1f82f9143058f194b2460cd289db355aa63dc6b15d6cb153e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:48:06 crc kubenswrapper[4991]: E0929 09:48:06.970826 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-cc5f78dfc-l6njq_openshift-operators(58e0e494-e32c-4e1a-87b6-0d0dfafed095)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-cc5f78dfc-l6njq_openshift-operators(58e0e494-e32c-4e1a-87b6-0d0dfafed095)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-cc5f78dfc-l6njq_openshift-operators_58e0e494-e32c-4e1a-87b6-0d0dfafed095_0(029bec9d96a285d1f82f9143058f194b2460cd289db355aa63dc6b15d6cb153e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" podUID="58e0e494-e32c-4e1a-87b6-0d0dfafed095" Sep 29 09:48:06 crc kubenswrapper[4991]: E0929 09:48:06.972874 4991 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-bpxmw_openshift-operators_801d6262-1556-4622-bc1e-16dc54de57ca_0(bfa1c2e08a6b2010ce2425cfd9c1a9339b71870b7fd73154231919c5ed6ba13a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Sep 29 09:48:06 crc kubenswrapper[4991]: E0929 09:48:06.972959 4991 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-bpxmw_openshift-operators_801d6262-1556-4622-bc1e-16dc54de57ca_0(bfa1c2e08a6b2010ce2425cfd9c1a9339b71870b7fd73154231919c5ed6ba13a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:48:06 crc kubenswrapper[4991]: E0929 09:48:06.972991 4991 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-bpxmw_openshift-operators_801d6262-1556-4622-bc1e-16dc54de57ca_0(bfa1c2e08a6b2010ce2425cfd9c1a9339b71870b7fd73154231919c5ed6ba13a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:48:06 crc kubenswrapper[4991]: E0929 09:48:06.973037 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-54bc95c9fb-bpxmw_openshift-operators(801d6262-1556-4622-bc1e-16dc54de57ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-54bc95c9fb-bpxmw_openshift-operators(801d6262-1556-4622-bc1e-16dc54de57ca)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-bpxmw_openshift-operators_801d6262-1556-4622-bc1e-16dc54de57ca_0(bfa1c2e08a6b2010ce2425cfd9c1a9339b71870b7fd73154231919c5ed6ba13a): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" podUID="801d6262-1556-4622-bc1e-16dc54de57ca" Sep 29 09:48:07 crc kubenswrapper[4991]: I0929 09:48:07.946759 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 09:48:07 crc kubenswrapper[4991]: I0929 09:48:07.947072 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 09:48:07 crc kubenswrapper[4991]: I0929 09:48:07.947127 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 09:48:07 crc kubenswrapper[4991]: I0929 09:48:07.947786 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e90e609f04cdc8c490f65e3c9ca90d77992faea1b259c2cf78d14fefc1dd179"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 09:48:07 crc kubenswrapper[4991]: I0929 09:48:07.947856 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://5e90e609f04cdc8c490f65e3c9ca90d77992faea1b259c2cf78d14fefc1dd179" gracePeriod=600 Sep 29 09:48:08 crc kubenswrapper[4991]: I0929 09:48:08.575995 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="5e90e609f04cdc8c490f65e3c9ca90d77992faea1b259c2cf78d14fefc1dd179" exitCode=0 Sep 29 09:48:08 crc kubenswrapper[4991]: I0929 09:48:08.576041 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"5e90e609f04cdc8c490f65e3c9ca90d77992faea1b259c2cf78d14fefc1dd179"} Sep 29 09:48:08 crc kubenswrapper[4991]: I0929 09:48:08.576275 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"a15f13cc985510f6f7295356c1768851fe16cd07a673836c646f2a0f6e9e542c"} Sep 29 09:48:08 crc kubenswrapper[4991]: I0929 09:48:08.576299 4991 scope.go:117] "RemoveContainer" containerID="056391fc94cbbece8ac64c5877a12f6599370622ae65f3c7e775dc34b320aeb9" Sep 29 09:48:08 crc kubenswrapper[4991]: I0929 09:48:08.926115 4991 scope.go:117] "RemoveContainer" containerID="216d8c67e1329d41aef834e8edeaec99ed60a5fa420c9046d51851a12bf5080f" Sep 29 09:48:09 crc kubenswrapper[4991]: I0929 09:48:09.585706 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mm67g_f36a89bf-ee7b-4bf7-bc61-9ea099661bd1/kube-multus/2.log" Sep 29 09:48:09 crc kubenswrapper[4991]: I0929 09:48:09.586101 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-mm67g" event={"ID":"f36a89bf-ee7b-4bf7-bc61-9ea099661bd1","Type":"ContainerStarted","Data":"8a010575ee83d1008518119ee53d346b786a91f41c5befb24ff4aadf06bb9972"} Sep 29 09:48:14 crc kubenswrapper[4991]: I0929 09:48:14.267987 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mcw5r" Sep 29 09:48:16 crc kubenswrapper[4991]: I0929 09:48:16.925283 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" Sep 29 09:48:16 crc kubenswrapper[4991]: I0929 09:48:16.925990 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" Sep 29 09:48:17 crc kubenswrapper[4991]: I0929 09:48:17.130118 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz"] Sep 29 09:48:17 crc kubenswrapper[4991]: I0929 09:48:17.655036 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" event={"ID":"f46f29d1-1ea0-498a-9724-466a64601373","Type":"ContainerStarted","Data":"00eb570ef475e771b3825b24824e40402ddd34af0f1860c99c0344014f717258"} Sep 29 09:48:17 crc kubenswrapper[4991]: I0929 09:48:17.925673 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" Sep 29 09:48:17 crc kubenswrapper[4991]: I0929 09:48:17.926261 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" Sep 29 09:48:18 crc kubenswrapper[4991]: I0929 09:48:18.372066 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9"] Sep 29 09:48:18 crc kubenswrapper[4991]: W0929 09:48:18.382686 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded45932c_01e6_4a92_be25_ab5f613eb208.slice/crio-bcbaffe0248537e0dc2a9778bc8e0279315d06038fddc92e9ed9094b7a98ca64 WatchSource:0}: Error finding container bcbaffe0248537e0dc2a9778bc8e0279315d06038fddc92e9ed9094b7a98ca64: Status 404 returned error can't find the container with id bcbaffe0248537e0dc2a9778bc8e0279315d06038fddc92e9ed9094b7a98ca64 Sep 29 09:48:18 crc kubenswrapper[4991]: I0929 09:48:18.661727 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" event={"ID":"ed45932c-01e6-4a92-be25-ab5f613eb208","Type":"ContainerStarted","Data":"bcbaffe0248537e0dc2a9778bc8e0279315d06038fddc92e9ed9094b7a98ca64"} Sep 29 09:48:18 crc kubenswrapper[4991]: I0929 09:48:18.925595 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" Sep 29 09:48:18 crc kubenswrapper[4991]: I0929 09:48:18.926393 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" Sep 29 09:48:19 crc kubenswrapper[4991]: I0929 09:48:19.117204 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt"] Sep 29 09:48:19 crc kubenswrapper[4991]: I0929 09:48:19.669251 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" event={"ID":"38bf7687-8e95-4b5c-b346-381d71189652","Type":"ContainerStarted","Data":"198e90a1b52401cee57a8be6ce24f52b58e413c1b108443e939a86b007e4485a"} Sep 29 09:48:20 crc kubenswrapper[4991]: I0929 09:48:20.925542 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:48:20 crc kubenswrapper[4991]: I0929 09:48:20.926675 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:48:21 crc kubenswrapper[4991]: I0929 09:48:21.925438 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:48:21 crc kubenswrapper[4991]: I0929 09:48:21.925991 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:48:23 crc kubenswrapper[4991]: I0929 09:48:23.340506 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-cc5f78dfc-l6njq"] Sep 29 09:48:23 crc kubenswrapper[4991]: I0929 09:48:23.402220 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-54bc95c9fb-bpxmw"] Sep 29 09:48:23 crc kubenswrapper[4991]: W0929 09:48:23.405876 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod801d6262_1556_4622_bc1e_16dc54de57ca.slice/crio-870d25b8ff337157581268d235cb5ca61929cd28a58b14b8111385539bf8e172 WatchSource:0}: Error finding container 870d25b8ff337157581268d235cb5ca61929cd28a58b14b8111385539bf8e172: Status 404 returned error can't find the container with id 870d25b8ff337157581268d235cb5ca61929cd28a58b14b8111385539bf8e172 Sep 29 09:48:23 crc kubenswrapper[4991]: I0929 09:48:23.714838 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" event={"ID":"58e0e494-e32c-4e1a-87b6-0d0dfafed095","Type":"ContainerStarted","Data":"77a5e6061fc0b75d801c1d21de0558004183af746ec057a74110945b8fd6e356"} Sep 29 09:48:23 crc kubenswrapper[4991]: I0929 09:48:23.716303 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" event={"ID":"801d6262-1556-4622-bc1e-16dc54de57ca","Type":"ContainerStarted","Data":"870d25b8ff337157581268d235cb5ca61929cd28a58b14b8111385539bf8e172"} Sep 29 09:48:23 crc kubenswrapper[4991]: I0929 09:48:23.717907 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" event={"ID":"38bf7687-8e95-4b5c-b346-381d71189652","Type":"ContainerStarted","Data":"53beca15ec514a19ea0204461e4868ac1907d6f70f760d15f7068d00cef34153"} Sep 29 09:48:23 crc kubenswrapper[4991]: I0929 09:48:23.719241 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" event={"ID":"f46f29d1-1ea0-498a-9724-466a64601373","Type":"ContainerStarted","Data":"2e2bf6708420639eb125ec82ed2ff5ad54785aec0a072fabacea76d8e1ce99d4"} Sep 29 09:48:23 crc kubenswrapper[4991]: I0929 09:48:23.735766 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt" podStartSLOduration=32.853301026 podStartE2EDuration="36.735749458s" podCreationTimestamp="2025-09-29 09:47:47 +0000 UTC" firstStartedPulling="2025-09-29 09:48:19.129605201 +0000 UTC m=+634.985533229" lastFinishedPulling="2025-09-29 09:48:23.012053633 +0000 UTC m=+638.867981661" observedRunningTime="2025-09-29 09:48:23.732505572 +0000 UTC m=+639.588433600" watchObservedRunningTime="2025-09-29 09:48:23.735749458 +0000 UTC m=+639.591677486" Sep 29 09:48:23 crc kubenswrapper[4991]: I0929 09:48:23.764214 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz" podStartSLOduration=30.88498885 podStartE2EDuration="36.764192704s" podCreationTimestamp="2025-09-29 09:47:47 +0000 UTC" firstStartedPulling="2025-09-29 09:48:17.1426819 +0000 UTC m=+632.998609928" lastFinishedPulling="2025-09-29 09:48:23.021885734 +0000 UTC m=+638.877813782" observedRunningTime="2025-09-29 09:48:23.756355376 +0000 UTC m=+639.612283394" watchObservedRunningTime="2025-09-29 09:48:23.764192704 +0000 UTC m=+639.620120732" Sep 29 09:48:25 crc kubenswrapper[4991]: I0929 09:48:25.740245 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" event={"ID":"ed45932c-01e6-4a92-be25-ab5f613eb208","Type":"ContainerStarted","Data":"613f432e1523f896f0d2a08903f19eb5c4ff9ddcb01f32c98551b8a62bb1ac68"} Sep 29 09:48:25 crc kubenswrapper[4991]: I0929 09:48:25.755796 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-bscm9" podStartSLOduration=32.126800246 podStartE2EDuration="38.755775438s" podCreationTimestamp="2025-09-29 09:47:47 +0000 UTC" firstStartedPulling="2025-09-29 09:48:18.38575478 +0000 UTC m=+634.241682808" lastFinishedPulling="2025-09-29 09:48:25.014729972 +0000 UTC m=+640.870658000" observedRunningTime="2025-09-29 09:48:25.755557843 +0000 UTC m=+641.611485871" watchObservedRunningTime="2025-09-29 09:48:25.755775438 +0000 UTC m=+641.611703466" Sep 29 09:48:26 crc kubenswrapper[4991]: I0929 09:48:26.759216 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" event={"ID":"801d6262-1556-4622-bc1e-16dc54de57ca","Type":"ContainerStarted","Data":"a9ebcb155a0eeb203623cf49b0eb5850355adb0825d113535cc6023fedd865be"} Sep 29 09:48:26 crc kubenswrapper[4991]: I0929 09:48:26.760117 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:48:29 crc kubenswrapper[4991]: I0929 09:48:29.789778 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" event={"ID":"58e0e494-e32c-4e1a-87b6-0d0dfafed095","Type":"ContainerStarted","Data":"07363df79cd7d84059d5f80a6eaf64677f695ea144cb601513cdd45c59b32453"} Sep 29 09:48:29 crc kubenswrapper[4991]: I0929 09:48:29.790135 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:48:29 crc kubenswrapper[4991]: I0929 09:48:29.815218 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" podStartSLOduration=39.012705997 podStartE2EDuration="41.815199994s" podCreationTimestamp="2025-09-29 09:47:48 +0000 UTC" firstStartedPulling="2025-09-29 09:48:23.408066739 +0000 UTC m=+639.263994767" lastFinishedPulling="2025-09-29 09:48:26.210560736 +0000 UTC m=+642.066488764" observedRunningTime="2025-09-29 09:48:26.781342357 +0000 UTC m=+642.637270405" watchObservedRunningTime="2025-09-29 09:48:29.815199994 +0000 UTC m=+645.671128032" Sep 29 09:48:29 crc kubenswrapper[4991]: I0929 09:48:29.816037 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" podStartSLOduration=37.207655041 podStartE2EDuration="42.816031166s" podCreationTimestamp="2025-09-29 09:47:47 +0000 UTC" firstStartedPulling="2025-09-29 09:48:23.355965714 +0000 UTC m=+639.211893752" lastFinishedPulling="2025-09-29 09:48:28.964341809 +0000 UTC m=+644.820269877" observedRunningTime="2025-09-29 09:48:29.815294797 +0000 UTC m=+645.671222835" watchObservedRunningTime="2025-09-29 09:48:29.816031166 +0000 UTC m=+645.671959204" Sep 29 09:48:29 crc kubenswrapper[4991]: I0929 09:48:29.846963 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-cc5f78dfc-l6njq" Sep 29 09:48:38 crc kubenswrapper[4991]: I0929 09:48:38.528489 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-54bc95c9fb-bpxmw" Sep 29 09:48:39 crc kubenswrapper[4991]: I0929 09:48:39.730581 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-jkfxh"] Sep 29 09:48:39 crc kubenswrapper[4991]: I0929 09:48:39.731757 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-jkfxh" Sep 29 09:48:39 crc kubenswrapper[4991]: I0929 09:48:39.740732 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Sep 29 09:48:39 crc kubenswrapper[4991]: I0929 09:48:39.741011 4991 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-f472s" Sep 29 09:48:39 crc kubenswrapper[4991]: I0929 09:48:39.741087 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Sep 29 09:48:39 crc kubenswrapper[4991]: I0929 09:48:39.750169 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-jkfxh"] Sep 29 09:48:39 crc kubenswrapper[4991]: I0929 09:48:39.766976 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-th64x"] Sep 29 09:48:39 crc kubenswrapper[4991]: I0929 09:48:39.767736 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-th64x" Sep 29 09:48:39 crc kubenswrapper[4991]: I0929 09:48:39.770363 4991 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-xprg8" Sep 29 09:48:39 crc kubenswrapper[4991]: I0929 09:48:39.772145 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-stx6r"] Sep 29 09:48:39 crc kubenswrapper[4991]: I0929 09:48:39.772884 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-stx6r" Sep 29 09:48:39 crc kubenswrapper[4991]: I0929 09:48:39.775706 4991 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-4sh6g" Sep 29 09:48:39 crc kubenswrapper[4991]: I0929 09:48:39.786577 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-th64x"] Sep 29 09:48:39 crc kubenswrapper[4991]: I0929 09:48:39.800095 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-stx6r"] Sep 29 09:48:39 crc kubenswrapper[4991]: I0929 09:48:39.809038 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x59z8\" (UniqueName: \"kubernetes.io/projected/851cddb0-2bb3-4344-aaf9-e94e095a72a5-kube-api-access-x59z8\") pod \"cert-manager-cainjector-7f985d654d-jkfxh\" (UID: \"851cddb0-2bb3-4344-aaf9-e94e095a72a5\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-jkfxh" Sep 29 09:48:39 crc kubenswrapper[4991]: I0929 09:48:39.910845 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdlsz\" (UniqueName: \"kubernetes.io/projected/b075dfdf-7089-442c-99eb-c93b2d5f3d6c-kube-api-access-qdlsz\") pod \"cert-manager-webhook-5655c58dd6-stx6r\" (UID: \"b075dfdf-7089-442c-99eb-c93b2d5f3d6c\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-stx6r" Sep 29 09:48:39 crc kubenswrapper[4991]: I0929 09:48:39.910938 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnmp6\" (UniqueName: \"kubernetes.io/projected/cf2e698b-49b8-438e-aa92-318a0605f78f-kube-api-access-hnmp6\") pod \"cert-manager-5b446d88c5-th64x\" (UID: \"cf2e698b-49b8-438e-aa92-318a0605f78f\") " pod="cert-manager/cert-manager-5b446d88c5-th64x" Sep 29 09:48:39 crc kubenswrapper[4991]: I0929 09:48:39.910995 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x59z8\" (UniqueName: \"kubernetes.io/projected/851cddb0-2bb3-4344-aaf9-e94e095a72a5-kube-api-access-x59z8\") pod \"cert-manager-cainjector-7f985d654d-jkfxh\" (UID: \"851cddb0-2bb3-4344-aaf9-e94e095a72a5\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-jkfxh" Sep 29 09:48:39 crc kubenswrapper[4991]: I0929 09:48:39.929746 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x59z8\" (UniqueName: \"kubernetes.io/projected/851cddb0-2bb3-4344-aaf9-e94e095a72a5-kube-api-access-x59z8\") pod \"cert-manager-cainjector-7f985d654d-jkfxh\" (UID: \"851cddb0-2bb3-4344-aaf9-e94e095a72a5\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-jkfxh" Sep 29 09:48:40 crc kubenswrapper[4991]: I0929 09:48:40.012867 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdlsz\" (UniqueName: 
\"kubernetes.io/projected/b075dfdf-7089-442c-99eb-c93b2d5f3d6c-kube-api-access-qdlsz\") pod \"cert-manager-webhook-5655c58dd6-stx6r\" (UID: \"b075dfdf-7089-442c-99eb-c93b2d5f3d6c\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-stx6r" Sep 29 09:48:40 crc kubenswrapper[4991]: I0929 09:48:40.012969 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnmp6\" (UniqueName: \"kubernetes.io/projected/cf2e698b-49b8-438e-aa92-318a0605f78f-kube-api-access-hnmp6\") pod \"cert-manager-5b446d88c5-th64x\" (UID: \"cf2e698b-49b8-438e-aa92-318a0605f78f\") " pod="cert-manager/cert-manager-5b446d88c5-th64x" Sep 29 09:48:40 crc kubenswrapper[4991]: I0929 09:48:40.030900 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdlsz\" (UniqueName: \"kubernetes.io/projected/b075dfdf-7089-442c-99eb-c93b2d5f3d6c-kube-api-access-qdlsz\") pod \"cert-manager-webhook-5655c58dd6-stx6r\" (UID: \"b075dfdf-7089-442c-99eb-c93b2d5f3d6c\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-stx6r" Sep 29 09:48:40 crc kubenswrapper[4991]: I0929 09:48:40.041115 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnmp6\" (UniqueName: \"kubernetes.io/projected/cf2e698b-49b8-438e-aa92-318a0605f78f-kube-api-access-hnmp6\") pod \"cert-manager-5b446d88c5-th64x\" (UID: \"cf2e698b-49b8-438e-aa92-318a0605f78f\") " pod="cert-manager/cert-manager-5b446d88c5-th64x" Sep 29 09:48:40 crc kubenswrapper[4991]: I0929 09:48:40.049961 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-jkfxh" Sep 29 09:48:40 crc kubenswrapper[4991]: I0929 09:48:40.094610 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-th64x" Sep 29 09:48:40 crc kubenswrapper[4991]: I0929 09:48:40.102803 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-stx6r" Sep 29 09:48:40 crc kubenswrapper[4991]: I0929 09:48:40.655933 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-jkfxh"] Sep 29 09:48:40 crc kubenswrapper[4991]: W0929 09:48:40.663877 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod851cddb0_2bb3_4344_aaf9_e94e095a72a5.slice/crio-99ee8ef8c70dbe57627b95fa35bc41743240a1ebc4757ff99547ad5b6bdbb049 WatchSource:0}: Error finding container 99ee8ef8c70dbe57627b95fa35bc41743240a1ebc4757ff99547ad5b6bdbb049: Status 404 returned error can't find the container with id 99ee8ef8c70dbe57627b95fa35bc41743240a1ebc4757ff99547ad5b6bdbb049 Sep 29 09:48:40 crc kubenswrapper[4991]: I0929 09:48:40.767851 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-th64x"] Sep 29 09:48:40 crc kubenswrapper[4991]: I0929 09:48:40.773410 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-stx6r"] Sep 29 09:48:40 crc kubenswrapper[4991]: W0929 09:48:40.774374 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb075dfdf_7089_442c_99eb_c93b2d5f3d6c.slice/crio-352ec672b0b4b90a45f469c2203a87933f46df2e563dbb6585ac03ca9518ed3c WatchSource:0}: Error finding container 352ec672b0b4b90a45f469c2203a87933f46df2e563dbb6585ac03ca9518ed3c: Status 404 returned error can't find the container with id 352ec672b0b4b90a45f469c2203a87933f46df2e563dbb6585ac03ca9518ed3c Sep 29 09:48:40 crc kubenswrapper[4991]: I0929 09:48:40.861178 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-stx6r" event={"ID":"b075dfdf-7089-442c-99eb-c93b2d5f3d6c","Type":"ContainerStarted","Data":"352ec672b0b4b90a45f469c2203a87933f46df2e563dbb6585ac03ca9518ed3c"} Sep 29 09:48:40 crc kubenswrapper[4991]: I0929 09:48:40.862601 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-th64x" event={"ID":"cf2e698b-49b8-438e-aa92-318a0605f78f","Type":"ContainerStarted","Data":"0a4bc7558854c80464fdc9755fd651a9770a64785920ee5d49be1d758736d3b5"} Sep 29 09:48:40 crc kubenswrapper[4991]: I0929 09:48:40.863727 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-jkfxh" event={"ID":"851cddb0-2bb3-4344-aaf9-e94e095a72a5","Type":"ContainerStarted","Data":"99ee8ef8c70dbe57627b95fa35bc41743240a1ebc4757ff99547ad5b6bdbb049"} Sep 29 09:48:44 crc kubenswrapper[4991]: I0929 09:48:44.887463 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-jkfxh" event={"ID":"851cddb0-2bb3-4344-aaf9-e94e095a72a5","Type":"ContainerStarted","Data":"6ac600e22fc1788131f370ad176ebdcf4df1194069c879f14309a6015cc31b5d"} Sep 29 09:48:44 crc kubenswrapper[4991]: I0929 09:48:44.890973 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-stx6r" event={"ID":"b075dfdf-7089-442c-99eb-c93b2d5f3d6c","Type":"ContainerStarted","Data":"f78bb8e4612de5b61ec2d72876d87853de82e390da052b749089517f411a0f8a"} Sep 29 09:48:44 crc kubenswrapper[4991]: I0929 09:48:44.891388 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-stx6r" Sep 29 09:48:44 crc kubenswrapper[4991]: 
I0929 09:48:44.904502 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-jkfxh" podStartSLOduration=2.338285931 podStartE2EDuration="5.904482525s" podCreationTimestamp="2025-09-29 09:48:39 +0000 UTC" firstStartedPulling="2025-09-29 09:48:40.66648452 +0000 UTC m=+656.522412548" lastFinishedPulling="2025-09-29 09:48:44.232681114 +0000 UTC m=+660.088609142" observedRunningTime="2025-09-29 09:48:44.904282039 +0000 UTC m=+660.760210067" watchObservedRunningTime="2025-09-29 09:48:44.904482525 +0000 UTC m=+660.760410553" Sep 29 09:48:44 crc kubenswrapper[4991]: I0929 09:48:44.922279 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-stx6r" podStartSLOduration=2.377276498 podStartE2EDuration="5.922258548s" podCreationTimestamp="2025-09-29 09:48:39 +0000 UTC" firstStartedPulling="2025-09-29 09:48:40.779661292 +0000 UTC m=+656.635589320" lastFinishedPulling="2025-09-29 09:48:44.324643332 +0000 UTC m=+660.180571370" observedRunningTime="2025-09-29 09:48:44.921417105 +0000 UTC m=+660.777345133" watchObservedRunningTime="2025-09-29 09:48:44.922258548 +0000 UTC m=+660.778186576" Sep 29 09:48:45 crc kubenswrapper[4991]: I0929 09:48:45.898677 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-th64x" event={"ID":"cf2e698b-49b8-438e-aa92-318a0605f78f","Type":"ContainerStarted","Data":"728005df3d5cc2472378ac974b339e01064cd7ea341d00c5b39d67f2f70a6031"} Sep 29 09:48:45 crc kubenswrapper[4991]: I0929 09:48:45.918070 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-th64x" podStartSLOduration=2.242728026 podStartE2EDuration="6.91804169s" podCreationTimestamp="2025-09-29 09:48:39 +0000 UTC" firstStartedPulling="2025-09-29 09:48:40.779307273 +0000 UTC m=+656.635235301" lastFinishedPulling="2025-09-29 09:48:45.454620927 +0000 UTC m=+661.310548965" observedRunningTime="2025-09-29 09:48:45.913353415 +0000 UTC m=+661.769281463" watchObservedRunningTime="2025-09-29 09:48:45.91804169 +0000 UTC m=+661.773969718" Sep 29 09:48:50 crc kubenswrapper[4991]: I0929 09:48:50.105702 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-stx6r" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.372790 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm"] Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.375103 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.377517 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.387481 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm"] Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.445129 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67372b02-362b-4463-a437-359886d014af-bundle\") pod \"e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm\" (UID: \"67372b02-362b-4463-a437-359886d014af\") " pod="openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.445224 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfvn2\" (UniqueName: \"kubernetes.io/projected/67372b02-362b-4463-a437-359886d014af-kube-api-access-vfvn2\") pod \"e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm\" (UID: \"67372b02-362b-4463-a437-359886d014af\") " pod="openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.445340 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67372b02-362b-4463-a437-359886d014af-util\") pod \"e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm\" (UID: \"67372b02-362b-4463-a437-359886d014af\") " pod="openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.546384 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67372b02-362b-4463-a437-359886d014af-bundle\") pod \"e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm\" (UID: \"67372b02-362b-4463-a437-359886d014af\") " pod="openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.546467 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfvn2\" (UniqueName: \"kubernetes.io/projected/67372b02-362b-4463-a437-359886d014af-kube-api-access-vfvn2\") pod \"e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm\" (UID: \"67372b02-362b-4463-a437-359886d014af\") " pod="openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.546505 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67372b02-362b-4463-a437-359886d014af-util\") pod \"e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm\" (UID: \"67372b02-362b-4463-a437-359886d014af\") " pod="openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.546924 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/67372b02-362b-4463-a437-359886d014af-bundle\") pod \"e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm\" (UID: \"67372b02-362b-4463-a437-359886d014af\") " pod="openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.546993 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67372b02-362b-4463-a437-359886d014af-util\") pod \"e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm\" (UID: \"67372b02-362b-4463-a437-359886d014af\") " pod="openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.579632 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfvn2\" (UniqueName: \"kubernetes.io/projected/67372b02-362b-4463-a437-359886d014af-kube-api-access-vfvn2\") pod \"e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm\" (UID: \"67372b02-362b-4463-a437-359886d014af\") " pod="openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.587866 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x"] Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.605787 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.627342 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x"] Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.650178 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a1fba17-86ce-4573-8199-011457e2c391-bundle\") pod \"c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x\" (UID: \"7a1fba17-86ce-4573-8199-011457e2c391\") " pod="openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.650258 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kgwx\" (UniqueName: \"kubernetes.io/projected/7a1fba17-86ce-4573-8199-011457e2c391-kube-api-access-7kgwx\") pod \"c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x\" (UID: \"7a1fba17-86ce-4573-8199-011457e2c391\") " pod="openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.650280 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a1fba17-86ce-4573-8199-011457e2c391-util\") pod \"c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x\" (UID: \"7a1fba17-86ce-4573-8199-011457e2c391\") " pod="openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.693963 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.751422 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a1fba17-86ce-4573-8199-011457e2c391-bundle\") pod \"c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x\" (UID: \"7a1fba17-86ce-4573-8199-011457e2c391\") " pod="openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.751731 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kgwx\" (UniqueName: \"kubernetes.io/projected/7a1fba17-86ce-4573-8199-011457e2c391-kube-api-access-7kgwx\") pod \"c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x\" (UID: \"7a1fba17-86ce-4573-8199-011457e2c391\") " pod="openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.751861 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a1fba17-86ce-4573-8199-011457e2c391-util\") pod \"c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x\" (UID: \"7a1fba17-86ce-4573-8199-011457e2c391\") " pod="openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.752739 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a1fba17-86ce-4573-8199-011457e2c391-bundle\") pod \"c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x\" (UID: \"7a1fba17-86ce-4573-8199-011457e2c391\") " pod="openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.752752 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a1fba17-86ce-4573-8199-011457e2c391-util\") pod \"c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x\" (UID: \"7a1fba17-86ce-4573-8199-011457e2c391\") " pod="openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.791704 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kgwx\" (UniqueName: \"kubernetes.io/projected/7a1fba17-86ce-4573-8199-011457e2c391-kube-api-access-7kgwx\") pod \"c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x\" (UID: \"7a1fba17-86ce-4573-8199-011457e2c391\") " pod="openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x" Sep 29 09:49:14 crc kubenswrapper[4991]: I0929 09:49:14.930502 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x" Sep 29 09:49:15 crc kubenswrapper[4991]: I0929 09:49:15.032675 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm"] Sep 29 09:49:18 crc kubenswrapper[4991]: I0929 09:49:15.130157 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm" event={"ID":"67372b02-362b-4463-a437-359886d014af","Type":"ContainerStarted","Data":"a88813f639efe059bae921452ccd1baf6843eb141339d0ec356b5ca97a622253"} Sep 29 09:49:18 crc kubenswrapper[4991]: I0929 09:49:15.374031 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x"] Sep 29 09:49:18 crc kubenswrapper[4991]: W0929 09:49:15.379491 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a1fba17_86ce_4573_8199_011457e2c391.slice/crio-92bb1c401b7c0a6538b503361c147f86d467aed0666dc4f1d12808dc52126f87 WatchSource:0}: Error finding container 92bb1c401b7c0a6538b503361c147f86d467aed0666dc4f1d12808dc52126f87: Status 404 returned error can't find the container with id 92bb1c401b7c0a6538b503361c147f86d467aed0666dc4f1d12808dc52126f87 Sep 29 09:49:18 crc kubenswrapper[4991]: I0929 09:49:16.138372 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x" event={"ID":"7a1fba17-86ce-4573-8199-011457e2c391","Type":"ContainerStarted","Data":"92bb1c401b7c0a6538b503361c147f86d467aed0666dc4f1d12808dc52126f87"} Sep 29 09:49:18 crc kubenswrapper[4991]: I0929 09:49:17.145387 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm" event={"ID":"67372b02-362b-4463-a437-359886d014af","Type":"ContainerStarted","Data":"8cd33f4f8b8cdee010964a41e2540cf4174722cbfedffa6003cf090414d5b3a5"} Sep 29 09:49:18 crc kubenswrapper[4991]: I0929 09:49:18.153598 4991 generic.go:334] "Generic (PLEG): container finished" podID="7a1fba17-86ce-4573-8199-011457e2c391" containerID="bb20aa76238ca9ea6af7faf6db22494c93dd8ba3dd281eb50314ea889157617b" exitCode=0 Sep 29 09:49:18 crc kubenswrapper[4991]: I0929 09:49:18.153710 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x" event={"ID":"7a1fba17-86ce-4573-8199-011457e2c391","Type":"ContainerDied","Data":"bb20aa76238ca9ea6af7faf6db22494c93dd8ba3dd281eb50314ea889157617b"} Sep 29 09:49:18 crc kubenswrapper[4991]: I0929 09:49:18.156962 4991 generic.go:334] "Generic (PLEG): container finished" podID="67372b02-362b-4463-a437-359886d014af" containerID="8cd33f4f8b8cdee010964a41e2540cf4174722cbfedffa6003cf090414d5b3a5" exitCode=0 Sep 29 09:49:18 crc kubenswrapper[4991]: I0929 09:49:18.156987 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm" event={"ID":"67372b02-362b-4463-a437-359886d014af","Type":"ContainerDied","Data":"8cd33f4f8b8cdee010964a41e2540cf4174722cbfedffa6003cf090414d5b3a5"} Sep 29 09:49:22 crc kubenswrapper[4991]: I0929 09:49:22.187284 4991 generic.go:334] "Generic (PLEG): container finished" 
podID="7a1fba17-86ce-4573-8199-011457e2c391" containerID="de887cf6793124e0cd01169361916961a94f08399fe6e7ada573af973901c5cd" exitCode=0 Sep 29 09:49:22 crc kubenswrapper[4991]: I0929 09:49:22.187364 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x" event={"ID":"7a1fba17-86ce-4573-8199-011457e2c391","Type":"ContainerDied","Data":"de887cf6793124e0cd01169361916961a94f08399fe6e7ada573af973901c5cd"} Sep 29 09:49:22 crc kubenswrapper[4991]: I0929 09:49:22.191068 4991 generic.go:334] "Generic (PLEG): container finished" podID="67372b02-362b-4463-a437-359886d014af" containerID="852ea5a0f3468f7650c597d74ffca011c235cf0c0e8366aedb9658b110912826" exitCode=0 Sep 29 09:49:22 crc kubenswrapper[4991]: I0929 09:49:22.191102 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm" event={"ID":"67372b02-362b-4463-a437-359886d014af","Type":"ContainerDied","Data":"852ea5a0f3468f7650c597d74ffca011c235cf0c0e8366aedb9658b110912826"} Sep 29 09:49:25 crc kubenswrapper[4991]: I0929 09:49:25.216875 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x" event={"ID":"7a1fba17-86ce-4573-8199-011457e2c391","Type":"ContainerStarted","Data":"add277f618fa15d6bea50c44ee1e443fc93e567b4c39747043dcce5b85ded5e0"} Sep 29 09:49:25 crc kubenswrapper[4991]: I0929 09:49:25.218899 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm" event={"ID":"67372b02-362b-4463-a437-359886d014af","Type":"ContainerStarted","Data":"6aba703983dbc9fe9127f700be884ae8090ec380ff4010926c7cb202d12b51cf"} Sep 29 09:49:25 crc kubenswrapper[4991]: I0929 09:49:25.240584 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x" podStartSLOduration=8.064164092 podStartE2EDuration="11.240562101s" podCreationTimestamp="2025-09-29 09:49:14 +0000 UTC" firstStartedPulling="2025-09-29 09:49:18.156176962 +0000 UTC m=+694.012104990" lastFinishedPulling="2025-09-29 09:49:21.332574951 +0000 UTC m=+697.188502999" observedRunningTime="2025-09-29 09:49:25.237674265 +0000 UTC m=+701.093602303" watchObservedRunningTime="2025-09-29 09:49:25.240562101 +0000 UTC m=+701.096490119" Sep 29 09:49:25 crc kubenswrapper[4991]: I0929 09:49:25.260802 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm" podStartSLOduration=7.993533103 podStartE2EDuration="11.2607881s" podCreationTimestamp="2025-09-29 09:49:14 +0000 UTC" firstStartedPulling="2025-09-29 09:49:18.158378791 +0000 UTC m=+694.014306819" lastFinishedPulling="2025-09-29 09:49:21.425633788 +0000 UTC m=+697.281561816" observedRunningTime="2025-09-29 09:49:25.258802727 +0000 UTC m=+701.114730785" watchObservedRunningTime="2025-09-29 09:49:25.2607881 +0000 UTC m=+701.116716128" Sep 29 09:49:26 crc kubenswrapper[4991]: I0929 09:49:26.234782 4991 generic.go:334] "Generic (PLEG): container finished" podID="7a1fba17-86ce-4573-8199-011457e2c391" containerID="add277f618fa15d6bea50c44ee1e443fc93e567b4c39747043dcce5b85ded5e0" exitCode=0 Sep 29 09:49:26 crc kubenswrapper[4991]: I0929 09:49:26.234891 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x" event={"ID":"7a1fba17-86ce-4573-8199-011457e2c391","Type":"ContainerDied","Data":"add277f618fa15d6bea50c44ee1e443fc93e567b4c39747043dcce5b85ded5e0"} Sep 29 09:49:26 crc kubenswrapper[4991]: I0929 09:49:26.239618 4991 generic.go:334] "Generic (PLEG): container finished" podID="67372b02-362b-4463-a437-359886d014af" containerID="6aba703983dbc9fe9127f700be884ae8090ec380ff4010926c7cb202d12b51cf" exitCode=0 Sep 29 09:49:26 crc kubenswrapper[4991]: I0929 09:49:26.239692 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm" event={"ID":"67372b02-362b-4463-a437-359886d014af","Type":"ContainerDied","Data":"6aba703983dbc9fe9127f700be884ae8090ec380ff4010926c7cb202d12b51cf"} Sep 29 09:49:27 crc kubenswrapper[4991]: I0929 09:49:27.554271 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x" Sep 29 09:49:27 crc kubenswrapper[4991]: I0929 09:49:27.559929 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm" Sep 29 09:49:27 crc kubenswrapper[4991]: I0929 09:49:27.566545 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a1fba17-86ce-4573-8199-011457e2c391-util\") pod \"7a1fba17-86ce-4573-8199-011457e2c391\" (UID: \"7a1fba17-86ce-4573-8199-011457e2c391\") " Sep 29 09:49:27 crc kubenswrapper[4991]: I0929 09:49:27.566703 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a1fba17-86ce-4573-8199-011457e2c391-bundle\") pod \"7a1fba17-86ce-4573-8199-011457e2c391\" (UID: \"7a1fba17-86ce-4573-8199-011457e2c391\") " Sep 29 09:49:27 crc kubenswrapper[4991]: I0929 09:49:27.566762 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kgwx\" (UniqueName: \"kubernetes.io/projected/7a1fba17-86ce-4573-8199-011457e2c391-kube-api-access-7kgwx\") pod \"7a1fba17-86ce-4573-8199-011457e2c391\" (UID: \"7a1fba17-86ce-4573-8199-011457e2c391\") " Sep 29 09:49:27 crc kubenswrapper[4991]: I0929 09:49:27.567907 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a1fba17-86ce-4573-8199-011457e2c391-bundle" (OuterVolumeSpecName: "bundle") pod "7a1fba17-86ce-4573-8199-011457e2c391" (UID: "7a1fba17-86ce-4573-8199-011457e2c391"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:49:27 crc kubenswrapper[4991]: I0929 09:49:27.577509 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a1fba17-86ce-4573-8199-011457e2c391-kube-api-access-7kgwx" (OuterVolumeSpecName: "kube-api-access-7kgwx") pod "7a1fba17-86ce-4573-8199-011457e2c391" (UID: "7a1fba17-86ce-4573-8199-011457e2c391"). InnerVolumeSpecName "kube-api-access-7kgwx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:49:27 crc kubenswrapper[4991]: I0929 09:49:27.597096 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a1fba17-86ce-4573-8199-011457e2c391-util" (OuterVolumeSpecName: "util") pod "7a1fba17-86ce-4573-8199-011457e2c391" (UID: "7a1fba17-86ce-4573-8199-011457e2c391"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:49:27 crc kubenswrapper[4991]: I0929 09:49:27.667898 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67372b02-362b-4463-a437-359886d014af-bundle\") pod \"67372b02-362b-4463-a437-359886d014af\" (UID: \"67372b02-362b-4463-a437-359886d014af\") " Sep 29 09:49:27 crc kubenswrapper[4991]: I0929 09:49:27.668746 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67372b02-362b-4463-a437-359886d014af-bundle" (OuterVolumeSpecName: "bundle") pod "67372b02-362b-4463-a437-359886d014af" (UID: "67372b02-362b-4463-a437-359886d014af"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:49:27 crc kubenswrapper[4991]: I0929 09:49:27.668811 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfvn2\" (UniqueName: \"kubernetes.io/projected/67372b02-362b-4463-a437-359886d014af-kube-api-access-vfvn2\") pod \"67372b02-362b-4463-a437-359886d014af\" (UID: \"67372b02-362b-4463-a437-359886d014af\") " Sep 29 09:49:27 crc kubenswrapper[4991]: I0929 09:49:27.668918 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67372b02-362b-4463-a437-359886d014af-util\") pod \"67372b02-362b-4463-a437-359886d014af\" (UID: \"67372b02-362b-4463-a437-359886d014af\") " Sep 29 09:49:27 crc kubenswrapper[4991]: I0929 09:49:27.669356 4991 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a1fba17-86ce-4573-8199-011457e2c391-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:49:27 crc kubenswrapper[4991]: I0929 09:49:27.669373 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kgwx\" (UniqueName: \"kubernetes.io/projected/7a1fba17-86ce-4573-8199-011457e2c391-kube-api-access-7kgwx\") on node \"crc\" DevicePath \"\"" Sep 29 09:49:27 crc kubenswrapper[4991]: I0929 09:49:27.669382 4991 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67372b02-362b-4463-a437-359886d014af-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:49:27 crc kubenswrapper[4991]: I0929 09:49:27.669392 4991 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a1fba17-86ce-4573-8199-011457e2c391-util\") on node \"crc\" DevicePath \"\"" Sep 29 09:49:27 crc kubenswrapper[4991]: I0929 09:49:27.672131 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67372b02-362b-4463-a437-359886d014af-kube-api-access-vfvn2" (OuterVolumeSpecName: "kube-api-access-vfvn2") pod "67372b02-362b-4463-a437-359886d014af" (UID: "67372b02-362b-4463-a437-359886d014af"). InnerVolumeSpecName "kube-api-access-vfvn2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:49:27 crc kubenswrapper[4991]: I0929 09:49:27.679698 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67372b02-362b-4463-a437-359886d014af-util" (OuterVolumeSpecName: "util") pod "67372b02-362b-4463-a437-359886d014af" (UID: "67372b02-362b-4463-a437-359886d014af"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:49:27 crc kubenswrapper[4991]: I0929 09:49:27.771034 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfvn2\" (UniqueName: \"kubernetes.io/projected/67372b02-362b-4463-a437-359886d014af-kube-api-access-vfvn2\") on node \"crc\" DevicePath \"\"" Sep 29 09:49:27 crc kubenswrapper[4991]: I0929 09:49:27.771084 4991 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67372b02-362b-4463-a437-359886d014af-util\") on node \"crc\" DevicePath \"\"" Sep 29 09:49:28 crc kubenswrapper[4991]: I0929 09:49:28.269938 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x" event={"ID":"7a1fba17-86ce-4573-8199-011457e2c391","Type":"ContainerDied","Data":"92bb1c401b7c0a6538b503361c147f86d467aed0666dc4f1d12808dc52126f87"} Sep 29 09:49:28 crc kubenswrapper[4991]: I0929 09:49:28.270038 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92bb1c401b7c0a6538b503361c147f86d467aed0666dc4f1d12808dc52126f87" Sep 29 09:49:28 crc kubenswrapper[4991]: I0929 09:49:28.269992 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x" Sep 29 09:49:28 crc kubenswrapper[4991]: I0929 09:49:28.273251 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm" event={"ID":"67372b02-362b-4463-a437-359886d014af","Type":"ContainerDied","Data":"a88813f639efe059bae921452ccd1baf6843eb141339d0ec356b5ca97a622253"} Sep 29 09:49:28 crc kubenswrapper[4991]: I0929 09:49:28.273296 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a88813f639efe059bae921452ccd1baf6843eb141339d0ec356b5ca97a622253" Sep 29 09:49:28 crc kubenswrapper[4991]: I0929 09:49:28.273429 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm" Sep 29 09:49:33 crc kubenswrapper[4991]: I0929 09:49:33.280558 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-fcc886d58-s6dct"] Sep 29 09:49:33 crc kubenswrapper[4991]: E0929 09:49:33.282281 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67372b02-362b-4463-a437-359886d014af" containerName="pull" Sep 29 09:49:33 crc kubenswrapper[4991]: I0929 09:49:33.282346 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="67372b02-362b-4463-a437-359886d014af" containerName="pull" Sep 29 09:49:33 crc kubenswrapper[4991]: E0929 09:49:33.282424 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a1fba17-86ce-4573-8199-011457e2c391" containerName="util" Sep 29 09:49:33 crc kubenswrapper[4991]: I0929 09:49:33.282470 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a1fba17-86ce-4573-8199-011457e2c391" containerName="util" Sep 29 09:49:33 crc kubenswrapper[4991]: E0929 09:49:33.282520 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a1fba17-86ce-4573-8199-011457e2c391" containerName="extract" Sep 29 09:49:33 crc kubenswrapper[4991]: I0929 09:49:33.282578 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a1fba17-86ce-4573-8199-011457e2c391" containerName="extract" Sep 29 09:49:33 crc kubenswrapper[4991]: E0929 09:49:33.282660 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67372b02-362b-4463-a437-359886d014af" containerName="util" Sep 29 09:49:33 crc kubenswrapper[4991]: I0929 09:49:33.282720 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="67372b02-362b-4463-a437-359886d014af" containerName="util" Sep 29 09:49:33 crc kubenswrapper[4991]: E0929 09:49:33.282787 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a1fba17-86ce-4573-8199-011457e2c391" containerName="pull" Sep 29 09:49:33 crc kubenswrapper[4991]: I0929 09:49:33.282843 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a1fba17-86ce-4573-8199-011457e2c391" containerName="pull" Sep 29 09:49:33 crc kubenswrapper[4991]: E0929 09:49:33.282909 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67372b02-362b-4463-a437-359886d014af" containerName="extract" Sep 29 09:49:33 crc kubenswrapper[4991]: I0929 09:49:33.282985 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="67372b02-362b-4463-a437-359886d014af" containerName="extract" Sep 29 09:49:33 crc kubenswrapper[4991]: I0929 09:49:33.283203 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="67372b02-362b-4463-a437-359886d014af" containerName="extract" Sep 29 09:49:33 crc kubenswrapper[4991]: I0929 09:49:33.283270 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a1fba17-86ce-4573-8199-011457e2c391" containerName="extract" Sep 29 09:49:33 crc kubenswrapper[4991]: I0929 09:49:33.283915 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-fcc886d58-s6dct" Sep 29 09:49:33 crc kubenswrapper[4991]: I0929 09:49:33.288060 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Sep 29 09:49:33 crc kubenswrapper[4991]: I0929 09:49:33.288218 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Sep 29 09:49:33 crc kubenswrapper[4991]: I0929 09:49:33.288342 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-g62sc" Sep 29 09:49:33 crc kubenswrapper[4991]: I0929 09:49:33.295386 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-fcc886d58-s6dct"] Sep 29 09:49:33 crc kubenswrapper[4991]: I0929 09:49:33.354406 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfpfm\" (UniqueName: \"kubernetes.io/projected/53f2a78a-64f1-4029-9994-dbfef9f66476-kube-api-access-zfpfm\") pod \"cluster-logging-operator-fcc886d58-s6dct\" (UID: \"53f2a78a-64f1-4029-9994-dbfef9f66476\") " pod="openshift-logging/cluster-logging-operator-fcc886d58-s6dct" Sep 29 09:49:33 crc kubenswrapper[4991]: I0929 09:49:33.455692 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfpfm\" (UniqueName: \"kubernetes.io/projected/53f2a78a-64f1-4029-9994-dbfef9f66476-kube-api-access-zfpfm\") pod \"cluster-logging-operator-fcc886d58-s6dct\" (UID: \"53f2a78a-64f1-4029-9994-dbfef9f66476\") " pod="openshift-logging/cluster-logging-operator-fcc886d58-s6dct" Sep 29 09:49:33 crc kubenswrapper[4991]: I0929 09:49:33.490043 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfpfm\" (UniqueName: \"kubernetes.io/projected/53f2a78a-64f1-4029-9994-dbfef9f66476-kube-api-access-zfpfm\") pod \"cluster-logging-operator-fcc886d58-s6dct\" (UID: \"53f2a78a-64f1-4029-9994-dbfef9f66476\") " pod="openshift-logging/cluster-logging-operator-fcc886d58-s6dct" Sep 29 09:49:33 crc kubenswrapper[4991]: I0929 09:49:33.601988 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-fcc886d58-s6dct" Sep 29 09:49:34 crc kubenswrapper[4991]: I0929 09:49:34.094765 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-fcc886d58-s6dct"] Sep 29 09:49:34 crc kubenswrapper[4991]: W0929 09:49:34.108986 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53f2a78a_64f1_4029_9994_dbfef9f66476.slice/crio-ed307e67dec336b9a76c8fda39f8ff68bcb0d0238087c4e42cfe660c6f58308c WatchSource:0}: Error finding container ed307e67dec336b9a76c8fda39f8ff68bcb0d0238087c4e42cfe660c6f58308c: Status 404 returned error can't find the container with id ed307e67dec336b9a76c8fda39f8ff68bcb0d0238087c4e42cfe660c6f58308c Sep 29 09:49:34 crc kubenswrapper[4991]: I0929 09:49:34.307119 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-fcc886d58-s6dct" event={"ID":"53f2a78a-64f1-4029-9994-dbfef9f66476","Type":"ContainerStarted","Data":"ed307e67dec336b9a76c8fda39f8ff68bcb0d0238087c4e42cfe660c6f58308c"} Sep 29 09:49:40 crc kubenswrapper[4991]: I0929 09:49:40.370753 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-fcc886d58-s6dct" event={"ID":"53f2a78a-64f1-4029-9994-dbfef9f66476","Type":"ContainerStarted","Data":"b7bf1f81bcc1a8b7c317bc7f78d9c6d39b60db006cad1650c1c09bd7e1e24b76"} Sep 29 09:49:40 crc kubenswrapper[4991]: I0929 09:49:40.387966 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-fcc886d58-s6dct" podStartSLOduration=1.718367073 podStartE2EDuration="7.387933866s" podCreationTimestamp="2025-09-29 09:49:33 +0000 UTC" firstStartedPulling="2025-09-29 09:49:34.112741984 +0000 UTC m=+709.968670022" lastFinishedPulling="2025-09-29 09:49:39.782308767 +0000 UTC m=+715.638236815" observedRunningTime="2025-09-29 09:49:40.385294295 +0000 UTC m=+716.241222323" watchObservedRunningTime="2025-09-29 09:49:40.387933866 +0000 UTC m=+716.243861894" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.247276 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj"] Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.248264 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.256727 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.257106 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-h6n22" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.260445 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.260639 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.265005 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.272997 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj"] Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.273442 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8dd3043-cf98-4a8f-b175-ff3b2ad25381-webhook-cert\") pod \"loki-operator-controller-manager-54cf9db6d-sdkfj\" (UID: \"d8dd3043-cf98-4a8f-b175-ff3b2ad25381\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.273636 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d8dd3043-cf98-4a8f-b175-ff3b2ad25381-manager-config\") pod \"loki-operator-controller-manager-54cf9db6d-sdkfj\" (UID: \"d8dd3043-cf98-4a8f-b175-ff3b2ad25381\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.273741 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5k69\" (UniqueName: \"kubernetes.io/projected/d8dd3043-cf98-4a8f-b175-ff3b2ad25381-kube-api-access-j5k69\") pod \"loki-operator-controller-manager-54cf9db6d-sdkfj\" (UID: \"d8dd3043-cf98-4a8f-b175-ff3b2ad25381\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.273824 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8dd3043-cf98-4a8f-b175-ff3b2ad25381-apiservice-cert\") pod \"loki-operator-controller-manager-54cf9db6d-sdkfj\" (UID: \"d8dd3043-cf98-4a8f-b175-ff3b2ad25381\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.273922 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8dd3043-cf98-4a8f-b175-ff3b2ad25381-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-54cf9db6d-sdkfj\" (UID: 
\"d8dd3043-cf98-4a8f-b175-ff3b2ad25381\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.277625 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.375168 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8dd3043-cf98-4a8f-b175-ff3b2ad25381-webhook-cert\") pod \"loki-operator-controller-manager-54cf9db6d-sdkfj\" (UID: \"d8dd3043-cf98-4a8f-b175-ff3b2ad25381\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.375482 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d8dd3043-cf98-4a8f-b175-ff3b2ad25381-manager-config\") pod \"loki-operator-controller-manager-54cf9db6d-sdkfj\" (UID: \"d8dd3043-cf98-4a8f-b175-ff3b2ad25381\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.375522 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5k69\" (UniqueName: \"kubernetes.io/projected/d8dd3043-cf98-4a8f-b175-ff3b2ad25381-kube-api-access-j5k69\") pod \"loki-operator-controller-manager-54cf9db6d-sdkfj\" (UID: \"d8dd3043-cf98-4a8f-b175-ff3b2ad25381\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.375554 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8dd3043-cf98-4a8f-b175-ff3b2ad25381-apiservice-cert\") pod \"loki-operator-controller-manager-54cf9db6d-sdkfj\" (UID: \"d8dd3043-cf98-4a8f-b175-ff3b2ad25381\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.375591 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8dd3043-cf98-4a8f-b175-ff3b2ad25381-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-54cf9db6d-sdkfj\" (UID: \"d8dd3043-cf98-4a8f-b175-ff3b2ad25381\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.377562 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d8dd3043-cf98-4a8f-b175-ff3b2ad25381-manager-config\") pod \"loki-operator-controller-manager-54cf9db6d-sdkfj\" (UID: \"d8dd3043-cf98-4a8f-b175-ff3b2ad25381\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.382014 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8dd3043-cf98-4a8f-b175-ff3b2ad25381-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-54cf9db6d-sdkfj\" (UID: \"d8dd3043-cf98-4a8f-b175-ff3b2ad25381\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.382418 4991 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8dd3043-cf98-4a8f-b175-ff3b2ad25381-webhook-cert\") pod \"loki-operator-controller-manager-54cf9db6d-sdkfj\" (UID: \"d8dd3043-cf98-4a8f-b175-ff3b2ad25381\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.398599 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5k69\" (UniqueName: \"kubernetes.io/projected/d8dd3043-cf98-4a8f-b175-ff3b2ad25381-kube-api-access-j5k69\") pod \"loki-operator-controller-manager-54cf9db6d-sdkfj\" (UID: \"d8dd3043-cf98-4a8f-b175-ff3b2ad25381\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.400636 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8dd3043-cf98-4a8f-b175-ff3b2ad25381-apiservice-cert\") pod \"loki-operator-controller-manager-54cf9db6d-sdkfj\" (UID: \"d8dd3043-cf98-4a8f-b175-ff3b2ad25381\") " pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" Sep 29 09:49:41 crc kubenswrapper[4991]: I0929 09:49:41.610997 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" Sep 29 09:49:42 crc kubenswrapper[4991]: I0929 09:49:42.209154 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj"] Sep 29 09:49:42 crc kubenswrapper[4991]: W0929 09:49:42.214981 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8dd3043_cf98_4a8f_b175_ff3b2ad25381.slice/crio-4f18a5e02fdf0b6f15f9fb0b6c0b254df8ffe0dfe7da441c3be1df91cfc18197 WatchSource:0}: Error finding container 4f18a5e02fdf0b6f15f9fb0b6c0b254df8ffe0dfe7da441c3be1df91cfc18197: Status 404 returned error can't find the container with id 4f18a5e02fdf0b6f15f9fb0b6c0b254df8ffe0dfe7da441c3be1df91cfc18197 Sep 29 09:49:42 crc kubenswrapper[4991]: I0929 09:49:42.383774 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" event={"ID":"d8dd3043-cf98-4a8f-b175-ff3b2ad25381","Type":"ContainerStarted","Data":"4f18a5e02fdf0b6f15f9fb0b6c0b254df8ffe0dfe7da441c3be1df91cfc18197"} Sep 29 09:49:45 crc kubenswrapper[4991]: I0929 09:49:45.409416 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" event={"ID":"d8dd3043-cf98-4a8f-b175-ff3b2ad25381","Type":"ContainerStarted","Data":"12ca38001c127e7c27ce8a016b8f6a1f665e32d91f8aaba2a3b32bfe5ceaceb0"} Sep 29 09:49:52 crc kubenswrapper[4991]: I0929 09:49:52.468022 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" event={"ID":"d8dd3043-cf98-4a8f-b175-ff3b2ad25381","Type":"ContainerStarted","Data":"0a0cff7dd7910232f1fb054cc50708996d419384a4f61f958443a73a4b5a2c92"} Sep 29 09:49:52 crc kubenswrapper[4991]: I0929 09:49:52.468557 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" Sep 29 09:49:52 crc kubenswrapper[4991]: I0929 09:49:52.497007 4991 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" Sep 29 09:49:52 crc kubenswrapper[4991]: I0929 09:49:52.503355 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-54cf9db6d-sdkfj" podStartSLOduration=1.8997065260000001 podStartE2EDuration="11.503340701s" podCreationTimestamp="2025-09-29 09:49:41 +0000 UTC" firstStartedPulling="2025-09-29 09:49:42.218768093 +0000 UTC m=+718.074696121" lastFinishedPulling="2025-09-29 09:49:51.822402258 +0000 UTC m=+727.678330296" observedRunningTime="2025-09-29 09:49:52.498977374 +0000 UTC m=+728.354905402" watchObservedRunningTime="2025-09-29 09:49:52.503340701 +0000 UTC m=+728.359268729" Sep 29 09:49:57 crc kubenswrapper[4991]: I0929 09:49:57.822403 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Sep 29 09:49:57 crc kubenswrapper[4991]: I0929 09:49:57.824908 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Sep 29 09:49:57 crc kubenswrapper[4991]: I0929 09:49:57.828684 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Sep 29 09:49:57 crc kubenswrapper[4991]: I0929 09:49:57.828755 4991 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-n26tl" Sep 29 09:49:57 crc kubenswrapper[4991]: I0929 09:49:57.829161 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Sep 29 09:49:57 crc kubenswrapper[4991]: I0929 09:49:57.839924 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Sep 29 09:49:57 crc kubenswrapper[4991]: I0929 09:49:57.927407 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfdxw\" (UniqueName: \"kubernetes.io/projected/14af6fa1-8e29-4416-bc42-b0e87f59e819-kube-api-access-bfdxw\") pod \"minio\" (UID: \"14af6fa1-8e29-4416-bc42-b0e87f59e819\") " pod="minio-dev/minio" Sep 29 09:49:57 crc kubenswrapper[4991]: I0929 09:49:57.927524 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f287834a-0dfe-43f2-ac9f-eb70ba7def58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f287834a-0dfe-43f2-ac9f-eb70ba7def58\") pod \"minio\" (UID: \"14af6fa1-8e29-4416-bc42-b0e87f59e819\") " pod="minio-dev/minio" Sep 29 09:49:58 crc kubenswrapper[4991]: I0929 09:49:58.029280 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f287834a-0dfe-43f2-ac9f-eb70ba7def58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f287834a-0dfe-43f2-ac9f-eb70ba7def58\") pod \"minio\" (UID: \"14af6fa1-8e29-4416-bc42-b0e87f59e819\") " pod="minio-dev/minio" Sep 29 09:49:58 crc kubenswrapper[4991]: I0929 09:49:58.029550 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfdxw\" (UniqueName: \"kubernetes.io/projected/14af6fa1-8e29-4416-bc42-b0e87f59e819-kube-api-access-bfdxw\") pod \"minio\" (UID: \"14af6fa1-8e29-4416-bc42-b0e87f59e819\") " pod="minio-dev/minio" Sep 29 09:49:58 crc kubenswrapper[4991]: I0929 09:49:58.032350 4991 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Sep 29 09:49:58 crc kubenswrapper[4991]: I0929 09:49:58.032394 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f287834a-0dfe-43f2-ac9f-eb70ba7def58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f287834a-0dfe-43f2-ac9f-eb70ba7def58\") pod \"minio\" (UID: \"14af6fa1-8e29-4416-bc42-b0e87f59e819\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7d36bfa3e39b8d828aea28a2c1dc9896f9c86f68d4d51eac0d16ef33fe6402fb/globalmount\"" pod="minio-dev/minio" Sep 29 09:49:58 crc kubenswrapper[4991]: I0929 09:49:58.049713 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfdxw\" (UniqueName: \"kubernetes.io/projected/14af6fa1-8e29-4416-bc42-b0e87f59e819-kube-api-access-bfdxw\") pod \"minio\" (UID: \"14af6fa1-8e29-4416-bc42-b0e87f59e819\") " pod="minio-dev/minio" Sep 29 09:49:58 crc kubenswrapper[4991]: I0929 09:49:58.060817 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f287834a-0dfe-43f2-ac9f-eb70ba7def58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f287834a-0dfe-43f2-ac9f-eb70ba7def58\") pod \"minio\" (UID: \"14af6fa1-8e29-4416-bc42-b0e87f59e819\") " pod="minio-dev/minio" Sep 29 09:49:58 crc kubenswrapper[4991]: I0929 09:49:58.156487 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Sep 29 09:49:58 crc kubenswrapper[4991]: I0929 09:49:58.640249 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Sep 29 09:49:58 crc kubenswrapper[4991]: W0929 09:49:58.649670 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14af6fa1_8e29_4416_bc42_b0e87f59e819.slice/crio-85ac12a4f8af340400879f77b81ead7e4f467238aac302b8103841e79a24fcde WatchSource:0}: Error finding container 85ac12a4f8af340400879f77b81ead7e4f467238aac302b8103841e79a24fcde: Status 404 returned error can't find the container with id 85ac12a4f8af340400879f77b81ead7e4f467238aac302b8103841e79a24fcde Sep 29 09:49:59 crc kubenswrapper[4991]: I0929 09:49:59.513070 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"14af6fa1-8e29-4416-bc42-b0e87f59e819","Type":"ContainerStarted","Data":"85ac12a4f8af340400879f77b81ead7e4f467238aac302b8103841e79a24fcde"} Sep 29 09:50:02 crc kubenswrapper[4991]: I0929 09:50:02.535018 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"14af6fa1-8e29-4416-bc42-b0e87f59e819","Type":"ContainerStarted","Data":"f2f3ea7a7bf58dc92b61fb58bf3911b3daf055a61c0227503b8b5bfcdf8da8d2"} Sep 29 09:50:02 crc kubenswrapper[4991]: I0929 09:50:02.551270 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=5.731854204 podStartE2EDuration="8.551257391s" podCreationTimestamp="2025-09-29 09:49:54 +0000 UTC" firstStartedPulling="2025-09-29 09:49:58.651623244 +0000 UTC m=+734.507551272" lastFinishedPulling="2025-09-29 09:50:01.471026431 +0000 UTC m=+737.326954459" observedRunningTime="2025-09-29 09:50:02.550321496 +0000 UTC m=+738.406249564" watchObservedRunningTime="2025-09-29 09:50:02.551257391 +0000 UTC m=+738.407185419" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.231767 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-67c9b4c785-f87nt"] Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.237483 
4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.242529 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.242749 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.243025 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.243313 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.243355 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-lth4p" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.262875 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-67c9b4c785-f87nt"] Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.361734 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7166108-60e3-494c-8800-aab366098632-config\") pod \"logging-loki-distributor-67c9b4c785-f87nt\" (UID: \"c7166108-60e3-494c-8800-aab366098632\") " pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.361844 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/c7166108-60e3-494c-8800-aab366098632-logging-loki-distributor-http\") pod \"logging-loki-distributor-67c9b4c785-f87nt\" (UID: \"c7166108-60e3-494c-8800-aab366098632\") " pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.361880 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7166108-60e3-494c-8800-aab366098632-logging-loki-ca-bundle\") pod \"logging-loki-distributor-67c9b4c785-f87nt\" (UID: \"c7166108-60e3-494c-8800-aab366098632\") " pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.361922 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/c7166108-60e3-494c-8800-aab366098632-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-67c9b4c785-f87nt\" (UID: \"c7166108-60e3-494c-8800-aab366098632\") " pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.361963 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pqzw\" (UniqueName: \"kubernetes.io/projected/c7166108-60e3-494c-8800-aab366098632-kube-api-access-5pqzw\") pod \"logging-loki-distributor-67c9b4c785-f87nt\" (UID: \"c7166108-60e3-494c-8800-aab366098632\") " pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 
09:50:06.428842 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-7454676c57-qpzss"] Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.430403 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.432819 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.433005 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.433476 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.440502 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-7454676c57-qpzss"] Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.463544 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7166108-60e3-494c-8800-aab366098632-config\") pod \"logging-loki-distributor-67c9b4c785-f87nt\" (UID: \"c7166108-60e3-494c-8800-aab366098632\") " pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.463640 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/c7166108-60e3-494c-8800-aab366098632-logging-loki-distributor-http\") pod \"logging-loki-distributor-67c9b4c785-f87nt\" (UID: \"c7166108-60e3-494c-8800-aab366098632\") " pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.463679 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7166108-60e3-494c-8800-aab366098632-logging-loki-ca-bundle\") pod \"logging-loki-distributor-67c9b4c785-f87nt\" (UID: \"c7166108-60e3-494c-8800-aab366098632\") " pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.463724 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/c7166108-60e3-494c-8800-aab366098632-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-67c9b4c785-f87nt\" (UID: \"c7166108-60e3-494c-8800-aab366098632\") " pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.463753 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pqzw\" (UniqueName: \"kubernetes.io/projected/c7166108-60e3-494c-8800-aab366098632-kube-api-access-5pqzw\") pod \"logging-loki-distributor-67c9b4c785-f87nt\" (UID: \"c7166108-60e3-494c-8800-aab366098632\") " pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.465239 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7166108-60e3-494c-8800-aab366098632-logging-loki-ca-bundle\") pod \"logging-loki-distributor-67c9b4c785-f87nt\" (UID: 
\"c7166108-60e3-494c-8800-aab366098632\") " pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.465305 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7166108-60e3-494c-8800-aab366098632-config\") pod \"logging-loki-distributor-67c9b4c785-f87nt\" (UID: \"c7166108-60e3-494c-8800-aab366098632\") " pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.470351 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/c7166108-60e3-494c-8800-aab366098632-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-67c9b4c785-f87nt\" (UID: \"c7166108-60e3-494c-8800-aab366098632\") " pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.484125 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst"] Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.486283 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.487346 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/c7166108-60e3-494c-8800-aab366098632-logging-loki-distributor-http\") pod \"logging-loki-distributor-67c9b4c785-f87nt\" (UID: \"c7166108-60e3-494c-8800-aab366098632\") " pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.492255 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst"] Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.492549 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.492592 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.517589 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pqzw\" (UniqueName: \"kubernetes.io/projected/c7166108-60e3-494c-8800-aab366098632-kube-api-access-5pqzw\") pod \"logging-loki-distributor-67c9b4c785-f87nt\" (UID: \"c7166108-60e3-494c-8800-aab366098632\") " pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.558910 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.564972 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16402e1f-4ee8-4845-964b-e7d8d153cd1c-logging-loki-ca-bundle\") pod \"logging-loki-querier-7454676c57-qpzss\" (UID: \"16402e1f-4ee8-4845-964b-e7d8d153cd1c\") " pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.565038 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/16402e1f-4ee8-4845-964b-e7d8d153cd1c-logging-loki-querier-http\") pod \"logging-loki-querier-7454676c57-qpzss\" (UID: \"16402e1f-4ee8-4845-964b-e7d8d153cd1c\") " pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.565061 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/122b9557-bde1-41fd-aaed-f9120e9b00c2-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6b467cdd84-2pdst\" (UID: \"122b9557-bde1-41fd-aaed-f9120e9b00c2\") " pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.565104 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/16402e1f-4ee8-4845-964b-e7d8d153cd1c-logging-loki-s3\") pod \"logging-loki-querier-7454676c57-qpzss\" (UID: \"16402e1f-4ee8-4845-964b-e7d8d153cd1c\") " pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.565237 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/16402e1f-4ee8-4845-964b-e7d8d153cd1c-logging-loki-querier-grpc\") pod \"logging-loki-querier-7454676c57-qpzss\" (UID: \"16402e1f-4ee8-4845-964b-e7d8d153cd1c\") " pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.565271 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb267\" (UniqueName: \"kubernetes.io/projected/122b9557-bde1-41fd-aaed-f9120e9b00c2-kube-api-access-vb267\") pod \"logging-loki-query-frontend-6b467cdd84-2pdst\" (UID: \"122b9557-bde1-41fd-aaed-f9120e9b00c2\") " pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.565485 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/122b9557-bde1-41fd-aaed-f9120e9b00c2-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6b467cdd84-2pdst\" (UID: \"122b9557-bde1-41fd-aaed-f9120e9b00c2\") " pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.565575 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/122b9557-bde1-41fd-aaed-f9120e9b00c2-config\") pod \"logging-loki-query-frontend-6b467cdd84-2pdst\" (UID: \"122b9557-bde1-41fd-aaed-f9120e9b00c2\") " pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.565595 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mwsz\" (UniqueName: \"kubernetes.io/projected/16402e1f-4ee8-4845-964b-e7d8d153cd1c-kube-api-access-6mwsz\") pod \"logging-loki-querier-7454676c57-qpzss\" (UID: \"16402e1f-4ee8-4845-964b-e7d8d153cd1c\") " pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.565624 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/122b9557-bde1-41fd-aaed-f9120e9b00c2-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6b467cdd84-2pdst\" (UID: \"122b9557-bde1-41fd-aaed-f9120e9b00c2\") " pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.565654 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16402e1f-4ee8-4845-964b-e7d8d153cd1c-config\") pod \"logging-loki-querier-7454676c57-qpzss\" (UID: \"16402e1f-4ee8-4845-964b-e7d8d153cd1c\") " pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.609995 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx"] Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.611190 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.614601 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.614906 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.615000 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.615205 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.615344 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.620006 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n"] Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.622149 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.633423 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-6fmhd" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.637064 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx"] Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.639607 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n"] Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.666925 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/122b9557-bde1-41fd-aaed-f9120e9b00c2-config\") pod \"logging-loki-query-frontend-6b467cdd84-2pdst\" (UID: \"122b9557-bde1-41fd-aaed-f9120e9b00c2\") " pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.666988 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mwsz\" (UniqueName: \"kubernetes.io/projected/16402e1f-4ee8-4845-964b-e7d8d153cd1c-kube-api-access-6mwsz\") pod \"logging-loki-querier-7454676c57-qpzss\" (UID: \"16402e1f-4ee8-4845-964b-e7d8d153cd1c\") " pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.667019 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/122b9557-bde1-41fd-aaed-f9120e9b00c2-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6b467cdd84-2pdst\" (UID: \"122b9557-bde1-41fd-aaed-f9120e9b00c2\") " pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.667050 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/6d85d2ab-609c-4e04-a608-b247d5d61ddc-rbac\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.667076 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16402e1f-4ee8-4845-964b-e7d8d153cd1c-config\") pod \"logging-loki-querier-7454676c57-qpzss\" (UID: \"16402e1f-4ee8-4845-964b-e7d8d153cd1c\") " pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.667104 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/6d85d2ab-609c-4e04-a608-b247d5d61ddc-tenants\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.667130 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16402e1f-4ee8-4845-964b-e7d8d153cd1c-logging-loki-ca-bundle\") pod \"logging-loki-querier-7454676c57-qpzss\" (UID: \"16402e1f-4ee8-4845-964b-e7d8d153cd1c\") " 
pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.667170 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/16402e1f-4ee8-4845-964b-e7d8d153cd1c-logging-loki-querier-http\") pod \"logging-loki-querier-7454676c57-qpzss\" (UID: \"16402e1f-4ee8-4845-964b-e7d8d153cd1c\") " pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.667193 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/122b9557-bde1-41fd-aaed-f9120e9b00c2-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6b467cdd84-2pdst\" (UID: \"122b9557-bde1-41fd-aaed-f9120e9b00c2\") " pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.667226 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/6d85d2ab-609c-4e04-a608-b247d5d61ddc-tls-secret\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.667250 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d85d2ab-609c-4e04-a608-b247d5d61ddc-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.667275 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/6d85d2ab-609c-4e04-a608-b247d5d61ddc-lokistack-gateway\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.667304 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/16402e1f-4ee8-4845-964b-e7d8d153cd1c-logging-loki-s3\") pod \"logging-loki-querier-7454676c57-qpzss\" (UID: \"16402e1f-4ee8-4845-964b-e7d8d153cd1c\") " pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.667339 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/16402e1f-4ee8-4845-964b-e7d8d153cd1c-logging-loki-querier-grpc\") pod \"logging-loki-querier-7454676c57-qpzss\" (UID: \"16402e1f-4ee8-4845-964b-e7d8d153cd1c\") " pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.667377 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/6d85d2ab-609c-4e04-a608-b247d5d61ddc-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " 
pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.667400 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d85d2ab-609c-4e04-a608-b247d5d61ddc-logging-loki-ca-bundle\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.667423 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb267\" (UniqueName: \"kubernetes.io/projected/122b9557-bde1-41fd-aaed-f9120e9b00c2-kube-api-access-vb267\") pod \"logging-loki-query-frontend-6b467cdd84-2pdst\" (UID: \"122b9557-bde1-41fd-aaed-f9120e9b00c2\") " pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.667468 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmshv\" (UniqueName: \"kubernetes.io/projected/6d85d2ab-609c-4e04-a608-b247d5d61ddc-kube-api-access-gmshv\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.667520 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/122b9557-bde1-41fd-aaed-f9120e9b00c2-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6b467cdd84-2pdst\" (UID: \"122b9557-bde1-41fd-aaed-f9120e9b00c2\") " pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.669404 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/122b9557-bde1-41fd-aaed-f9120e9b00c2-config\") pod \"logging-loki-query-frontend-6b467cdd84-2pdst\" (UID: \"122b9557-bde1-41fd-aaed-f9120e9b00c2\") " pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.670603 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/122b9557-bde1-41fd-aaed-f9120e9b00c2-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6b467cdd84-2pdst\" (UID: \"122b9557-bde1-41fd-aaed-f9120e9b00c2\") " pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.671859 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16402e1f-4ee8-4845-964b-e7d8d153cd1c-logging-loki-ca-bundle\") pod \"logging-loki-querier-7454676c57-qpzss\" (UID: \"16402e1f-4ee8-4845-964b-e7d8d153cd1c\") " pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.672258 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16402e1f-4ee8-4845-964b-e7d8d153cd1c-config\") pod \"logging-loki-querier-7454676c57-qpzss\" (UID: \"16402e1f-4ee8-4845-964b-e7d8d153cd1c\") " pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:06 crc 
kubenswrapper[4991]: I0929 09:50:06.672449 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/122b9557-bde1-41fd-aaed-f9120e9b00c2-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6b467cdd84-2pdst\" (UID: \"122b9557-bde1-41fd-aaed-f9120e9b00c2\") " pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.674240 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/122b9557-bde1-41fd-aaed-f9120e9b00c2-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6b467cdd84-2pdst\" (UID: \"122b9557-bde1-41fd-aaed-f9120e9b00c2\") " pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.682644 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/16402e1f-4ee8-4845-964b-e7d8d153cd1c-logging-loki-s3\") pod \"logging-loki-querier-7454676c57-qpzss\" (UID: \"16402e1f-4ee8-4845-964b-e7d8d153cd1c\") " pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.683463 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/16402e1f-4ee8-4845-964b-e7d8d153cd1c-logging-loki-querier-http\") pod \"logging-loki-querier-7454676c57-qpzss\" (UID: \"16402e1f-4ee8-4845-964b-e7d8d153cd1c\") " pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.684031 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/16402e1f-4ee8-4845-964b-e7d8d153cd1c-logging-loki-querier-grpc\") pod \"logging-loki-querier-7454676c57-qpzss\" (UID: \"16402e1f-4ee8-4845-964b-e7d8d153cd1c\") " pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.702194 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mwsz\" (UniqueName: \"kubernetes.io/projected/16402e1f-4ee8-4845-964b-e7d8d153cd1c-kube-api-access-6mwsz\") pod \"logging-loki-querier-7454676c57-qpzss\" (UID: \"16402e1f-4ee8-4845-964b-e7d8d153cd1c\") " pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.704170 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb267\" (UniqueName: \"kubernetes.io/projected/122b9557-bde1-41fd-aaed-f9120e9b00c2-kube-api-access-vb267\") pod \"logging-loki-query-frontend-6b467cdd84-2pdst\" (UID: \"122b9557-bde1-41fd-aaed-f9120e9b00c2\") " pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.756444 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.770610 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/6d85d2ab-609c-4e04-a608-b247d5d61ddc-tls-secret\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.770651 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d85d2ab-609c-4e04-a608-b247d5d61ddc-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.770675 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/1c21cca2-5bc1-4174-a45c-f928fa2c2588-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.770695 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/6d85d2ab-609c-4e04-a608-b247d5d61ddc-lokistack-gateway\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.770724 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/6d85d2ab-609c-4e04-a608-b247d5d61ddc-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.770741 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d85d2ab-609c-4e04-a608-b247d5d61ddc-logging-loki-ca-bundle\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.770769 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c21cca2-5bc1-4174-a45c-f928fa2c2588-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.770785 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/1c21cca2-5bc1-4174-a45c-f928fa2c2588-lokistack-gateway\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " 
pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.770811 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmshv\" (UniqueName: \"kubernetes.io/projected/6d85d2ab-609c-4e04-a608-b247d5d61ddc-kube-api-access-gmshv\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.770828 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c21cca2-5bc1-4174-a45c-f928fa2c2588-logging-loki-ca-bundle\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.770864 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz58n\" (UniqueName: \"kubernetes.io/projected/1c21cca2-5bc1-4174-a45c-f928fa2c2588-kube-api-access-sz58n\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.770885 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/6d85d2ab-609c-4e04-a608-b247d5d61ddc-rbac\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.770900 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/1c21cca2-5bc1-4174-a45c-f928fa2c2588-rbac\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.770918 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/1c21cca2-5bc1-4174-a45c-f928fa2c2588-tenants\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.770938 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/6d85d2ab-609c-4e04-a608-b247d5d61ddc-tenants\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.772499 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1c21cca2-5bc1-4174-a45c-f928fa2c2588-tls-secret\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.777205 4991 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d85d2ab-609c-4e04-a608-b247d5d61ddc-logging-loki-ca-bundle\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.778880 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/6d85d2ab-609c-4e04-a608-b247d5d61ddc-tls-secret\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.780187 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/6d85d2ab-609c-4e04-a608-b247d5d61ddc-rbac\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.784555 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/6d85d2ab-609c-4e04-a608-b247d5d61ddc-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.785334 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/6d85d2ab-609c-4e04-a608-b247d5d61ddc-lokistack-gateway\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.785470 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d85d2ab-609c-4e04-a608-b247d5d61ddc-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.785480 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/6d85d2ab-609c-4e04-a608-b247d5d61ddc-tenants\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.795386 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmshv\" (UniqueName: \"kubernetes.io/projected/6d85d2ab-609c-4e04-a608-b247d5d61ddc-kube-api-access-gmshv\") pod \"logging-loki-gateway-78756f9d5f-b9nwx\" (UID: \"6d85d2ab-609c-4e04-a608-b247d5d61ddc\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.854909 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.874099 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz58n\" (UniqueName: \"kubernetes.io/projected/1c21cca2-5bc1-4174-a45c-f928fa2c2588-kube-api-access-sz58n\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.874162 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/1c21cca2-5bc1-4174-a45c-f928fa2c2588-rbac\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.874183 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/1c21cca2-5bc1-4174-a45c-f928fa2c2588-tenants\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.874220 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1c21cca2-5bc1-4174-a45c-f928fa2c2588-tls-secret\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.874369 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/1c21cca2-5bc1-4174-a45c-f928fa2c2588-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.874428 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c21cca2-5bc1-4174-a45c-f928fa2c2588-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.874450 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/1c21cca2-5bc1-4174-a45c-f928fa2c2588-lokistack-gateway\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.874486 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c21cca2-5bc1-4174-a45c-f928fa2c2588-logging-loki-ca-bundle\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.876034 4991 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/1c21cca2-5bc1-4174-a45c-f928fa2c2588-lokistack-gateway\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.876275 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/1c21cca2-5bc1-4174-a45c-f928fa2c2588-rbac\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.876373 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c21cca2-5bc1-4174-a45c-f928fa2c2588-logging-loki-ca-bundle\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.876494 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c21cca2-5bc1-4174-a45c-f928fa2c2588-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.881752 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/1c21cca2-5bc1-4174-a45c-f928fa2c2588-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.881782 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/1c21cca2-5bc1-4174-a45c-f928fa2c2588-tenants\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.882259 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1c21cca2-5bc1-4174-a45c-f928fa2c2588-tls-secret\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.893134 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz58n\" (UniqueName: \"kubernetes.io/projected/1c21cca2-5bc1-4174-a45c-f928fa2c2588-kube-api-access-sz58n\") pod \"logging-loki-gateway-78756f9d5f-2vl5n\" (UID: \"1c21cca2-5bc1-4174-a45c-f928fa2c2588\") " pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:06 crc kubenswrapper[4991]: I0929 09:50:06.983039 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.002364 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.015831 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-7454676c57-qpzss"] Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.073222 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-67c9b4c785-f87nt"] Sep 29 09:50:07 crc kubenswrapper[4991]: W0929 09:50:07.077361 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7166108_60e3_494c_8800_aab366098632.slice/crio-d399d827e00eefb4e1071cfaf53e0f4c7a18f14e8d84c99d273180adf1d31f56 WatchSource:0}: Error finding container d399d827e00eefb4e1071cfaf53e0f4c7a18f14e8d84c99d273180adf1d31f56: Status 404 returned error can't find the container with id d399d827e00eefb4e1071cfaf53e0f4c7a18f14e8d84c99d273180adf1d31f56 Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.133578 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst"] Sep 29 09:50:07 crc kubenswrapper[4991]: W0929 09:50:07.142467 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod122b9557_bde1_41fd_aaed_f9120e9b00c2.slice/crio-2cd99553700d8ebb5157f5b1cfe67a02668ee5147120b60a94d805e50ed57881 WatchSource:0}: Error finding container 2cd99553700d8ebb5157f5b1cfe67a02668ee5147120b60a94d805e50ed57881: Status 404 returned error can't find the container with id 2cd99553700d8ebb5157f5b1cfe67a02668ee5147120b60a94d805e50ed57881 Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.416626 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.417398 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.419993 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.424058 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.436300 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.487635 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.488180 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.488243 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.488317 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ca32d2e3-cf6f-4541-8fc1-54ba8163023a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ca32d2e3-cf6f-4541-8fc1-54ba8163023a\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.488396 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.488440 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj4zx\" (UniqueName: \"kubernetes.io/projected/6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8-kube-api-access-zj4zx\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.488493 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8-config\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.494291 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-84133852-4230-482d-9616-2b9069d17966\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84133852-4230-482d-9616-2b9069d17966\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.494411 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.500195 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.504373 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.504514 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.504536 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.508705 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx"] Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.556842 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.564429 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.572459 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.572631 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.572715 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Sep 29 09:50:07 crc kubenswrapper[4991]: W0929 09:50:07.574728 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c21cca2_5bc1_4174_a45c_f928fa2c2588.slice/crio-a015e98b31498271f060d1ed998f771c901a935192c939f4bb09c9236ec3b306 WatchSource:0}: Error finding container a015e98b31498271f060d1ed998f771c901a935192c939f4bb09c9236ec3b306: Status 404 returned error can't find the container with id a015e98b31498271f060d1ed998f771c901a935192c939f4bb09c9236ec3b306 Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.577508 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" event={"ID":"122b9557-bde1-41fd-aaed-f9120e9b00c2","Type":"ContainerStarted","Data":"2cd99553700d8ebb5157f5b1cfe67a02668ee5147120b60a94d805e50ed57881"} Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.580714 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" event={"ID":"c7166108-60e3-494c-8800-aab366098632","Type":"ContainerStarted","Data":"d399d827e00eefb4e1071cfaf53e0f4c7a18f14e8d84c99d273180adf1d31f56"} Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.582540 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" event={"ID":"6d85d2ab-609c-4e04-a608-b247d5d61ddc","Type":"ContainerStarted","Data":"218d6a2a0346c3ac321c93af3e0213a942e1e08c64dbeb12a89c64b5db5eabc0"} Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.595196 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n"] Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.595779 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" event={"ID":"16402e1f-4ee8-4845-964b-e7d8d153cd1c","Type":"ContainerStarted","Data":"b4075892ac00b41ee731319486bc66fc43808ced5a9b055882951da5f0e2eb60"} Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.596653 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.596773 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.596862 4991 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdccd5ff-76dc-4bd3-97a7-5855c701d49e-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") " pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.596991 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ca32d2e3-cf6f-4541-8fc1-54ba8163023a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ca32d2e3-cf6f-4541-8fc1-54ba8163023a\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.597106 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.597144 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj4zx\" (UniqueName: \"kubernetes.io/projected/6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8-kube-api-access-zj4zx\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.597178 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8-config\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.597201 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62rxv\" (UniqueName: \"kubernetes.io/projected/cdccd5ff-76dc-4bd3-97a7-5855c701d49e-kube-api-access-62rxv\") pod \"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") " pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.597663 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdccd5ff-76dc-4bd3-97a7-5855c701d49e-config\") pod \"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") " pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.597703 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/cdccd5ff-76dc-4bd3-97a7-5855c701d49e-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") " pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.597729 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c0ba69a1-20f1-4907-b908-353d847afe4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c0ba69a1-20f1-4907-b908-353d847afe4b\") pod 
\"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") " pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.597767 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/cdccd5ff-76dc-4bd3-97a7-5855c701d49e-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") " pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.597801 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-84133852-4230-482d-9616-2b9069d17966\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84133852-4230-482d-9616-2b9069d17966\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.597827 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cdccd5ff-76dc-4bd3-97a7-5855c701d49e-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") " pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.597862 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.600292 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8-config\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.603685 4991 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.603737 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ca32d2e3-cf6f-4541-8fc1-54ba8163023a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ca32d2e3-cf6f-4541-8fc1-54ba8163023a\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/19617f7f1422083166b60a7cb8bce50dfbdb5fe1c5392ba08eba63fe13792295/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.603974 4991 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.604066 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-84133852-4230-482d-9616-2b9069d17966\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84133852-4230-482d-9616-2b9069d17966\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/948757e8242538848aa9f27fb833bb0f609dc73d4f43c38d40fac5dbfea15fe4/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.604689 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.604982 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.605818 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.613563 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.614895 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj4zx\" (UniqueName: \"kubernetes.io/projected/6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8-kube-api-access-zj4zx\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.635130 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ca32d2e3-cf6f-4541-8fc1-54ba8163023a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ca32d2e3-cf6f-4541-8fc1-54ba8163023a\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.638877 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-84133852-4230-482d-9616-2b9069d17966\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84133852-4230-482d-9616-2b9069d17966\") pod \"logging-loki-ingester-0\" (UID: \"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8\") " pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.699396 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-62rxv\" (UniqueName: \"kubernetes.io/projected/cdccd5ff-76dc-4bd3-97a7-5855c701d49e-kube-api-access-62rxv\") pod \"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") " pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.699458 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdccd5ff-76dc-4bd3-97a7-5855c701d49e-config\") pod \"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") " pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.699484 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/cdccd5ff-76dc-4bd3-97a7-5855c701d49e-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") " pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.700113 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqtpq\" (UniqueName: \"kubernetes.io/projected/4d978c8f-2c3b-4824-8c63-8f5eb56c92f8-kube-api-access-xqtpq\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") " pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.700177 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4d978c8f-2c3b-4824-8c63-8f5eb56c92f8-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") " pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.700258 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdccd5ff-76dc-4bd3-97a7-5855c701d49e-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") " pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.700360 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4d978c8f-2c3b-4824-8c63-8f5eb56c92f8-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") " pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.700467 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d978c8f-2c3b-4824-8c63-8f5eb56c92f8-config\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") " pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.700510 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c0ba69a1-20f1-4907-b908-353d847afe4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c0ba69a1-20f1-4907-b908-353d847afe4b\") pod \"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") " 
pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.700530 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d978c8f-2c3b-4824-8c63-8f5eb56c92f8-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") " pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.700555 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/cdccd5ff-76dc-4bd3-97a7-5855c701d49e-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") " pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.700579 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-65b36ca9-68d0-496d-a4e4-5a5cfe956571\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65b36ca9-68d0-496d-a4e4-5a5cfe956571\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") " pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.700599 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4d978c8f-2c3b-4824-8c63-8f5eb56c92f8-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") " pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.700622 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cdccd5ff-76dc-4bd3-97a7-5855c701d49e-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") " pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.701229 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdccd5ff-76dc-4bd3-97a7-5855c701d49e-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") " pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.701516 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdccd5ff-76dc-4bd3-97a7-5855c701d49e-config\") pod \"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") " pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.702857 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/cdccd5ff-76dc-4bd3-97a7-5855c701d49e-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") " pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.704415 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: 
\"kubernetes.io/secret/cdccd5ff-76dc-4bd3-97a7-5855c701d49e-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") " pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.704469 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cdccd5ff-76dc-4bd3-97a7-5855c701d49e-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") " pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.704561 4991 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.704588 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c0ba69a1-20f1-4907-b908-353d847afe4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c0ba69a1-20f1-4907-b908-353d847afe4b\") pod \"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/678e9c82dc923dc22673886ecc2fbafc8d200eb16e1ee176681aaebd20ccabd9/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.717295 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62rxv\" (UniqueName: \"kubernetes.io/projected/cdccd5ff-76dc-4bd3-97a7-5855c701d49e-kube-api-access-62rxv\") pod \"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") " pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.726833 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c0ba69a1-20f1-4907-b908-353d847afe4b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c0ba69a1-20f1-4907-b908-353d847afe4b\") pod \"logging-loki-compactor-0\" (UID: \"cdccd5ff-76dc-4bd3-97a7-5855c701d49e\") " pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.735775 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.802621 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d978c8f-2c3b-4824-8c63-8f5eb56c92f8-config\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") " pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.802680 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d978c8f-2c3b-4824-8c63-8f5eb56c92f8-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") " pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.802736 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-65b36ca9-68d0-496d-a4e4-5a5cfe956571\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65b36ca9-68d0-496d-a4e4-5a5cfe956571\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") " pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.802766 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4d978c8f-2c3b-4824-8c63-8f5eb56c92f8-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") " pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.802986 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqtpq\" (UniqueName: \"kubernetes.io/projected/4d978c8f-2c3b-4824-8c63-8f5eb56c92f8-kube-api-access-xqtpq\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") " pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.803043 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4d978c8f-2c3b-4824-8c63-8f5eb56c92f8-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") " pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.803150 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4d978c8f-2c3b-4824-8c63-8f5eb56c92f8-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") " pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.803557 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d978c8f-2c3b-4824-8c63-8f5eb56c92f8-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") " pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.803735 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4d978c8f-2c3b-4824-8c63-8f5eb56c92f8-config\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") " pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.808259 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4d978c8f-2c3b-4824-8c63-8f5eb56c92f8-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") " pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.808294 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4d978c8f-2c3b-4824-8c63-8f5eb56c92f8-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") " pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.808322 4991 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.808348 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-65b36ca9-68d0-496d-a4e4-5a5cfe956571\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65b36ca9-68d0-496d-a4e4-5a5cfe956571\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/15650529bb26666bcb7c1daabc32ec5de29c5602b2fe97ced6e36576db789266/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.808859 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4d978c8f-2c3b-4824-8c63-8f5eb56c92f8-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") " pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.827820 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqtpq\" (UniqueName: \"kubernetes.io/projected/4d978c8f-2c3b-4824-8c63-8f5eb56c92f8-kube-api-access-xqtpq\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") " pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.861928 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-65b36ca9-68d0-496d-a4e4-5a5cfe956571\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65b36ca9-68d0-496d-a4e4-5a5cfe956571\") pod \"logging-loki-index-gateway-0\" (UID: \"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8\") " pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.968914 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.976088 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:07 crc kubenswrapper[4991]: W0929 09:50:07.976506 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c0b1bf5_f8e1_49e8_bc70_75fbcf826bc8.slice/crio-2081c8c5c9cdf5744f89c8752210a3c5270773fb6b7bc93ecbaad56905b77f01 WatchSource:0}: Error finding container 2081c8c5c9cdf5744f89c8752210a3c5270773fb6b7bc93ecbaad56905b77f01: Status 404 returned error can't find the container with id 2081c8c5c9cdf5744f89c8752210a3c5270773fb6b7bc93ecbaad56905b77f01 Sep 29 09:50:07 crc kubenswrapper[4991]: I0929 09:50:07.982874 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:08 crc kubenswrapper[4991]: I0929 09:50:08.449407 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Sep 29 09:50:08 crc kubenswrapper[4991]: I0929 09:50:08.457134 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Sep 29 09:50:08 crc kubenswrapper[4991]: W0929 09:50:08.484398 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdccd5ff_76dc_4bd3_97a7_5855c701d49e.slice/crio-7cab60623cc07c9cf785fd6a7a2d2fb9b27be951235ebf96bdb89cc4323d81f9 WatchSource:0}: Error finding container 7cab60623cc07c9cf785fd6a7a2d2fb9b27be951235ebf96bdb89cc4323d81f9: Status 404 returned error can't find the container with id 7cab60623cc07c9cf785fd6a7a2d2fb9b27be951235ebf96bdb89cc4323d81f9 Sep 29 09:50:08 crc kubenswrapper[4991]: I0929 09:50:08.612910 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8","Type":"ContainerStarted","Data":"2081c8c5c9cdf5744f89c8752210a3c5270773fb6b7bc93ecbaad56905b77f01"} Sep 29 09:50:08 crc kubenswrapper[4991]: I0929 09:50:08.618979 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8","Type":"ContainerStarted","Data":"401d05e04780c1f91f3493fce4feceb1efff4f23fb6c333e40f6600cce330c6b"} Sep 29 09:50:08 crc kubenswrapper[4991]: I0929 09:50:08.620994 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" event={"ID":"1c21cca2-5bc1-4174-a45c-f928fa2c2588","Type":"ContainerStarted","Data":"a015e98b31498271f060d1ed998f771c901a935192c939f4bb09c9236ec3b306"} Sep 29 09:50:08 crc kubenswrapper[4991]: I0929 09:50:08.626986 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"cdccd5ff-76dc-4bd3-97a7-5855c701d49e","Type":"ContainerStarted","Data":"7cab60623cc07c9cf785fd6a7a2d2fb9b27be951235ebf96bdb89cc4323d81f9"} Sep 29 09:50:12 crc kubenswrapper[4991]: I0929 09:50:12.665470 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"cdccd5ff-76dc-4bd3-97a7-5855c701d49e","Type":"ContainerStarted","Data":"cd90a14997faee030350b6ff890dc663d7d5a46b9d87a105794c8994d08d846d"} Sep 29 09:50:12 crc kubenswrapper[4991]: I0929 09:50:12.666110 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Sep 29 09:50:12 crc kubenswrapper[4991]: I0929 09:50:12.667260 4991 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8","Type":"ContainerStarted","Data":"1a1cdfc84832ef860d8a3344b80e7d7ee169820390517923a48cd346d72a97ac"} Sep 29 09:50:12 crc kubenswrapper[4991]: I0929 09:50:12.667373 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:50:12 crc kubenswrapper[4991]: I0929 09:50:12.668592 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" event={"ID":"6d85d2ab-609c-4e04-a608-b247d5d61ddc","Type":"ContainerStarted","Data":"cb91b0fba6b6569c14abab96be74cf88f6fd6c598dd1aa28f47e969884442d5d"} Sep 29 09:50:12 crc kubenswrapper[4991]: I0929 09:50:12.670203 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" event={"ID":"16402e1f-4ee8-4845-964b-e7d8d153cd1c","Type":"ContainerStarted","Data":"2cf4b8bdc8399512b58afc83cd181a81a6d6db66cc43b4c4eee41ee7eb476803"} Sep 29 09:50:12 crc kubenswrapper[4991]: I0929 09:50:12.670308 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:12 crc kubenswrapper[4991]: I0929 09:50:12.671433 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" event={"ID":"122b9557-bde1-41fd-aaed-f9120e9b00c2","Type":"ContainerStarted","Data":"32e331cb81b20a2ac1ad970062cb0f122af1e008d0fa329a846b14fb57d50867"} Sep 29 09:50:12 crc kubenswrapper[4991]: I0929 09:50:12.671967 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" Sep 29 09:50:12 crc kubenswrapper[4991]: I0929 09:50:12.673008 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" event={"ID":"c7166108-60e3-494c-8800-aab366098632","Type":"ContainerStarted","Data":"87ff7d11d84b1ed98d0ac7cb562a8c09274bb91fdacb71747c00772223d9387c"} Sep 29 09:50:12 crc kubenswrapper[4991]: I0929 09:50:12.673429 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" Sep 29 09:50:12 crc kubenswrapper[4991]: I0929 09:50:12.674431 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"4d978c8f-2c3b-4824-8c63-8f5eb56c92f8","Type":"ContainerStarted","Data":"a50c8e3324f5acae05f338aaf0dcc982d1e84767924e8b13796162715fd53af1"} Sep 29 09:50:12 crc kubenswrapper[4991]: I0929 09:50:12.674496 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Sep 29 09:50:12 crc kubenswrapper[4991]: I0929 09:50:12.675668 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" event={"ID":"1c21cca2-5bc1-4174-a45c-f928fa2c2588","Type":"ContainerStarted","Data":"c6fc20f2dc0f173a317bba4888006678f4672b931b991d54aabb372ea115c965"} Sep 29 09:50:12 crc kubenswrapper[4991]: I0929 09:50:12.690903 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.194416985 podStartE2EDuration="6.690885251s" podCreationTimestamp="2025-09-29 09:50:06 +0000 UTC" firstStartedPulling="2025-09-29 
09:50:08.489425832 +0000 UTC m=+744.345353870" lastFinishedPulling="2025-09-29 09:50:11.985894108 +0000 UTC m=+747.841822136" observedRunningTime="2025-09-29 09:50:12.686014811 +0000 UTC m=+748.541942869" watchObservedRunningTime="2025-09-29 09:50:12.690885251 +0000 UTC m=+748.546813279"
Sep 29 09:50:12 crc kubenswrapper[4991]: I0929 09:50:12.734537 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" podStartSLOduration=1.8964747210000001 podStartE2EDuration="6.734521872s" podCreationTimestamp="2025-09-29 09:50:06 +0000 UTC" firstStartedPulling="2025-09-29 09:50:07.145923516 +0000 UTC m=+743.001851544" lastFinishedPulling="2025-09-29 09:50:11.983970647 +0000 UTC m=+747.839898695" observedRunningTime="2025-09-29 09:50:12.731387579 +0000 UTC m=+748.587315607" watchObservedRunningTime="2025-09-29 09:50:12.734521872 +0000 UTC m=+748.590449900"
Sep 29 09:50:12 crc kubenswrapper[4991]: I0929 09:50:12.736748 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.40202377 podStartE2EDuration="6.736732411s" podCreationTimestamp="2025-09-29 09:50:06 +0000 UTC" firstStartedPulling="2025-09-29 09:50:08.528837791 +0000 UTC m=+744.384765819" lastFinishedPulling="2025-09-29 09:50:11.863546422 +0000 UTC m=+747.719474460" observedRunningTime="2025-09-29 09:50:12.715736052 +0000 UTC m=+748.571664080" watchObservedRunningTime="2025-09-29 09:50:12.736732411 +0000 UTC m=+748.592660439"
Sep 29 09:50:12 crc kubenswrapper[4991]: I0929 09:50:12.749475 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" podStartSLOduration=1.801332299 podStartE2EDuration="6.74945775s" podCreationTimestamp="2025-09-29 09:50:06 +0000 UTC" firstStartedPulling="2025-09-29 09:50:07.043473049 +0000 UTC m=+742.899401077" lastFinishedPulling="2025-09-29 09:50:11.9915985 +0000 UTC m=+747.847526528" observedRunningTime="2025-09-29 09:50:12.748228157 +0000 UTC m=+748.604156185" watchObservedRunningTime="2025-09-29 09:50:12.74945775 +0000 UTC m=+748.605385778"
Sep 29 09:50:12 crc kubenswrapper[4991]: I0929 09:50:12.781579 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=2.7876284780000002 podStartE2EDuration="6.781557884s" podCreationTimestamp="2025-09-29 09:50:06 +0000 UTC" firstStartedPulling="2025-09-29 09:50:07.98324127 +0000 UTC m=+743.839169298" lastFinishedPulling="2025-09-29 09:50:11.977170676 +0000 UTC m=+747.833098704" observedRunningTime="2025-09-29 09:50:12.766422582 +0000 UTC m=+748.622350620" watchObservedRunningTime="2025-09-29 09:50:12.781557884 +0000 UTC m=+748.637485912"
Sep 29 09:50:12 crc kubenswrapper[4991]: I0929 09:50:12.804672 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" podStartSLOduration=1.920611574 podStartE2EDuration="6.804650879s" podCreationTimestamp="2025-09-29 09:50:06 +0000 UTC" firstStartedPulling="2025-09-29 09:50:07.101022861 +0000 UTC m=+742.956950889" lastFinishedPulling="2025-09-29 09:50:11.985062166 +0000 UTC m=+747.840990194" observedRunningTime="2025-09-29 09:50:12.801112395 +0000 UTC m=+748.657040433" watchObservedRunningTime="2025-09-29 09:50:12.804650879 +0000 UTC m=+748.660578907"
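
The duration fields in the pod_startup_latency_tracker records above fit together: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that same span with the image-pull window (lastFinishedPulling minus firstStartedPulling, taken on the monotonic m=+ clock) subtracted; the numbers for every pod above bear this out. A small Go check against the logging-loki-compactor-0 record, with the relationship inferred from these values rather than taken from kubelet documentation:

    package main

    import (
        "fmt"
        "time"
    )

    func must(t time.Time, err error) time.Time {
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        const layout = "2006-01-02 15:04:05 -0700 MST" // fractional seconds are accepted when parsing
        created := must(time.Parse(layout, "2025-09-29 09:50:06 +0000 UTC"))            // podCreationTimestamp
        observed := must(time.Parse(layout, "2025-09-29 09:50:12.690885251 +0000 UTC")) // watchObservedRunningTime

        // Image-pull window from the monotonic m=+ offsets (seconds).
        pull := time.Duration((747.841822136 - 744.345353870) * float64(time.Second))

        e2e := observed.Sub(created) // podStartE2EDuration
        slo := e2e - pull            // podStartSLOduration
        fmt.Println(e2e, slo)
    }

Run as-is this prints 6.690885251s 3.194416985s, matching the logged podStartE2EDuration and podStartSLOduration for the compactor exactly.
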
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dmrh5"] Sep 29 09:50:14 crc kubenswrapper[4991]: I0929 09:50:14.352423 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" podUID="00e00a41-260b-45b5-a54d-c18ded602aa6" containerName="controller-manager" containerID="cri-o://f292fba7b9dfbeadaee37803acb95a823c678a961175a39b150d9ceceacbb1d9" gracePeriod=30 Sep 29 09:50:14 crc kubenswrapper[4991]: I0929 09:50:14.370043 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d"] Sep 29 09:50:14 crc kubenswrapper[4991]: I0929 09:50:14.370291 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" podUID="259d860c-d08f-4753-9e8b-f059fea942f5" containerName="route-controller-manager" containerID="cri-o://71cea7e136b315a0dc5ed00046b2fa0d482b54a71a8a00d022ec5bc110e7ae20" gracePeriod=30 Sep 29 09:50:14 crc kubenswrapper[4991]: I0929 09:50:14.703907 4991 generic.go:334] "Generic (PLEG): container finished" podID="259d860c-d08f-4753-9e8b-f059fea942f5" containerID="71cea7e136b315a0dc5ed00046b2fa0d482b54a71a8a00d022ec5bc110e7ae20" exitCode=0 Sep 29 09:50:14 crc kubenswrapper[4991]: I0929 09:50:14.704004 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" event={"ID":"259d860c-d08f-4753-9e8b-f059fea942f5","Type":"ContainerDied","Data":"71cea7e136b315a0dc5ed00046b2fa0d482b54a71a8a00d022ec5bc110e7ae20"} Sep 29 09:50:14 crc kubenswrapper[4991]: I0929 09:50:14.706880 4991 generic.go:334] "Generic (PLEG): container finished" podID="00e00a41-260b-45b5-a54d-c18ded602aa6" containerID="f292fba7b9dfbeadaee37803acb95a823c678a961175a39b150d9ceceacbb1d9" exitCode=0 Sep 29 09:50:14 crc kubenswrapper[4991]: I0929 09:50:14.707016 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" event={"ID":"00e00a41-260b-45b5-a54d-c18ded602aa6","Type":"ContainerDied","Data":"f292fba7b9dfbeadaee37803acb95a823c678a961175a39b150d9ceceacbb1d9"} Sep 29 09:50:14 crc kubenswrapper[4991]: I0929 09:50:14.854337 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" Sep 29 09:50:14 crc kubenswrapper[4991]: I0929 09:50:14.889056 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" Sep 29 09:50:14 crc kubenswrapper[4991]: I0929 09:50:14.946081 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/259d860c-d08f-4753-9e8b-f059fea942f5-client-ca\") pod \"259d860c-d08f-4753-9e8b-f059fea942f5\" (UID: \"259d860c-d08f-4753-9e8b-f059fea942f5\") " Sep 29 09:50:14 crc kubenswrapper[4991]: I0929 09:50:14.946158 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/259d860c-d08f-4753-9e8b-f059fea942f5-serving-cert\") pod \"259d860c-d08f-4753-9e8b-f059fea942f5\" (UID: \"259d860c-d08f-4753-9e8b-f059fea942f5\") " Sep 29 09:50:14 crc kubenswrapper[4991]: I0929 09:50:14.946308 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/259d860c-d08f-4753-9e8b-f059fea942f5-config\") pod \"259d860c-d08f-4753-9e8b-f059fea942f5\" (UID: \"259d860c-d08f-4753-9e8b-f059fea942f5\") " Sep 29 09:50:14 crc kubenswrapper[4991]: I0929 09:50:14.946334 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt6xs\" (UniqueName: \"kubernetes.io/projected/259d860c-d08f-4753-9e8b-f059fea942f5-kube-api-access-vt6xs\") pod \"259d860c-d08f-4753-9e8b-f059fea942f5\" (UID: \"259d860c-d08f-4753-9e8b-f059fea942f5\") " Sep 29 09:50:14 crc kubenswrapper[4991]: I0929 09:50:14.949365 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/259d860c-d08f-4753-9e8b-f059fea942f5-client-ca" (OuterVolumeSpecName: "client-ca") pod "259d860c-d08f-4753-9e8b-f059fea942f5" (UID: "259d860c-d08f-4753-9e8b-f059fea942f5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:50:14 crc kubenswrapper[4991]: I0929 09:50:14.950607 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/259d860c-d08f-4753-9e8b-f059fea942f5-config" (OuterVolumeSpecName: "config") pod "259d860c-d08f-4753-9e8b-f059fea942f5" (UID: "259d860c-d08f-4753-9e8b-f059fea942f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:50:14 crc kubenswrapper[4991]: I0929 09:50:14.954555 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/259d860c-d08f-4753-9e8b-f059fea942f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "259d860c-d08f-4753-9e8b-f059fea942f5" (UID: "259d860c-d08f-4753-9e8b-f059fea942f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:50:14 crc kubenswrapper[4991]: I0929 09:50:14.954757 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/259d860c-d08f-4753-9e8b-f059fea942f5-kube-api-access-vt6xs" (OuterVolumeSpecName: "kube-api-access-vt6xs") pod "259d860c-d08f-4753-9e8b-f059fea942f5" (UID: "259d860c-d08f-4753-9e8b-f059fea942f5"). InnerVolumeSpecName "kube-api-access-vt6xs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.048043 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzxs6\" (UniqueName: \"kubernetes.io/projected/00e00a41-260b-45b5-a54d-c18ded602aa6-kube-api-access-rzxs6\") pod \"00e00a41-260b-45b5-a54d-c18ded602aa6\" (UID: \"00e00a41-260b-45b5-a54d-c18ded602aa6\") " Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.048097 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00e00a41-260b-45b5-a54d-c18ded602aa6-serving-cert\") pod \"00e00a41-260b-45b5-a54d-c18ded602aa6\" (UID: \"00e00a41-260b-45b5-a54d-c18ded602aa6\") " Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.048146 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00e00a41-260b-45b5-a54d-c18ded602aa6-client-ca\") pod \"00e00a41-260b-45b5-a54d-c18ded602aa6\" (UID: \"00e00a41-260b-45b5-a54d-c18ded602aa6\") " Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.048243 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00e00a41-260b-45b5-a54d-c18ded602aa6-proxy-ca-bundles\") pod \"00e00a41-260b-45b5-a54d-c18ded602aa6\" (UID: \"00e00a41-260b-45b5-a54d-c18ded602aa6\") " Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.048267 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00e00a41-260b-45b5-a54d-c18ded602aa6-config\") pod \"00e00a41-260b-45b5-a54d-c18ded602aa6\" (UID: \"00e00a41-260b-45b5-a54d-c18ded602aa6\") " Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.048618 4991 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/259d860c-d08f-4753-9e8b-f059fea942f5-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.048637 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt6xs\" (UniqueName: \"kubernetes.io/projected/259d860c-d08f-4753-9e8b-f059fea942f5-kube-api-access-vt6xs\") on node \"crc\" DevicePath \"\"" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.048647 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/259d860c-d08f-4753-9e8b-f059fea942f5-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.048657 4991 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/259d860c-d08f-4753-9e8b-f059fea942f5-client-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.049500 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00e00a41-260b-45b5-a54d-c18ded602aa6-config" (OuterVolumeSpecName: "config") pod "00e00a41-260b-45b5-a54d-c18ded602aa6" (UID: "00e00a41-260b-45b5-a54d-c18ded602aa6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.050776 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00e00a41-260b-45b5-a54d-c18ded602aa6-client-ca" (OuterVolumeSpecName: "client-ca") pod "00e00a41-260b-45b5-a54d-c18ded602aa6" (UID: "00e00a41-260b-45b5-a54d-c18ded602aa6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.050898 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00e00a41-260b-45b5-a54d-c18ded602aa6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "00e00a41-260b-45b5-a54d-c18ded602aa6" (UID: "00e00a41-260b-45b5-a54d-c18ded602aa6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.053070 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00e00a41-260b-45b5-a54d-c18ded602aa6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "00e00a41-260b-45b5-a54d-c18ded602aa6" (UID: "00e00a41-260b-45b5-a54d-c18ded602aa6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.053154 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00e00a41-260b-45b5-a54d-c18ded602aa6-kube-api-access-rzxs6" (OuterVolumeSpecName: "kube-api-access-rzxs6") pod "00e00a41-260b-45b5-a54d-c18ded602aa6" (UID: "00e00a41-260b-45b5-a54d-c18ded602aa6"). InnerVolumeSpecName "kube-api-access-rzxs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.150065 4991 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00e00a41-260b-45b5-a54d-c18ded602aa6-client-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.150107 4991 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00e00a41-260b-45b5-a54d-c18ded602aa6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.150120 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00e00a41-260b-45b5-a54d-c18ded602aa6-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.150131 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzxs6\" (UniqueName: \"kubernetes.io/projected/00e00a41-260b-45b5-a54d-c18ded602aa6-kube-api-access-rzxs6\") on node \"crc\" DevicePath \"\"" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.150143 4991 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00e00a41-260b-45b5-a54d-c18ded602aa6-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.723403 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d" event={"ID":"259d860c-d08f-4753-9e8b-f059fea942f5","Type":"ContainerDied","Data":"8ea788aaa0b3d480d665da9101a54e8da321132758dfc0ce0096ae87736a9f70"} Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 
09:50:15.723447 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d"
Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.723461 4991 scope.go:117] "RemoveContainer" containerID="71cea7e136b315a0dc5ed00046b2fa0d482b54a71a8a00d022ec5bc110e7ae20"
Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.727280 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5" event={"ID":"00e00a41-260b-45b5-a54d-c18ded602aa6","Type":"ContainerDied","Data":"796f903293b43b4ab720878567e63b3d37e4bcf7f014995899e0b8e6064111f2"}
Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.727371 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dmrh5"
Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.750723 4991 scope.go:117] "RemoveContainer" containerID="f292fba7b9dfbeadaee37803acb95a823c678a961175a39b150d9ceceacbb1d9"
Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.781686 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d"]
Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.791293 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q6v2d"]
Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.797388 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dmrh5"]
Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.803085 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dmrh5"]
Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.834449 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f776c5567-72c8c"]
Sep 29 09:50:15 crc kubenswrapper[4991]: E0929 09:50:15.834774 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00e00a41-260b-45b5-a54d-c18ded602aa6" containerName="controller-manager"
Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.834792 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="00e00a41-260b-45b5-a54d-c18ded602aa6" containerName="controller-manager"
Sep 29 09:50:15 crc kubenswrapper[4991]: E0929 09:50:15.834821 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="259d860c-d08f-4753-9e8b-f059fea942f5" containerName="route-controller-manager"
Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.834828 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="259d860c-d08f-4753-9e8b-f059fea942f5" containerName="route-controller-manager"
Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.834966 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="259d860c-d08f-4753-9e8b-f059fea942f5" containerName="route-controller-manager"
Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.835006 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="00e00a41-260b-45b5-a54d-c18ded602aa6" containerName="controller-manager"
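
The E-level cpu_manager.go lines here are routine cleanup rather than failures: admitting the replacement controller-manager pod triggers a sweep that drops CPU- and memory-manager state for containers whose pods were just deleted above. A rough Go sketch of that kind of sweep; staleStateSweep, assignments and activePods are hypothetical stand-ins, not kubelet's actual state API:

    package main

    import "fmt"

    // staleStateSweep drops per-container assignments whose pod UID is no
    // longer among the active pods, echoing the RemoveStaleState log lines.
    func staleStateSweep(assignments map[string][]string, activePods map[string]bool) {
        for podUID, containers := range assignments {
            if activePods[podUID] {
                continue // pod still exists; keep its assignments
            }
            for _, name := range containers {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
            }
            delete(assignments, podUID) // deleting while ranging is safe in Go
        }
    }

    func main() {
        // The two pods deleted just above; neither is active any more.
        assignments := map[string][]string{
            "00e00a41-260b-45b5-a54d-c18ded602aa6": {"controller-manager"},
            "259d860c-d08f-4753-9e8b-f059fea942f5": {"route-controller-manager"},
        }
        staleStateSweep(assignments, map[string]bool{})
    }
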
Need to start a new one" pod="openshift-controller-manager/controller-manager-f776c5567-72c8c" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.839631 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.839851 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.839874 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.840453 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.840617 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.840728 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.840887 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq"] Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.842675 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.847913 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.850744 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq"] Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.851854 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.851983 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.852101 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.852270 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.852617 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.852737 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.880776 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f776c5567-72c8c"] Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.961602 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f216bcb1-c260-4362-b2a5-1f4cfdb72202-config\") pod \"controller-manager-f776c5567-72c8c\" (UID: \"f216bcb1-c260-4362-b2a5-1f4cfdb72202\") " pod="openshift-controller-manager/controller-manager-f776c5567-72c8c" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.961679 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/171e5edf-f171-4efc-a233-1e9d599ce0bb-serving-cert\") pod \"route-controller-manager-7b598f4d67-4rsqq\" (UID: \"171e5edf-f171-4efc-a233-1e9d599ce0bb\") " pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.961721 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/171e5edf-f171-4efc-a233-1e9d599ce0bb-client-ca\") pod \"route-controller-manager-7b598f4d67-4rsqq\" (UID: \"171e5edf-f171-4efc-a233-1e9d599ce0bb\") " pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.961831 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f216bcb1-c260-4362-b2a5-1f4cfdb72202-serving-cert\") pod \"controller-manager-f776c5567-72c8c\" (UID: \"f216bcb1-c260-4362-b2a5-1f4cfdb72202\") " pod="openshift-controller-manager/controller-manager-f776c5567-72c8c" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.961896 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f216bcb1-c260-4362-b2a5-1f4cfdb72202-proxy-ca-bundles\") pod \"controller-manager-f776c5567-72c8c\" (UID: \"f216bcb1-c260-4362-b2a5-1f4cfdb72202\") " pod="openshift-controller-manager/controller-manager-f776c5567-72c8c" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.961931 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f216bcb1-c260-4362-b2a5-1f4cfdb72202-client-ca\") pod \"controller-manager-f776c5567-72c8c\" (UID: \"f216bcb1-c260-4362-b2a5-1f4cfdb72202\") " pod="openshift-controller-manager/controller-manager-f776c5567-72c8c" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.961974 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-626vm\" (UniqueName: \"kubernetes.io/projected/171e5edf-f171-4efc-a233-1e9d599ce0bb-kube-api-access-626vm\") pod \"route-controller-manager-7b598f4d67-4rsqq\" (UID: \"171e5edf-f171-4efc-a233-1e9d599ce0bb\") " pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.962077 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/171e5edf-f171-4efc-a233-1e9d599ce0bb-config\") pod \"route-controller-manager-7b598f4d67-4rsqq\" (UID: \"171e5edf-f171-4efc-a233-1e9d599ce0bb\") " pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq" Sep 29 09:50:15 crc kubenswrapper[4991]: I0929 09:50:15.962130 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbczx\" 
(UniqueName: \"kubernetes.io/projected/f216bcb1-c260-4362-b2a5-1f4cfdb72202-kube-api-access-tbczx\") pod \"controller-manager-f776c5567-72c8c\" (UID: \"f216bcb1-c260-4362-b2a5-1f4cfdb72202\") " pod="openshift-controller-manager/controller-manager-f776c5567-72c8c" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.063240 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbczx\" (UniqueName: \"kubernetes.io/projected/f216bcb1-c260-4362-b2a5-1f4cfdb72202-kube-api-access-tbczx\") pod \"controller-manager-f776c5567-72c8c\" (UID: \"f216bcb1-c260-4362-b2a5-1f4cfdb72202\") " pod="openshift-controller-manager/controller-manager-f776c5567-72c8c" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.063528 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f216bcb1-c260-4362-b2a5-1f4cfdb72202-config\") pod \"controller-manager-f776c5567-72c8c\" (UID: \"f216bcb1-c260-4362-b2a5-1f4cfdb72202\") " pod="openshift-controller-manager/controller-manager-f776c5567-72c8c" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.063636 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/171e5edf-f171-4efc-a233-1e9d599ce0bb-serving-cert\") pod \"route-controller-manager-7b598f4d67-4rsqq\" (UID: \"171e5edf-f171-4efc-a233-1e9d599ce0bb\") " pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.063778 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/171e5edf-f171-4efc-a233-1e9d599ce0bb-client-ca\") pod \"route-controller-manager-7b598f4d67-4rsqq\" (UID: \"171e5edf-f171-4efc-a233-1e9d599ce0bb\") " pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.064500 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f216bcb1-c260-4362-b2a5-1f4cfdb72202-serving-cert\") pod \"controller-manager-f776c5567-72c8c\" (UID: \"f216bcb1-c260-4362-b2a5-1f4cfdb72202\") " pod="openshift-controller-manager/controller-manager-f776c5567-72c8c" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.064910 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f216bcb1-c260-4362-b2a5-1f4cfdb72202-proxy-ca-bundles\") pod \"controller-manager-f776c5567-72c8c\" (UID: \"f216bcb1-c260-4362-b2a5-1f4cfdb72202\") " pod="openshift-controller-manager/controller-manager-f776c5567-72c8c" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.065037 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f216bcb1-c260-4362-b2a5-1f4cfdb72202-client-ca\") pod \"controller-manager-f776c5567-72c8c\" (UID: \"f216bcb1-c260-4362-b2a5-1f4cfdb72202\") " pod="openshift-controller-manager/controller-manager-f776c5567-72c8c" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.065137 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-626vm\" (UniqueName: \"kubernetes.io/projected/171e5edf-f171-4efc-a233-1e9d599ce0bb-kube-api-access-626vm\") pod \"route-controller-manager-7b598f4d67-4rsqq\" (UID: 
\"171e5edf-f171-4efc-a233-1e9d599ce0bb\") " pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.064924 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f216bcb1-c260-4362-b2a5-1f4cfdb72202-config\") pod \"controller-manager-f776c5567-72c8c\" (UID: \"f216bcb1-c260-4362-b2a5-1f4cfdb72202\") " pod="openshift-controller-manager/controller-manager-f776c5567-72c8c" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.064682 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/171e5edf-f171-4efc-a233-1e9d599ce0bb-client-ca\") pod \"route-controller-manager-7b598f4d67-4rsqq\" (UID: \"171e5edf-f171-4efc-a233-1e9d599ce0bb\") " pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.065411 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/171e5edf-f171-4efc-a233-1e9d599ce0bb-config\") pod \"route-controller-manager-7b598f4d67-4rsqq\" (UID: \"171e5edf-f171-4efc-a233-1e9d599ce0bb\") " pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.065810 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f216bcb1-c260-4362-b2a5-1f4cfdb72202-client-ca\") pod \"controller-manager-f776c5567-72c8c\" (UID: \"f216bcb1-c260-4362-b2a5-1f4cfdb72202\") " pod="openshift-controller-manager/controller-manager-f776c5567-72c8c" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.065932 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f216bcb1-c260-4362-b2a5-1f4cfdb72202-proxy-ca-bundles\") pod \"controller-manager-f776c5567-72c8c\" (UID: \"f216bcb1-c260-4362-b2a5-1f4cfdb72202\") " pod="openshift-controller-manager/controller-manager-f776c5567-72c8c" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.066329 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/171e5edf-f171-4efc-a233-1e9d599ce0bb-config\") pod \"route-controller-manager-7b598f4d67-4rsqq\" (UID: \"171e5edf-f171-4efc-a233-1e9d599ce0bb\") " pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.069662 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f216bcb1-c260-4362-b2a5-1f4cfdb72202-serving-cert\") pod \"controller-manager-f776c5567-72c8c\" (UID: \"f216bcb1-c260-4362-b2a5-1f4cfdb72202\") " pod="openshift-controller-manager/controller-manager-f776c5567-72c8c" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.071639 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/171e5edf-f171-4efc-a233-1e9d599ce0bb-serving-cert\") pod \"route-controller-manager-7b598f4d67-4rsqq\" (UID: \"171e5edf-f171-4efc-a233-1e9d599ce0bb\") " pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.082482 4991 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tbczx\" (UniqueName: \"kubernetes.io/projected/f216bcb1-c260-4362-b2a5-1f4cfdb72202-kube-api-access-tbczx\") pod \"controller-manager-f776c5567-72c8c\" (UID: \"f216bcb1-c260-4362-b2a5-1f4cfdb72202\") " pod="openshift-controller-manager/controller-manager-f776c5567-72c8c" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.092815 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-626vm\" (UniqueName: \"kubernetes.io/projected/171e5edf-f171-4efc-a233-1e9d599ce0bb-kube-api-access-626vm\") pod \"route-controller-manager-7b598f4d67-4rsqq\" (UID: \"171e5edf-f171-4efc-a233-1e9d599ce0bb\") " pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.207468 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f776c5567-72c8c" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.214240 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.444918 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq"] Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.663715 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq"] Sep 29 09:50:16 crc kubenswrapper[4991]: W0929 09:50:16.664571 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod171e5edf_f171_4efc_a233_1e9d599ce0bb.slice/crio-fc3eb1d874f66f361bfbf6475f6ddb33ffa34b3347bfe64e3351276fb73b2eb3 WatchSource:0}: Error finding container fc3eb1d874f66f361bfbf6475f6ddb33ffa34b3347bfe64e3351276fb73b2eb3: Status 404 returned error can't find the container with id fc3eb1d874f66f361bfbf6475f6ddb33ffa34b3347bfe64e3351276fb73b2eb3 Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.747509 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" event={"ID":"6d85d2ab-609c-4e04-a608-b247d5d61ddc","Type":"ContainerStarted","Data":"9318c4362d108c0af762c110bddeff64a638ed16c619ef744ab66718b2a8a9aa"} Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.748001 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.749744 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq" event={"ID":"171e5edf-f171-4efc-a233-1e9d599ce0bb","Type":"ContainerStarted","Data":"fc3eb1d874f66f361bfbf6475f6ddb33ffa34b3347bfe64e3351276fb73b2eb3"} Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.752262 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" event={"ID":"1c21cca2-5bc1-4174-a45c-f928fa2c2588","Type":"ContainerStarted","Data":"36ed806f5336e5acb799c4f8a51fd1a70603e91a893abcb1d7f51bd29c91f8bb"} Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.753500 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" Sep 29 09:50:16 
Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.753546 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n"
Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.762663 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx"
Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.764890 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n"
Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.772551 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx" podStartSLOduration=2.6865280780000003 podStartE2EDuration="10.772507811s" podCreationTimestamp="2025-09-29 09:50:06 +0000 UTC" firstStartedPulling="2025-09-29 09:50:07.518860751 +0000 UTC m=+743.374788779" lastFinishedPulling="2025-09-29 09:50:15.604840484 +0000 UTC m=+751.460768512" observedRunningTime="2025-09-29 09:50:16.771163686 +0000 UTC m=+752.627091714" watchObservedRunningTime="2025-09-29 09:50:16.772507811 +0000 UTC m=+752.628435839"
Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.774873 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n"
Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.797868 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f776c5567-72c8c"]
Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.825487 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-78756f9d5f-2vl5n" podStartSLOduration=2.7933840720000003 podStartE2EDuration="10.825467881s" podCreationTimestamp="2025-09-29 09:50:06 +0000 UTC" firstStartedPulling="2025-09-29 09:50:07.576915156 +0000 UTC m=+743.432843184" lastFinishedPulling="2025-09-29 09:50:15.608998965 +0000 UTC m=+751.464926993" observedRunningTime="2025-09-29 09:50:16.802471729 +0000 UTC m=+752.658399767" watchObservedRunningTime="2025-09-29 09:50:16.825467881 +0000 UTC m=+752.681395909"
Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.939900 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00e00a41-260b-45b5-a54d-c18ded602aa6" path="/var/lib/kubelet/pods/00e00a41-260b-45b5-a54d-c18ded602aa6/volumes"
Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.943453 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="259d860c-d08f-4753-9e8b-f059fea942f5" path="/var/lib/kubelet/pods/259d860c-d08f-4753-9e8b-f059fea942f5/volumes"
Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.985629 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx"
Sep 29 09:50:16 crc kubenswrapper[4991]: I0929 09:50:16.999893 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-78756f9d5f-b9nwx"
Sep 29 09:50:17 crc kubenswrapper[4991]: I0929 09:50:17.766431 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f776c5567-72c8c" event={"ID":"f216bcb1-c260-4362-b2a5-1f4cfdb72202","Type":"ContainerStarted","Data":"3e69cfeb236b4cd515ee3674f7646dc64cf408beb4132f5c07f50c33af687d59"}
Sep 29 09:50:17 crc kubenswrapper[4991]: I0929 09:50:17.766489 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f776c5567-72c8c" event={"ID":"f216bcb1-c260-4362-b2a5-1f4cfdb72202","Type":"ContainerStarted","Data":"dfd210fd8b9c901c74b02247f39ff9fdad0ad2d2409e6c94fab86e2f44827349"}
Sep 29 09:50:17 crc kubenswrapper[4991]: I0929 09:50:17.766657 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f776c5567-72c8c"
Sep 29 09:50:17 crc kubenswrapper[4991]: I0929 09:50:17.768062 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq" event={"ID":"171e5edf-f171-4efc-a233-1e9d599ce0bb","Type":"ContainerStarted","Data":"38ee8c62163b17f6c7d469454faaf794b9a7463e95918fdf67e7a90ac15c4db2"}
Sep 29 09:50:17 crc kubenswrapper[4991]: I0929 09:50:17.768147 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq" podUID="171e5edf-f171-4efc-a233-1e9d599ce0bb" containerName="route-controller-manager" containerID="cri-o://38ee8c62163b17f6c7d469454faaf794b9a7463e95918fdf67e7a90ac15c4db2" gracePeriod=30
Sep 29 09:50:17 crc kubenswrapper[4991]: I0929 09:50:17.768555 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq"
Sep 29 09:50:17 crc kubenswrapper[4991]: I0929 09:50:17.772852 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f776c5567-72c8c"
Sep 29 09:50:17 crc kubenswrapper[4991]: I0929 09:50:17.773355 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq"
Sep 29 09:50:17 crc kubenswrapper[4991]: I0929 09:50:17.789614 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f776c5567-72c8c" podStartSLOduration=3.789597901 podStartE2EDuration="3.789597901s" podCreationTimestamp="2025-09-29 09:50:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:50:17.788293666 +0000 UTC m=+753.644221754" watchObservedRunningTime="2025-09-29 09:50:17.789597901 +0000 UTC m=+753.645525929"
Sep 29 09:50:17 crc kubenswrapper[4991]: I0929 09:50:17.828548 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq" podStartSLOduration=3.828528357 podStartE2EDuration="3.828528357s" podCreationTimestamp="2025-09-29 09:50:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:50:17.827566221 +0000 UTC m=+753.683494269" watchObservedRunningTime="2025-09-29 09:50:17.828528357 +0000 UTC m=+753.684456385"
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.283553 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq"
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.328555 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj"]
Sep 29 09:50:18 crc kubenswrapper[4991]: E0929 09:50:18.329071 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="171e5edf-f171-4efc-a233-1e9d599ce0bb" containerName="route-controller-manager"
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.329136 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="171e5edf-f171-4efc-a233-1e9d599ce0bb" containerName="route-controller-manager"
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.329332 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="171e5edf-f171-4efc-a233-1e9d599ce0bb" containerName="route-controller-manager"
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.329842 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj"
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.365705 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj"]
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.409014 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/171e5edf-f171-4efc-a233-1e9d599ce0bb-serving-cert\") pod \"171e5edf-f171-4efc-a233-1e9d599ce0bb\" (UID: \"171e5edf-f171-4efc-a233-1e9d599ce0bb\") "
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.409074 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-626vm\" (UniqueName: \"kubernetes.io/projected/171e5edf-f171-4efc-a233-1e9d599ce0bb-kube-api-access-626vm\") pod \"171e5edf-f171-4efc-a233-1e9d599ce0bb\" (UID: \"171e5edf-f171-4efc-a233-1e9d599ce0bb\") "
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.409266 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/171e5edf-f171-4efc-a233-1e9d599ce0bb-client-ca\") pod \"171e5edf-f171-4efc-a233-1e9d599ce0bb\" (UID: \"171e5edf-f171-4efc-a233-1e9d599ce0bb\") "
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.409291 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/171e5edf-f171-4efc-a233-1e9d599ce0bb-config\") pod \"171e5edf-f171-4efc-a233-1e9d599ce0bb\" (UID: \"171e5edf-f171-4efc-a233-1e9d599ce0bb\") "
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.409503 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9s2r\" (UniqueName: \"kubernetes.io/projected/32649ae9-182f-4313-8e64-d93bd1665882-kube-api-access-x9s2r\") pod \"route-controller-manager-767d565dc4-hhndj\" (UID: \"32649ae9-182f-4313-8e64-d93bd1665882\") " pod="openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj"
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.409553 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32649ae9-182f-4313-8e64-d93bd1665882-client-ca\") pod \"route-controller-manager-767d565dc4-hhndj\" (UID: \"32649ae9-182f-4313-8e64-d93bd1665882\") " pod="openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj"
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.409616 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32649ae9-182f-4313-8e64-d93bd1665882-config\") pod \"route-controller-manager-767d565dc4-hhndj\" (UID: \"32649ae9-182f-4313-8e64-d93bd1665882\") " pod="openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj"
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.409816 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32649ae9-182f-4313-8e64-d93bd1665882-serving-cert\") pod \"route-controller-manager-767d565dc4-hhndj\" (UID: \"32649ae9-182f-4313-8e64-d93bd1665882\") " pod="openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj"
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.410259 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/171e5edf-f171-4efc-a233-1e9d599ce0bb-client-ca" (OuterVolumeSpecName: "client-ca") pod "171e5edf-f171-4efc-a233-1e9d599ce0bb" (UID: "171e5edf-f171-4efc-a233-1e9d599ce0bb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.410338 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/171e5edf-f171-4efc-a233-1e9d599ce0bb-config" (OuterVolumeSpecName: "config") pod "171e5edf-f171-4efc-a233-1e9d599ce0bb" (UID: "171e5edf-f171-4efc-a233-1e9d599ce0bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.419262 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/171e5edf-f171-4efc-a233-1e9d599ce0bb-kube-api-access-626vm" (OuterVolumeSpecName: "kube-api-access-626vm") pod "171e5edf-f171-4efc-a233-1e9d599ce0bb" (UID: "171e5edf-f171-4efc-a233-1e9d599ce0bb"). InnerVolumeSpecName "kube-api-access-626vm". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.511361 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32649ae9-182f-4313-8e64-d93bd1665882-serving-cert\") pod \"route-controller-manager-767d565dc4-hhndj\" (UID: \"32649ae9-182f-4313-8e64-d93bd1665882\") " pod="openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj" Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.511423 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9s2r\" (UniqueName: \"kubernetes.io/projected/32649ae9-182f-4313-8e64-d93bd1665882-kube-api-access-x9s2r\") pod \"route-controller-manager-767d565dc4-hhndj\" (UID: \"32649ae9-182f-4313-8e64-d93bd1665882\") " pod="openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj" Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.511457 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32649ae9-182f-4313-8e64-d93bd1665882-client-ca\") pod \"route-controller-manager-767d565dc4-hhndj\" (UID: \"32649ae9-182f-4313-8e64-d93bd1665882\") " pod="openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj" Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.511509 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32649ae9-182f-4313-8e64-d93bd1665882-config\") pod \"route-controller-manager-767d565dc4-hhndj\" (UID: \"32649ae9-182f-4313-8e64-d93bd1665882\") " pod="openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj" Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.511573 4991 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/171e5edf-f171-4efc-a233-1e9d599ce0bb-client-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.511584 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/171e5edf-f171-4efc-a233-1e9d599ce0bb-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.511592 4991 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/171e5edf-f171-4efc-a233-1e9d599ce0bb-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.511601 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-626vm\" (UniqueName: \"kubernetes.io/projected/171e5edf-f171-4efc-a233-1e9d599ce0bb-kube-api-access-626vm\") on node \"crc\" DevicePath \"\"" Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.512652 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32649ae9-182f-4313-8e64-d93bd1665882-config\") pod \"route-controller-manager-767d565dc4-hhndj\" (UID: \"32649ae9-182f-4313-8e64-d93bd1665882\") " pod="openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj" Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.513054 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32649ae9-182f-4313-8e64-d93bd1665882-client-ca\") pod \"route-controller-manager-767d565dc4-hhndj\" 
(UID: \"32649ae9-182f-4313-8e64-d93bd1665882\") " pod="openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj" Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.517077 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32649ae9-182f-4313-8e64-d93bd1665882-serving-cert\") pod \"route-controller-manager-767d565dc4-hhndj\" (UID: \"32649ae9-182f-4313-8e64-d93bd1665882\") " pod="openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj" Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.530069 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9s2r\" (UniqueName: \"kubernetes.io/projected/32649ae9-182f-4313-8e64-d93bd1665882-kube-api-access-x9s2r\") pod \"route-controller-manager-767d565dc4-hhndj\" (UID: \"32649ae9-182f-4313-8e64-d93bd1665882\") " pod="openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj" Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.645535 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj" Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.776770 4991 generic.go:334] "Generic (PLEG): container finished" podID="171e5edf-f171-4efc-a233-1e9d599ce0bb" containerID="38ee8c62163b17f6c7d469454faaf794b9a7463e95918fdf67e7a90ac15c4db2" exitCode=0 Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.776821 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq" event={"ID":"171e5edf-f171-4efc-a233-1e9d599ce0bb","Type":"ContainerDied","Data":"38ee8c62163b17f6c7d469454faaf794b9a7463e95918fdf67e7a90ac15c4db2"} Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.776835 4991 util.go:48] "No ready sandbox for pod can be found. 
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.776835 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq"
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.777302 4991 scope.go:117] "RemoveContainer" containerID="38ee8c62163b17f6c7d469454faaf794b9a7463e95918fdf67e7a90ac15c4db2"
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.777287 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq" event={"ID":"171e5edf-f171-4efc-a233-1e9d599ce0bb","Type":"ContainerDied","Data":"fc3eb1d874f66f361bfbf6475f6ddb33ffa34b3347bfe64e3351276fb73b2eb3"}
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.814279 4991 scope.go:117] "RemoveContainer" containerID="38ee8c62163b17f6c7d469454faaf794b9a7463e95918fdf67e7a90ac15c4db2"
Sep 29 09:50:18 crc kubenswrapper[4991]: E0929 09:50:18.814900 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38ee8c62163b17f6c7d469454faaf794b9a7463e95918fdf67e7a90ac15c4db2\": container with ID starting with 38ee8c62163b17f6c7d469454faaf794b9a7463e95918fdf67e7a90ac15c4db2 not found: ID does not exist" containerID="38ee8c62163b17f6c7d469454faaf794b9a7463e95918fdf67e7a90ac15c4db2"
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.814961 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38ee8c62163b17f6c7d469454faaf794b9a7463e95918fdf67e7a90ac15c4db2"} err="failed to get container status \"38ee8c62163b17f6c7d469454faaf794b9a7463e95918fdf67e7a90ac15c4db2\": rpc error: code = NotFound desc = could not find container \"38ee8c62163b17f6c7d469454faaf794b9a7463e95918fdf67e7a90ac15c4db2\": container with ID starting with 38ee8c62163b17f6c7d469454faaf794b9a7463e95918fdf67e7a90ac15c4db2 not found: ID does not exist"
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.824683 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq"]
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.830800 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b598f4d67-4rsqq"]
Sep 29 09:50:18 crc kubenswrapper[4991]: I0929 09:50:18.935474 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="171e5edf-f171-4efc-a233-1e9d599ce0bb" path="/var/lib/kubelet/pods/171e5edf-f171-4efc-a233-1e9d599ce0bb/volumes"
Sep 29 09:50:19 crc kubenswrapper[4991]: I0929 09:50:19.102347 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj"]
Sep 29 09:50:19 crc kubenswrapper[4991]: W0929 09:50:19.107103 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32649ae9_182f_4313_8e64_d93bd1665882.slice/crio-f87fe0dbdb8b004e9ae85bb51824cc01bbb8aa0fdfa7c9aa98cd1fbbe341299f WatchSource:0}: Error finding container f87fe0dbdb8b004e9ae85bb51824cc01bbb8aa0fdfa7c9aa98cd1fbbe341299f: Status 404 returned error can't find the container with id f87fe0dbdb8b004e9ae85bb51824cc01bbb8aa0fdfa7c9aa98cd1fbbe341299f
Sep 29 09:50:19 crc kubenswrapper[4991]: I0929 09:50:19.786881 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj" event={"ID":"32649ae9-182f-4313-8e64-d93bd1665882","Type":"ContainerStarted","Data":"1d20d8de5c03ac509417a203957adff6b3605fcedbb6b22a10702466d5d67e16"}
Sep 29 09:50:19 crc kubenswrapper[4991]: I0929 09:50:19.786941 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj" event={"ID":"32649ae9-182f-4313-8e64-d93bd1665882","Type":"ContainerStarted","Data":"f87fe0dbdb8b004e9ae85bb51824cc01bbb8aa0fdfa7c9aa98cd1fbbe341299f"}
Sep 29 09:50:19 crc kubenswrapper[4991]: I0929 09:50:19.788311 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj"
Sep 29 09:50:19 crc kubenswrapper[4991]: I0929 09:50:19.803569 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj" podStartSLOduration=3.80353806 podStartE2EDuration="3.80353806s" podCreationTimestamp="2025-09-29 09:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:50:19.802504213 +0000 UTC m=+755.658432241" watchObservedRunningTime="2025-09-29 09:50:19.80353806 +0000 UTC m=+755.659466088"
Sep 29 09:50:19 crc kubenswrapper[4991]: I0929 09:50:19.903898 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-767d565dc4-hhndj"
Sep 29 09:50:20 crc kubenswrapper[4991]: I0929 09:50:20.441941 4991 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Sep 29 09:50:22 crc kubenswrapper[4991]: I0929 09:50:22.223726 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gph47"]
Need to start a new one" pod="openshift-marketplace/redhat-operators-gph47" Sep 29 09:50:22 crc kubenswrapper[4991]: I0929 09:50:22.240374 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gph47"] Sep 29 09:50:22 crc kubenswrapper[4991]: I0929 09:50:22.375493 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40d10465-c994-4575-9f37-21cc1040e17b-utilities\") pod \"redhat-operators-gph47\" (UID: \"40d10465-c994-4575-9f37-21cc1040e17b\") " pod="openshift-marketplace/redhat-operators-gph47" Sep 29 09:50:22 crc kubenswrapper[4991]: I0929 09:50:22.375569 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td4cz\" (UniqueName: \"kubernetes.io/projected/40d10465-c994-4575-9f37-21cc1040e17b-kube-api-access-td4cz\") pod \"redhat-operators-gph47\" (UID: \"40d10465-c994-4575-9f37-21cc1040e17b\") " pod="openshift-marketplace/redhat-operators-gph47" Sep 29 09:50:22 crc kubenswrapper[4991]: I0929 09:50:22.375637 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40d10465-c994-4575-9f37-21cc1040e17b-catalog-content\") pod \"redhat-operators-gph47\" (UID: \"40d10465-c994-4575-9f37-21cc1040e17b\") " pod="openshift-marketplace/redhat-operators-gph47" Sep 29 09:50:22 crc kubenswrapper[4991]: I0929 09:50:22.476629 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40d10465-c994-4575-9f37-21cc1040e17b-catalog-content\") pod \"redhat-operators-gph47\" (UID: \"40d10465-c994-4575-9f37-21cc1040e17b\") " pod="openshift-marketplace/redhat-operators-gph47" Sep 29 09:50:22 crc kubenswrapper[4991]: I0929 09:50:22.476769 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40d10465-c994-4575-9f37-21cc1040e17b-utilities\") pod \"redhat-operators-gph47\" (UID: \"40d10465-c994-4575-9f37-21cc1040e17b\") " pod="openshift-marketplace/redhat-operators-gph47" Sep 29 09:50:22 crc kubenswrapper[4991]: I0929 09:50:22.476841 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td4cz\" (UniqueName: \"kubernetes.io/projected/40d10465-c994-4575-9f37-21cc1040e17b-kube-api-access-td4cz\") pod \"redhat-operators-gph47\" (UID: \"40d10465-c994-4575-9f37-21cc1040e17b\") " pod="openshift-marketplace/redhat-operators-gph47" Sep 29 09:50:22 crc kubenswrapper[4991]: I0929 09:50:22.477240 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40d10465-c994-4575-9f37-21cc1040e17b-catalog-content\") pod \"redhat-operators-gph47\" (UID: \"40d10465-c994-4575-9f37-21cc1040e17b\") " pod="openshift-marketplace/redhat-operators-gph47" Sep 29 09:50:22 crc kubenswrapper[4991]: I0929 09:50:22.477282 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40d10465-c994-4575-9f37-21cc1040e17b-utilities\") pod \"redhat-operators-gph47\" (UID: \"40d10465-c994-4575-9f37-21cc1040e17b\") " pod="openshift-marketplace/redhat-operators-gph47" Sep 29 09:50:22 crc kubenswrapper[4991]: I0929 09:50:22.499772 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-td4cz\" (UniqueName: \"kubernetes.io/projected/40d10465-c994-4575-9f37-21cc1040e17b-kube-api-access-td4cz\") pod \"redhat-operators-gph47\" (UID: \"40d10465-c994-4575-9f37-21cc1040e17b\") " pod="openshift-marketplace/redhat-operators-gph47" Sep 29 09:50:22 crc kubenswrapper[4991]: I0929 09:50:22.544467 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gph47" Sep 29 09:50:22 crc kubenswrapper[4991]: I0929 09:50:22.979342 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gph47"] Sep 29 09:50:23 crc kubenswrapper[4991]: I0929 09:50:23.818243 4991 generic.go:334] "Generic (PLEG): container finished" podID="40d10465-c994-4575-9f37-21cc1040e17b" containerID="a6b89782288ddfdf5a02279e05f074fdaaf7e497a3b008daae45f35583d5dac1" exitCode=0 Sep 29 09:50:23 crc kubenswrapper[4991]: I0929 09:50:23.818563 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gph47" event={"ID":"40d10465-c994-4575-9f37-21cc1040e17b","Type":"ContainerDied","Data":"a6b89782288ddfdf5a02279e05f074fdaaf7e497a3b008daae45f35583d5dac1"} Sep 29 09:50:23 crc kubenswrapper[4991]: I0929 09:50:23.818599 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gph47" event={"ID":"40d10465-c994-4575-9f37-21cc1040e17b","Type":"ContainerStarted","Data":"ec28b80fac11fc7164d030137d9f31bff766904bd90b9fb171a81c34ab3ba95a"} Sep 29 09:50:26 crc kubenswrapper[4991]: I0929 09:50:26.839665 4991 generic.go:334] "Generic (PLEG): container finished" podID="40d10465-c994-4575-9f37-21cc1040e17b" containerID="bab3ce63d58441866b95f9b2967daa03ce83c14acd5350ce4a8c139a74d8ba08" exitCode=0 Sep 29 09:50:26 crc kubenswrapper[4991]: I0929 09:50:26.839735 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gph47" event={"ID":"40d10465-c994-4575-9f37-21cc1040e17b","Type":"ContainerDied","Data":"bab3ce63d58441866b95f9b2967daa03ce83c14acd5350ce4a8c139a74d8ba08"} Sep 29 09:50:27 crc kubenswrapper[4991]: I0929 09:50:27.743045 4991 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Sep 29 09:50:27 crc kubenswrapper[4991]: I0929 09:50:27.743136 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Sep 29 09:50:27 crc kubenswrapper[4991]: I0929 09:50:27.848710 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gph47" event={"ID":"40d10465-c994-4575-9f37-21cc1040e17b","Type":"ContainerStarted","Data":"1fc54447b94da874b8ea5d23a0fd1f52923e856b145f2c62b199849b65b0f4e1"} Sep 29 09:50:27 crc kubenswrapper[4991]: I0929 09:50:27.868128 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gph47" podStartSLOduration=3.421884979 podStartE2EDuration="5.868113674s" podCreationTimestamp="2025-09-29 09:50:22 +0000 UTC" firstStartedPulling="2025-09-29 09:50:24.827225273 +0000 UTC m=+760.683153301" lastFinishedPulling="2025-09-29 09:50:27.273453968 +0000 UTC m=+763.129381996" observedRunningTime="2025-09-29 
Sep 29 09:50:27 crc kubenswrapper[4991]: I0929 09:50:27.868128 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gph47" podStartSLOduration=3.421884979 podStartE2EDuration="5.868113674s" podCreationTimestamp="2025-09-29 09:50:22 +0000 UTC" firstStartedPulling="2025-09-29 09:50:24.827225273 +0000 UTC m=+760.683153301" lastFinishedPulling="2025-09-29 09:50:27.273453968 +0000 UTC m=+763.129381996" observedRunningTime="2025-09-29 09:50:27.862755142 +0000 UTC m=+763.718683170" watchObservedRunningTime="2025-09-29 09:50:27.868113674 +0000 UTC m=+763.724041692"
Sep 29 09:50:27 crc kubenswrapper[4991]: I0929 09:50:27.981768 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0"
Sep 29 09:50:27 crc kubenswrapper[4991]: I0929 09:50:27.990161 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0"
Sep 29 09:50:32 crc kubenswrapper[4991]: I0929 09:50:32.545352 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gph47"
Sep 29 09:50:32 crc kubenswrapper[4991]: I0929 09:50:32.545705 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gph47"
Sep 29 09:50:32 crc kubenswrapper[4991]: I0929 09:50:32.586226 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gph47"
Sep 29 09:50:32 crc kubenswrapper[4991]: I0929 09:50:32.935230 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gph47"
Sep 29 09:50:32 crc kubenswrapper[4991]: I0929 09:50:32.993974 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gph47"]
Sep 29 09:50:34 crc kubenswrapper[4991]: I0929 09:50:34.896933 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gph47" podUID="40d10465-c994-4575-9f37-21cc1040e17b" containerName="registry-server" containerID="cri-o://1fc54447b94da874b8ea5d23a0fd1f52923e856b145f2c62b199849b65b0f4e1" gracePeriod=2
Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.226857 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vtf9h"]
Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.229022 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vtf9h"
Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.254644 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vtf9h"]
Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.376159 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff929c54-073d-44f6-91af-abe6fefb1989-catalog-content\") pod \"redhat-marketplace-vtf9h\" (UID: \"ff929c54-073d-44f6-91af-abe6fefb1989\") " pod="openshift-marketplace/redhat-marketplace-vtf9h"
Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.376426 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt7j9\" (UniqueName: \"kubernetes.io/projected/ff929c54-073d-44f6-91af-abe6fefb1989-kube-api-access-xt7j9\") pod \"redhat-marketplace-vtf9h\" (UID: \"ff929c54-073d-44f6-91af-abe6fefb1989\") " pod="openshift-marketplace/redhat-marketplace-vtf9h"
Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.376617 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff929c54-073d-44f6-91af-abe6fefb1989-utilities\") pod \"redhat-marketplace-vtf9h\" (UID: \"ff929c54-073d-44f6-91af-abe6fefb1989\") " pod="openshift-marketplace/redhat-marketplace-vtf9h"
Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.437231 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gph47"
Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.478507 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff929c54-073d-44f6-91af-abe6fefb1989-catalog-content\") pod \"redhat-marketplace-vtf9h\" (UID: \"ff929c54-073d-44f6-91af-abe6fefb1989\") " pod="openshift-marketplace/redhat-marketplace-vtf9h"
Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.478781 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt7j9\" (UniqueName: \"kubernetes.io/projected/ff929c54-073d-44f6-91af-abe6fefb1989-kube-api-access-xt7j9\") pod \"redhat-marketplace-vtf9h\" (UID: \"ff929c54-073d-44f6-91af-abe6fefb1989\") " pod="openshift-marketplace/redhat-marketplace-vtf9h"
Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.478921 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff929c54-073d-44f6-91af-abe6fefb1989-utilities\") pod \"redhat-marketplace-vtf9h\" (UID: \"ff929c54-073d-44f6-91af-abe6fefb1989\") " pod="openshift-marketplace/redhat-marketplace-vtf9h"
Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.479231 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff929c54-073d-44f6-91af-abe6fefb1989-catalog-content\") pod \"redhat-marketplace-vtf9h\" (UID: \"ff929c54-073d-44f6-91af-abe6fefb1989\") " pod="openshift-marketplace/redhat-marketplace-vtf9h"
\"ff929c54-073d-44f6-91af-abe6fefb1989\") " pod="openshift-marketplace/redhat-marketplace-vtf9h" Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.500280 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt7j9\" (UniqueName: \"kubernetes.io/projected/ff929c54-073d-44f6-91af-abe6fefb1989-kube-api-access-xt7j9\") pod \"redhat-marketplace-vtf9h\" (UID: \"ff929c54-073d-44f6-91af-abe6fefb1989\") " pod="openshift-marketplace/redhat-marketplace-vtf9h" Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.559016 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vtf9h" Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.579793 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td4cz\" (UniqueName: \"kubernetes.io/projected/40d10465-c994-4575-9f37-21cc1040e17b-kube-api-access-td4cz\") pod \"40d10465-c994-4575-9f37-21cc1040e17b\" (UID: \"40d10465-c994-4575-9f37-21cc1040e17b\") " Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.580079 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40d10465-c994-4575-9f37-21cc1040e17b-utilities\") pod \"40d10465-c994-4575-9f37-21cc1040e17b\" (UID: \"40d10465-c994-4575-9f37-21cc1040e17b\") " Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.580195 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40d10465-c994-4575-9f37-21cc1040e17b-catalog-content\") pod \"40d10465-c994-4575-9f37-21cc1040e17b\" (UID: \"40d10465-c994-4575-9f37-21cc1040e17b\") " Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.581978 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40d10465-c994-4575-9f37-21cc1040e17b-utilities" (OuterVolumeSpecName: "utilities") pod "40d10465-c994-4575-9f37-21cc1040e17b" (UID: "40d10465-c994-4575-9f37-21cc1040e17b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.589827 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40d10465-c994-4575-9f37-21cc1040e17b-kube-api-access-td4cz" (OuterVolumeSpecName: "kube-api-access-td4cz") pod "40d10465-c994-4575-9f37-21cc1040e17b" (UID: "40d10465-c994-4575-9f37-21cc1040e17b"). InnerVolumeSpecName "kube-api-access-td4cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.669262 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40d10465-c994-4575-9f37-21cc1040e17b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40d10465-c994-4575-9f37-21cc1040e17b" (UID: "40d10465-c994-4575-9f37-21cc1040e17b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.682315 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td4cz\" (UniqueName: \"kubernetes.io/projected/40d10465-c994-4575-9f37-21cc1040e17b-kube-api-access-td4cz\") on node \"crc\" DevicePath \"\"" Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.682346 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40d10465-c994-4575-9f37-21cc1040e17b-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.682358 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40d10465-c994-4575-9f37-21cc1040e17b-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.904881 4991 generic.go:334] "Generic (PLEG): container finished" podID="40d10465-c994-4575-9f37-21cc1040e17b" containerID="1fc54447b94da874b8ea5d23a0fd1f52923e856b145f2c62b199849b65b0f4e1" exitCode=0 Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.904963 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gph47" event={"ID":"40d10465-c994-4575-9f37-21cc1040e17b","Type":"ContainerDied","Data":"1fc54447b94da874b8ea5d23a0fd1f52923e856b145f2c62b199849b65b0f4e1"} Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.905044 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gph47" event={"ID":"40d10465-c994-4575-9f37-21cc1040e17b","Type":"ContainerDied","Data":"ec28b80fac11fc7164d030137d9f31bff766904bd90b9fb171a81c34ab3ba95a"} Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.905068 4991 scope.go:117] "RemoveContainer" containerID="1fc54447b94da874b8ea5d23a0fd1f52923e856b145f2c62b199849b65b0f4e1" Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.904991 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gph47" Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.924454 4991 scope.go:117] "RemoveContainer" containerID="bab3ce63d58441866b95f9b2967daa03ce83c14acd5350ce4a8c139a74d8ba08" Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.940523 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gph47"] Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.945835 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gph47"] Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.963397 4991 scope.go:117] "RemoveContainer" containerID="a6b89782288ddfdf5a02279e05f074fdaaf7e497a3b008daae45f35583d5dac1" Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.986051 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vtf9h"] Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.986446 4991 scope.go:117] "RemoveContainer" containerID="1fc54447b94da874b8ea5d23a0fd1f52923e856b145f2c62b199849b65b0f4e1" Sep 29 09:50:35 crc kubenswrapper[4991]: E0929 09:50:35.987511 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fc54447b94da874b8ea5d23a0fd1f52923e856b145f2c62b199849b65b0f4e1\": container with ID starting with 1fc54447b94da874b8ea5d23a0fd1f52923e856b145f2c62b199849b65b0f4e1 not found: ID does not exist" containerID="1fc54447b94da874b8ea5d23a0fd1f52923e856b145f2c62b199849b65b0f4e1" Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.987558 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fc54447b94da874b8ea5d23a0fd1f52923e856b145f2c62b199849b65b0f4e1"} err="failed to get container status \"1fc54447b94da874b8ea5d23a0fd1f52923e856b145f2c62b199849b65b0f4e1\": rpc error: code = NotFound desc = could not find container \"1fc54447b94da874b8ea5d23a0fd1f52923e856b145f2c62b199849b65b0f4e1\": container with ID starting with 1fc54447b94da874b8ea5d23a0fd1f52923e856b145f2c62b199849b65b0f4e1 not found: ID does not exist" Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.987589 4991 scope.go:117] "RemoveContainer" containerID="bab3ce63d58441866b95f9b2967daa03ce83c14acd5350ce4a8c139a74d8ba08" Sep 29 09:50:35 crc kubenswrapper[4991]: E0929 09:50:35.987899 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bab3ce63d58441866b95f9b2967daa03ce83c14acd5350ce4a8c139a74d8ba08\": container with ID starting with bab3ce63d58441866b95f9b2967daa03ce83c14acd5350ce4a8c139a74d8ba08 not found: ID does not exist" containerID="bab3ce63d58441866b95f9b2967daa03ce83c14acd5350ce4a8c139a74d8ba08" Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.987939 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bab3ce63d58441866b95f9b2967daa03ce83c14acd5350ce4a8c139a74d8ba08"} err="failed to get container status \"bab3ce63d58441866b95f9b2967daa03ce83c14acd5350ce4a8c139a74d8ba08\": rpc error: code = NotFound desc = could not find container \"bab3ce63d58441866b95f9b2967daa03ce83c14acd5350ce4a8c139a74d8ba08\": container with ID starting with bab3ce63d58441866b95f9b2967daa03ce83c14acd5350ce4a8c139a74d8ba08 not found: ID does not exist" Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.988049 4991 scope.go:117] "RemoveContainer" 
containerID="a6b89782288ddfdf5a02279e05f074fdaaf7e497a3b008daae45f35583d5dac1" Sep 29 09:50:35 crc kubenswrapper[4991]: E0929 09:50:35.988347 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6b89782288ddfdf5a02279e05f074fdaaf7e497a3b008daae45f35583d5dac1\": container with ID starting with a6b89782288ddfdf5a02279e05f074fdaaf7e497a3b008daae45f35583d5dac1 not found: ID does not exist" containerID="a6b89782288ddfdf5a02279e05f074fdaaf7e497a3b008daae45f35583d5dac1" Sep 29 09:50:35 crc kubenswrapper[4991]: I0929 09:50:35.988380 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b89782288ddfdf5a02279e05f074fdaaf7e497a3b008daae45f35583d5dac1"} err="failed to get container status \"a6b89782288ddfdf5a02279e05f074fdaaf7e497a3b008daae45f35583d5dac1\": rpc error: code = NotFound desc = could not find container \"a6b89782288ddfdf5a02279e05f074fdaaf7e497a3b008daae45f35583d5dac1\": container with ID starting with a6b89782288ddfdf5a02279e05f074fdaaf7e497a3b008daae45f35583d5dac1 not found: ID does not exist" Sep 29 09:50:36 crc kubenswrapper[4991]: I0929 09:50:36.569849 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-67c9b4c785-f87nt" Sep 29 09:50:36 crc kubenswrapper[4991]: I0929 09:50:36.763296 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-7454676c57-qpzss" Sep 29 09:50:36 crc kubenswrapper[4991]: I0929 09:50:36.862789 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6b467cdd84-2pdst" Sep 29 09:50:36 crc kubenswrapper[4991]: I0929 09:50:36.915574 4991 generic.go:334] "Generic (PLEG): container finished" podID="ff929c54-073d-44f6-91af-abe6fefb1989" containerID="e01564ab501aad375fcd447be784bdd027350f3408866875f2e47d0e176d70db" exitCode=0 Sep 29 09:50:36 crc kubenswrapper[4991]: I0929 09:50:36.915634 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtf9h" event={"ID":"ff929c54-073d-44f6-91af-abe6fefb1989","Type":"ContainerDied","Data":"e01564ab501aad375fcd447be784bdd027350f3408866875f2e47d0e176d70db"} Sep 29 09:50:36 crc kubenswrapper[4991]: I0929 09:50:36.915677 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtf9h" event={"ID":"ff929c54-073d-44f6-91af-abe6fefb1989","Type":"ContainerStarted","Data":"88310fb0ebb76faac608bd15fac5777bb000a32389a70ce01544f14fc09573b0"} Sep 29 09:50:36 crc kubenswrapper[4991]: I0929 09:50:36.938387 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40d10465-c994-4575-9f37-21cc1040e17b" path="/var/lib/kubelet/pods/40d10465-c994-4575-9f37-21cc1040e17b/volumes" Sep 29 09:50:37 crc kubenswrapper[4991]: I0929 09:50:37.745004 4991 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Sep 29 09:50:37 crc kubenswrapper[4991]: I0929 09:50:37.745589 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Sep 29 09:50:37 crc 
Sep 29 09:50:37 crc kubenswrapper[4991]: I0929 09:50:37.925653 4991 generic.go:334] "Generic (PLEG): container finished" podID="ff929c54-073d-44f6-91af-abe6fefb1989" containerID="4afdc0a701b94ab101687bcbabebe1d7daa1d32c46f622635c904431b2637ede" exitCode=0
Sep 29 09:50:37 crc kubenswrapper[4991]: I0929 09:50:37.925708 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtf9h" event={"ID":"ff929c54-073d-44f6-91af-abe6fefb1989","Type":"ContainerDied","Data":"4afdc0a701b94ab101687bcbabebe1d7daa1d32c46f622635c904431b2637ede"}
Sep 29 09:50:37 crc kubenswrapper[4991]: I0929 09:50:37.947437 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 29 09:50:37 crc kubenswrapper[4991]: I0929 09:50:37.947673 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 29 09:50:38 crc kubenswrapper[4991]: I0929 09:50:38.936132 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtf9h" event={"ID":"ff929c54-073d-44f6-91af-abe6fefb1989","Type":"ContainerStarted","Data":"efc7875233e640fd2b2c0c7e907523869f52d9a356d2e9f3253df0b056e2b523"}
Sep 29 09:50:38 crc kubenswrapper[4991]: I0929 09:50:38.961685 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vtf9h" podStartSLOduration=2.521643989 podStartE2EDuration="3.961667885s" podCreationTimestamp="2025-09-29 09:50:35 +0000 UTC" firstStartedPulling="2025-09-29 09:50:36.918301741 +0000 UTC m=+772.774229769" lastFinishedPulling="2025-09-29 09:50:38.358325637 +0000 UTC m=+774.214253665" observedRunningTime="2025-09-29 09:50:38.959929888 +0000 UTC m=+774.815857926" watchObservedRunningTime="2025-09-29 09:50:38.961667885 +0000 UTC m=+774.817595913"
Sep 29 09:50:45 crc kubenswrapper[4991]: I0929 09:50:45.559288 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vtf9h"
Sep 29 09:50:45 crc kubenswrapper[4991]: I0929 09:50:45.559822 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vtf9h"
Sep 29 09:50:45 crc kubenswrapper[4991]: I0929 09:50:45.612323 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vtf9h"
Sep 29 09:50:46 crc kubenswrapper[4991]: I0929 09:50:46.043598 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vtf9h"
Sep 29 09:50:46 crc kubenswrapper[4991]: I0929 09:50:46.099551 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vtf9h"]
Sep 29 09:50:47 crc kubenswrapper[4991]: I0929 09:50:47.740914 4991 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Sep 29 09:50:47 crc kubenswrapper[4991]: I0929 09:50:47.741231 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Sep 29 09:50:48 crc kubenswrapper[4991]: I0929 09:50:48.001445 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vtf9h" podUID="ff929c54-073d-44f6-91af-abe6fefb1989" containerName="registry-server" containerID="cri-o://efc7875233e640fd2b2c0c7e907523869f52d9a356d2e9f3253df0b056e2b523" gracePeriod=2
Sep 29 09:50:48 crc kubenswrapper[4991]: I0929 09:50:48.504582 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vtf9h"
Sep 29 09:50:48 crc kubenswrapper[4991]: I0929 09:50:48.587552 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff929c54-073d-44f6-91af-abe6fefb1989-catalog-content\") pod \"ff929c54-073d-44f6-91af-abe6fefb1989\" (UID: \"ff929c54-073d-44f6-91af-abe6fefb1989\") "
Sep 29 09:50:48 crc kubenswrapper[4991]: I0929 09:50:48.587603 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff929c54-073d-44f6-91af-abe6fefb1989-utilities\") pod \"ff929c54-073d-44f6-91af-abe6fefb1989\" (UID: \"ff929c54-073d-44f6-91af-abe6fefb1989\") "
Sep 29 09:50:48 crc kubenswrapper[4991]: I0929 09:50:48.587688 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt7j9\" (UniqueName: \"kubernetes.io/projected/ff929c54-073d-44f6-91af-abe6fefb1989-kube-api-access-xt7j9\") pod \"ff929c54-073d-44f6-91af-abe6fefb1989\" (UID: \"ff929c54-073d-44f6-91af-abe6fefb1989\") "
Sep 29 09:50:48 crc kubenswrapper[4991]: I0929 09:50:48.588654 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff929c54-073d-44f6-91af-abe6fefb1989-utilities" (OuterVolumeSpecName: "utilities") pod "ff929c54-073d-44f6-91af-abe6fefb1989" (UID: "ff929c54-073d-44f6-91af-abe6fefb1989"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 09:50:48 crc kubenswrapper[4991]: I0929 09:50:48.593109 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff929c54-073d-44f6-91af-abe6fefb1989-kube-api-access-xt7j9" (OuterVolumeSpecName: "kube-api-access-xt7j9") pod "ff929c54-073d-44f6-91af-abe6fefb1989" (UID: "ff929c54-073d-44f6-91af-abe6fefb1989"). InnerVolumeSpecName "kube-api-access-xt7j9". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:50:48 crc kubenswrapper[4991]: I0929 09:50:48.689237 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff929c54-073d-44f6-91af-abe6fefb1989-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 09:50:48 crc kubenswrapper[4991]: I0929 09:50:48.689289 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff929c54-073d-44f6-91af-abe6fefb1989-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 09:50:48 crc kubenswrapper[4991]: I0929 09:50:48.689304 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt7j9\" (UniqueName: \"kubernetes.io/projected/ff929c54-073d-44f6-91af-abe6fefb1989-kube-api-access-xt7j9\") on node \"crc\" DevicePath \"\"" Sep 29 09:50:49 crc kubenswrapper[4991]: I0929 09:50:49.022892 4991 generic.go:334] "Generic (PLEG): container finished" podID="ff929c54-073d-44f6-91af-abe6fefb1989" containerID="efc7875233e640fd2b2c0c7e907523869f52d9a356d2e9f3253df0b056e2b523" exitCode=0 Sep 29 09:50:49 crc kubenswrapper[4991]: I0929 09:50:49.022947 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtf9h" event={"ID":"ff929c54-073d-44f6-91af-abe6fefb1989","Type":"ContainerDied","Data":"efc7875233e640fd2b2c0c7e907523869f52d9a356d2e9f3253df0b056e2b523"} Sep 29 09:50:49 crc kubenswrapper[4991]: I0929 09:50:49.022997 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtf9h" event={"ID":"ff929c54-073d-44f6-91af-abe6fefb1989","Type":"ContainerDied","Data":"88310fb0ebb76faac608bd15fac5777bb000a32389a70ce01544f14fc09573b0"} Sep 29 09:50:49 crc kubenswrapper[4991]: I0929 09:50:49.023023 4991 scope.go:117] "RemoveContainer" containerID="efc7875233e640fd2b2c0c7e907523869f52d9a356d2e9f3253df0b056e2b523" Sep 29 09:50:49 crc kubenswrapper[4991]: I0929 09:50:49.023125 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vtf9h" Sep 29 09:50:49 crc kubenswrapper[4991]: I0929 09:50:49.060267 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vtf9h"] Sep 29 09:50:49 crc kubenswrapper[4991]: I0929 09:50:49.065006 4991 scope.go:117] "RemoveContainer" containerID="4afdc0a701b94ab101687bcbabebe1d7daa1d32c46f622635c904431b2637ede" Sep 29 09:50:49 crc kubenswrapper[4991]: I0929 09:50:49.072137 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vtf9h"] Sep 29 09:50:49 crc kubenswrapper[4991]: I0929 09:50:49.083255 4991 scope.go:117] "RemoveContainer" containerID="e01564ab501aad375fcd447be784bdd027350f3408866875f2e47d0e176d70db" Sep 29 09:50:49 crc kubenswrapper[4991]: I0929 09:50:49.109021 4991 scope.go:117] "RemoveContainer" containerID="efc7875233e640fd2b2c0c7e907523869f52d9a356d2e9f3253df0b056e2b523" Sep 29 09:50:49 crc kubenswrapper[4991]: E0929 09:50:49.109727 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efc7875233e640fd2b2c0c7e907523869f52d9a356d2e9f3253df0b056e2b523\": container with ID starting with efc7875233e640fd2b2c0c7e907523869f52d9a356d2e9f3253df0b056e2b523 not found: ID does not exist" containerID="efc7875233e640fd2b2c0c7e907523869f52d9a356d2e9f3253df0b056e2b523" Sep 29 09:50:49 crc kubenswrapper[4991]: I0929 09:50:49.109785 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efc7875233e640fd2b2c0c7e907523869f52d9a356d2e9f3253df0b056e2b523"} err="failed to get container status \"efc7875233e640fd2b2c0c7e907523869f52d9a356d2e9f3253df0b056e2b523\": rpc error: code = NotFound desc = could not find container \"efc7875233e640fd2b2c0c7e907523869f52d9a356d2e9f3253df0b056e2b523\": container with ID starting with efc7875233e640fd2b2c0c7e907523869f52d9a356d2e9f3253df0b056e2b523 not found: ID does not exist" Sep 29 09:50:49 crc kubenswrapper[4991]: I0929 09:50:49.109817 4991 scope.go:117] "RemoveContainer" containerID="4afdc0a701b94ab101687bcbabebe1d7daa1d32c46f622635c904431b2637ede" Sep 29 09:50:49 crc kubenswrapper[4991]: E0929 09:50:49.110284 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4afdc0a701b94ab101687bcbabebe1d7daa1d32c46f622635c904431b2637ede\": container with ID starting with 4afdc0a701b94ab101687bcbabebe1d7daa1d32c46f622635c904431b2637ede not found: ID does not exist" containerID="4afdc0a701b94ab101687bcbabebe1d7daa1d32c46f622635c904431b2637ede" Sep 29 09:50:49 crc kubenswrapper[4991]: I0929 09:50:49.110317 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4afdc0a701b94ab101687bcbabebe1d7daa1d32c46f622635c904431b2637ede"} err="failed to get container status \"4afdc0a701b94ab101687bcbabebe1d7daa1d32c46f622635c904431b2637ede\": rpc error: code = NotFound desc = could not find container \"4afdc0a701b94ab101687bcbabebe1d7daa1d32c46f622635c904431b2637ede\": container with ID starting with 4afdc0a701b94ab101687bcbabebe1d7daa1d32c46f622635c904431b2637ede not found: ID does not exist" Sep 29 09:50:49 crc kubenswrapper[4991]: I0929 09:50:49.110340 4991 scope.go:117] "RemoveContainer" containerID="e01564ab501aad375fcd447be784bdd027350f3408866875f2e47d0e176d70db" Sep 29 09:50:49 crc kubenswrapper[4991]: E0929 09:50:49.110923 4991 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e01564ab501aad375fcd447be784bdd027350f3408866875f2e47d0e176d70db\": container with ID starting with e01564ab501aad375fcd447be784bdd027350f3408866875f2e47d0e176d70db not found: ID does not exist" containerID="e01564ab501aad375fcd447be784bdd027350f3408866875f2e47d0e176d70db" Sep 29 09:50:49 crc kubenswrapper[4991]: I0929 09:50:49.110980 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e01564ab501aad375fcd447be784bdd027350f3408866875f2e47d0e176d70db"} err="failed to get container status \"e01564ab501aad375fcd447be784bdd027350f3408866875f2e47d0e176d70db\": rpc error: code = NotFound desc = could not find container \"e01564ab501aad375fcd447be784bdd027350f3408866875f2e47d0e176d70db\": container with ID starting with e01564ab501aad375fcd447be784bdd027350f3408866875f2e47d0e176d70db not found: ID does not exist" Sep 29 09:50:50 crc kubenswrapper[4991]: I0929 09:50:50.939883 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff929c54-073d-44f6-91af-abe6fefb1989" path="/var/lib/kubelet/pods/ff929c54-073d-44f6-91af-abe6fefb1989/volumes" Sep 29 09:50:57 crc kubenswrapper[4991]: I0929 09:50:57.741044 4991 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Sep 29 09:50:57 crc kubenswrapper[4991]: I0929 09:50:57.741661 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Sep 29 09:50:58 crc kubenswrapper[4991]: I0929 09:50:58.197615 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4xx7l"] Sep 29 09:50:58 crc kubenswrapper[4991]: E0929 09:50:58.198298 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff929c54-073d-44f6-91af-abe6fefb1989" containerName="extract-utilities" Sep 29 09:50:58 crc kubenswrapper[4991]: I0929 09:50:58.200482 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff929c54-073d-44f6-91af-abe6fefb1989" containerName="extract-utilities" Sep 29 09:50:58 crc kubenswrapper[4991]: E0929 09:50:58.200701 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40d10465-c994-4575-9f37-21cc1040e17b" containerName="extract-content" Sep 29 09:50:58 crc kubenswrapper[4991]: I0929 09:50:58.200778 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="40d10465-c994-4575-9f37-21cc1040e17b" containerName="extract-content" Sep 29 09:50:58 crc kubenswrapper[4991]: E0929 09:50:58.200878 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40d10465-c994-4575-9f37-21cc1040e17b" containerName="extract-utilities" Sep 29 09:50:58 crc kubenswrapper[4991]: I0929 09:50:58.200984 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="40d10465-c994-4575-9f37-21cc1040e17b" containerName="extract-utilities" Sep 29 09:50:58 crc kubenswrapper[4991]: E0929 09:50:58.201062 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff929c54-073d-44f6-91af-abe6fefb1989" containerName="extract-content" Sep 29 09:50:58 crc kubenswrapper[4991]: I0929 09:50:58.201123 4991 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ff929c54-073d-44f6-91af-abe6fefb1989" containerName="extract-content" Sep 29 09:50:58 crc kubenswrapper[4991]: E0929 09:50:58.201194 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff929c54-073d-44f6-91af-abe6fefb1989" containerName="registry-server" Sep 29 09:50:58 crc kubenswrapper[4991]: I0929 09:50:58.201291 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff929c54-073d-44f6-91af-abe6fefb1989" containerName="registry-server" Sep 29 09:50:58 crc kubenswrapper[4991]: E0929 09:50:58.201389 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40d10465-c994-4575-9f37-21cc1040e17b" containerName="registry-server" Sep 29 09:50:58 crc kubenswrapper[4991]: I0929 09:50:58.201454 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="40d10465-c994-4575-9f37-21cc1040e17b" containerName="registry-server" Sep 29 09:50:58 crc kubenswrapper[4991]: I0929 09:50:58.201721 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff929c54-073d-44f6-91af-abe6fefb1989" containerName="registry-server" Sep 29 09:50:58 crc kubenswrapper[4991]: I0929 09:50:58.201804 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="40d10465-c994-4575-9f37-21cc1040e17b" containerName="registry-server" Sep 29 09:50:58 crc kubenswrapper[4991]: I0929 09:50:58.203083 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4xx7l" Sep 29 09:50:58 crc kubenswrapper[4991]: I0929 09:50:58.207643 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4xx7l"] Sep 29 09:50:58 crc kubenswrapper[4991]: I0929 09:50:58.348460 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dbdca3b-3f19-45dc-9814-a6eb0a08efed-catalog-content\") pod \"community-operators-4xx7l\" (UID: \"9dbdca3b-3f19-45dc-9814-a6eb0a08efed\") " pod="openshift-marketplace/community-operators-4xx7l" Sep 29 09:50:58 crc kubenswrapper[4991]: I0929 09:50:58.348529 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqn8d\" (UniqueName: \"kubernetes.io/projected/9dbdca3b-3f19-45dc-9814-a6eb0a08efed-kube-api-access-sqn8d\") pod \"community-operators-4xx7l\" (UID: \"9dbdca3b-3f19-45dc-9814-a6eb0a08efed\") " pod="openshift-marketplace/community-operators-4xx7l" Sep 29 09:50:58 crc kubenswrapper[4991]: I0929 09:50:58.348596 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dbdca3b-3f19-45dc-9814-a6eb0a08efed-utilities\") pod \"community-operators-4xx7l\" (UID: \"9dbdca3b-3f19-45dc-9814-a6eb0a08efed\") " pod="openshift-marketplace/community-operators-4xx7l" Sep 29 09:50:58 crc kubenswrapper[4991]: I0929 09:50:58.451483 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dbdca3b-3f19-45dc-9814-a6eb0a08efed-catalog-content\") pod \"community-operators-4xx7l\" (UID: \"9dbdca3b-3f19-45dc-9814-a6eb0a08efed\") " pod="openshift-marketplace/community-operators-4xx7l" Sep 29 09:50:58 crc kubenswrapper[4991]: I0929 09:50:58.451549 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqn8d\" (UniqueName: 
\"kubernetes.io/projected/9dbdca3b-3f19-45dc-9814-a6eb0a08efed-kube-api-access-sqn8d\") pod \"community-operators-4xx7l\" (UID: \"9dbdca3b-3f19-45dc-9814-a6eb0a08efed\") " pod="openshift-marketplace/community-operators-4xx7l" Sep 29 09:50:58 crc kubenswrapper[4991]: I0929 09:50:58.451602 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dbdca3b-3f19-45dc-9814-a6eb0a08efed-utilities\") pod \"community-operators-4xx7l\" (UID: \"9dbdca3b-3f19-45dc-9814-a6eb0a08efed\") " pod="openshift-marketplace/community-operators-4xx7l" Sep 29 09:50:58 crc kubenswrapper[4991]: I0929 09:50:58.452139 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dbdca3b-3f19-45dc-9814-a6eb0a08efed-catalog-content\") pod \"community-operators-4xx7l\" (UID: \"9dbdca3b-3f19-45dc-9814-a6eb0a08efed\") " pod="openshift-marketplace/community-operators-4xx7l" Sep 29 09:50:58 crc kubenswrapper[4991]: I0929 09:50:58.452176 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dbdca3b-3f19-45dc-9814-a6eb0a08efed-utilities\") pod \"community-operators-4xx7l\" (UID: \"9dbdca3b-3f19-45dc-9814-a6eb0a08efed\") " pod="openshift-marketplace/community-operators-4xx7l" Sep 29 09:50:58 crc kubenswrapper[4991]: I0929 09:50:58.473124 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqn8d\" (UniqueName: \"kubernetes.io/projected/9dbdca3b-3f19-45dc-9814-a6eb0a08efed-kube-api-access-sqn8d\") pod \"community-operators-4xx7l\" (UID: \"9dbdca3b-3f19-45dc-9814-a6eb0a08efed\") " pod="openshift-marketplace/community-operators-4xx7l" Sep 29 09:50:58 crc kubenswrapper[4991]: I0929 09:50:58.521648 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4xx7l" Sep 29 09:50:59 crc kubenswrapper[4991]: I0929 09:50:59.023836 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4xx7l"] Sep 29 09:50:59 crc kubenswrapper[4991]: I0929 09:50:59.101578 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xx7l" event={"ID":"9dbdca3b-3f19-45dc-9814-a6eb0a08efed","Type":"ContainerStarted","Data":"b529db90d89bbe10180599238e86f7672ad4cddb3f88b7a665ca9c8289788725"} Sep 29 09:51:00 crc kubenswrapper[4991]: I0929 09:51:00.108859 4991 generic.go:334] "Generic (PLEG): container finished" podID="9dbdca3b-3f19-45dc-9814-a6eb0a08efed" containerID="14def43f320907cfb146cdaa152937301cc4b38de2ce4870c6ac1d58fc846743" exitCode=0 Sep 29 09:51:00 crc kubenswrapper[4991]: I0929 09:51:00.108904 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xx7l" event={"ID":"9dbdca3b-3f19-45dc-9814-a6eb0a08efed","Type":"ContainerDied","Data":"14def43f320907cfb146cdaa152937301cc4b38de2ce4870c6ac1d58fc846743"} Sep 29 09:51:02 crc kubenswrapper[4991]: I0929 09:51:02.133753 4991 generic.go:334] "Generic (PLEG): container finished" podID="9dbdca3b-3f19-45dc-9814-a6eb0a08efed" containerID="0b92f02b0566a437cf6fcef45015e2acb87a93d13ba962c43c068dd68e1844d4" exitCode=0 Sep 29 09:51:02 crc kubenswrapper[4991]: I0929 09:51:02.133846 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xx7l" event={"ID":"9dbdca3b-3f19-45dc-9814-a6eb0a08efed","Type":"ContainerDied","Data":"0b92f02b0566a437cf6fcef45015e2acb87a93d13ba962c43c068dd68e1844d4"} Sep 29 09:51:03 crc kubenswrapper[4991]: I0929 09:51:03.155890 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xx7l" event={"ID":"9dbdca3b-3f19-45dc-9814-a6eb0a08efed","Type":"ContainerStarted","Data":"a1b7d17c980b7bb27e5f4951127d60589f1085dd3cb2fffc9544cb6e4ede58cb"} Sep 29 09:51:03 crc kubenswrapper[4991]: I0929 09:51:03.183831 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4xx7l" podStartSLOduration=2.375468344 podStartE2EDuration="5.183807256s" podCreationTimestamp="2025-09-29 09:50:58 +0000 UTC" firstStartedPulling="2025-09-29 09:51:00.11119537 +0000 UTC m=+795.967123438" lastFinishedPulling="2025-09-29 09:51:02.919534302 +0000 UTC m=+798.775462350" observedRunningTime="2025-09-29 09:51:03.176937433 +0000 UTC m=+799.032865511" watchObservedRunningTime="2025-09-29 09:51:03.183807256 +0000 UTC m=+799.039735284" Sep 29 09:51:07 crc kubenswrapper[4991]: I0929 09:51:07.741579 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Sep 29 09:51:07 crc kubenswrapper[4991]: I0929 09:51:07.947404 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 09:51:07 crc kubenswrapper[4991]: I0929 09:51:07.947479 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 09:51:08 crc kubenswrapper[4991]: I0929 09:51:08.522339 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4xx7l" Sep 29 09:51:08 crc kubenswrapper[4991]: I0929 09:51:08.522694 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4xx7l" Sep 29 09:51:08 crc kubenswrapper[4991]: I0929 09:51:08.570461 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4xx7l" Sep 29 09:51:09 crc kubenswrapper[4991]: I0929 09:51:09.247873 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4xx7l" Sep 29 09:51:09 crc kubenswrapper[4991]: I0929 09:51:09.300437 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4xx7l"] Sep 29 09:51:11 crc kubenswrapper[4991]: I0929 09:51:11.244032 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4xx7l" podUID="9dbdca3b-3f19-45dc-9814-a6eb0a08efed" containerName="registry-server" containerID="cri-o://a1b7d17c980b7bb27e5f4951127d60589f1085dd3cb2fffc9544cb6e4ede58cb" gracePeriod=2 Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.207419 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4xx7l" Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.251763 4991 generic.go:334] "Generic (PLEG): container finished" podID="9dbdca3b-3f19-45dc-9814-a6eb0a08efed" containerID="a1b7d17c980b7bb27e5f4951127d60589f1085dd3cb2fffc9544cb6e4ede58cb" exitCode=0 Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.251805 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xx7l" event={"ID":"9dbdca3b-3f19-45dc-9814-a6eb0a08efed","Type":"ContainerDied","Data":"a1b7d17c980b7bb27e5f4951127d60589f1085dd3cb2fffc9544cb6e4ede58cb"} Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.251830 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4xx7l" event={"ID":"9dbdca3b-3f19-45dc-9814-a6eb0a08efed","Type":"ContainerDied","Data":"b529db90d89bbe10180599238e86f7672ad4cddb3f88b7a665ca9c8289788725"} Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.251846 4991 scope.go:117] "RemoveContainer" containerID="a1b7d17c980b7bb27e5f4951127d60589f1085dd3cb2fffc9544cb6e4ede58cb" Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.251963 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4xx7l" Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.271527 4991 scope.go:117] "RemoveContainer" containerID="0b92f02b0566a437cf6fcef45015e2acb87a93d13ba962c43c068dd68e1844d4" Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.289871 4991 scope.go:117] "RemoveContainer" containerID="14def43f320907cfb146cdaa152937301cc4b38de2ce4870c6ac1d58fc846743" Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.309561 4991 scope.go:117] "RemoveContainer" containerID="a1b7d17c980b7bb27e5f4951127d60589f1085dd3cb2fffc9544cb6e4ede58cb" Sep 29 09:51:12 crc kubenswrapper[4991]: E0929 09:51:12.310048 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1b7d17c980b7bb27e5f4951127d60589f1085dd3cb2fffc9544cb6e4ede58cb\": container with ID starting with a1b7d17c980b7bb27e5f4951127d60589f1085dd3cb2fffc9544cb6e4ede58cb not found: ID does not exist" containerID="a1b7d17c980b7bb27e5f4951127d60589f1085dd3cb2fffc9544cb6e4ede58cb" Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.310084 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b7d17c980b7bb27e5f4951127d60589f1085dd3cb2fffc9544cb6e4ede58cb"} err="failed to get container status \"a1b7d17c980b7bb27e5f4951127d60589f1085dd3cb2fffc9544cb6e4ede58cb\": rpc error: code = NotFound desc = could not find container \"a1b7d17c980b7bb27e5f4951127d60589f1085dd3cb2fffc9544cb6e4ede58cb\": container with ID starting with a1b7d17c980b7bb27e5f4951127d60589f1085dd3cb2fffc9544cb6e4ede58cb not found: ID does not exist" Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.310111 4991 scope.go:117] "RemoveContainer" containerID="0b92f02b0566a437cf6fcef45015e2acb87a93d13ba962c43c068dd68e1844d4" Sep 29 09:51:12 crc kubenswrapper[4991]: E0929 09:51:12.310470 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b92f02b0566a437cf6fcef45015e2acb87a93d13ba962c43c068dd68e1844d4\": container with ID starting with 0b92f02b0566a437cf6fcef45015e2acb87a93d13ba962c43c068dd68e1844d4 not found: ID does not exist" containerID="0b92f02b0566a437cf6fcef45015e2acb87a93d13ba962c43c068dd68e1844d4" Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.310496 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b92f02b0566a437cf6fcef45015e2acb87a93d13ba962c43c068dd68e1844d4"} err="failed to get container status \"0b92f02b0566a437cf6fcef45015e2acb87a93d13ba962c43c068dd68e1844d4\": rpc error: code = NotFound desc = could not find container \"0b92f02b0566a437cf6fcef45015e2acb87a93d13ba962c43c068dd68e1844d4\": container with ID starting with 0b92f02b0566a437cf6fcef45015e2acb87a93d13ba962c43c068dd68e1844d4 not found: ID does not exist" Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.310514 4991 scope.go:117] "RemoveContainer" containerID="14def43f320907cfb146cdaa152937301cc4b38de2ce4870c6ac1d58fc846743" Sep 29 09:51:12 crc kubenswrapper[4991]: E0929 09:51:12.310803 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14def43f320907cfb146cdaa152937301cc4b38de2ce4870c6ac1d58fc846743\": container with ID starting with 14def43f320907cfb146cdaa152937301cc4b38de2ce4870c6ac1d58fc846743 not found: ID does not exist" containerID="14def43f320907cfb146cdaa152937301cc4b38de2ce4870c6ac1d58fc846743" 
Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.310825 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14def43f320907cfb146cdaa152937301cc4b38de2ce4870c6ac1d58fc846743"} err="failed to get container status \"14def43f320907cfb146cdaa152937301cc4b38de2ce4870c6ac1d58fc846743\": rpc error: code = NotFound desc = could not find container \"14def43f320907cfb146cdaa152937301cc4b38de2ce4870c6ac1d58fc846743\": container with ID starting with 14def43f320907cfb146cdaa152937301cc4b38de2ce4870c6ac1d58fc846743 not found: ID does not exist" Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.317380 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqn8d\" (UniqueName: \"kubernetes.io/projected/9dbdca3b-3f19-45dc-9814-a6eb0a08efed-kube-api-access-sqn8d\") pod \"9dbdca3b-3f19-45dc-9814-a6eb0a08efed\" (UID: \"9dbdca3b-3f19-45dc-9814-a6eb0a08efed\") " Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.317468 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dbdca3b-3f19-45dc-9814-a6eb0a08efed-catalog-content\") pod \"9dbdca3b-3f19-45dc-9814-a6eb0a08efed\" (UID: \"9dbdca3b-3f19-45dc-9814-a6eb0a08efed\") " Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.317498 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dbdca3b-3f19-45dc-9814-a6eb0a08efed-utilities\") pod \"9dbdca3b-3f19-45dc-9814-a6eb0a08efed\" (UID: \"9dbdca3b-3f19-45dc-9814-a6eb0a08efed\") " Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.318937 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dbdca3b-3f19-45dc-9814-a6eb0a08efed-utilities" (OuterVolumeSpecName: "utilities") pod "9dbdca3b-3f19-45dc-9814-a6eb0a08efed" (UID: "9dbdca3b-3f19-45dc-9814-a6eb0a08efed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.324973 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dbdca3b-3f19-45dc-9814-a6eb0a08efed-kube-api-access-sqn8d" (OuterVolumeSpecName: "kube-api-access-sqn8d") pod "9dbdca3b-3f19-45dc-9814-a6eb0a08efed" (UID: "9dbdca3b-3f19-45dc-9814-a6eb0a08efed"). InnerVolumeSpecName "kube-api-access-sqn8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.418913 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dbdca3b-3f19-45dc-9814-a6eb0a08efed-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.418969 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqn8d\" (UniqueName: \"kubernetes.io/projected/9dbdca3b-3f19-45dc-9814-a6eb0a08efed-kube-api-access-sqn8d\") on node \"crc\" DevicePath \"\"" Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.759542 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dbdca3b-3f19-45dc-9814-a6eb0a08efed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9dbdca3b-3f19-45dc-9814-a6eb0a08efed" (UID: "9dbdca3b-3f19-45dc-9814-a6eb0a08efed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.824528 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dbdca3b-3f19-45dc-9814-a6eb0a08efed-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.879111 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4xx7l"] Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.885990 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4xx7l"] Sep 29 09:51:12 crc kubenswrapper[4991]: I0929 09:51:12.936024 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dbdca3b-3f19-45dc-9814-a6eb0a08efed" path="/var/lib/kubelet/pods/9dbdca3b-3f19-45dc-9814-a6eb0a08efed/volumes" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.519936 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-n9pvm"] Sep 29 09:51:27 crc kubenswrapper[4991]: E0929 09:51:27.520872 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dbdca3b-3f19-45dc-9814-a6eb0a08efed" containerName="registry-server" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.520889 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dbdca3b-3f19-45dc-9814-a6eb0a08efed" containerName="registry-server" Sep 29 09:51:27 crc kubenswrapper[4991]: E0929 09:51:27.520902 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dbdca3b-3f19-45dc-9814-a6eb0a08efed" containerName="extract-utilities" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.520910 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dbdca3b-3f19-45dc-9814-a6eb0a08efed" containerName="extract-utilities" Sep 29 09:51:27 crc kubenswrapper[4991]: E0929 09:51:27.520923 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dbdca3b-3f19-45dc-9814-a6eb0a08efed" containerName="extract-content" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.520931 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dbdca3b-3f19-45dc-9814-a6eb0a08efed" containerName="extract-content" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.521115 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dbdca3b-3f19-45dc-9814-a6eb0a08efed" containerName="registry-server" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.521791 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.524049 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.525253 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.527562 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.527641 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-lj95w" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.528769 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.546697 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.563224 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-n9pvm"] Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.700807 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-config-openshift-service-cacrt\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.700868 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-config\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.700892 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-sa-token\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.701113 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-tmp\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.701174 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-collector-token\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.701204 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-datadir\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " 
pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.701243 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbqbs\" (UniqueName: \"kubernetes.io/projected/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-kube-api-access-gbqbs\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.701265 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-entrypoint\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.701294 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-collector-syslog-receiver\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.701356 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-trusted-ca\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.701387 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-metrics\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.803101 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbqbs\" (UniqueName: \"kubernetes.io/projected/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-kube-api-access-gbqbs\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.803158 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-entrypoint\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.803220 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-collector-syslog-receiver\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.803262 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-trusted-ca\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.803287 
4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-metrics\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.803342 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-config-openshift-service-cacrt\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.803383 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-config\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.803408 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-sa-token\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.803464 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-tmp\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.803491 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-collector-token\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.803513 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-datadir\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.803620 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-datadir\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: E0929 09:51:27.804782 4991 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Sep 29 09:51:27 crc kubenswrapper[4991]: E0929 09:51:27.804833 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-metrics podName:ff5dec81-79d1-4e4d-8fe3-77362d3c2e73 nodeName:}" failed. No retries permitted until 2025-09-29 09:51:28.304819492 +0000 UTC m=+824.160747510 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-metrics") pod "collector-n9pvm" (UID: "ff5dec81-79d1-4e4d-8fe3-77362d3c2e73") : secret "collector-metrics" not found Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.804985 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-trusted-ca\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.805065 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-config\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.805152 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-config-openshift-service-cacrt\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.805555 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-entrypoint\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.813251 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-tmp\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.813508 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-collector-syslog-receiver\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.824177 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbqbs\" (UniqueName: \"kubernetes.io/projected/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-kube-api-access-gbqbs\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.825275 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-sa-token\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:27 crc kubenswrapper[4991]: I0929 09:51:27.830451 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-collector-token\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:28 crc kubenswrapper[4991]: I0929 
09:51:28.313525 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-metrics\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:28 crc kubenswrapper[4991]: I0929 09:51:28.317299 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/ff5dec81-79d1-4e4d-8fe3-77362d3c2e73-metrics\") pod \"collector-n9pvm\" (UID: \"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73\") " pod="openshift-logging/collector-n9pvm" Sep 29 09:51:28 crc kubenswrapper[4991]: I0929 09:51:28.453002 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-n9pvm" Sep 29 09:51:28 crc kubenswrapper[4991]: I0929 09:51:28.876073 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-n9pvm"] Sep 29 09:51:28 crc kubenswrapper[4991]: W0929 09:51:28.889114 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff5dec81_79d1_4e4d_8fe3_77362d3c2e73.slice/crio-9a905031f530b77e5dee3d7e31530494cc6779ed231239235a8e4759fd1158a1 WatchSource:0}: Error finding container 9a905031f530b77e5dee3d7e31530494cc6779ed231239235a8e4759fd1158a1: Status 404 returned error can't find the container with id 9a905031f530b77e5dee3d7e31530494cc6779ed231239235a8e4759fd1158a1 Sep 29 09:51:29 crc kubenswrapper[4991]: I0929 09:51:29.362722 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-n9pvm" event={"ID":"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73","Type":"ContainerStarted","Data":"9a905031f530b77e5dee3d7e31530494cc6779ed231239235a8e4759fd1158a1"} Sep 29 09:51:36 crc kubenswrapper[4991]: I0929 09:51:36.412255 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-n9pvm" event={"ID":"ff5dec81-79d1-4e4d-8fe3-77362d3c2e73","Type":"ContainerStarted","Data":"9199a95094b8de7b383566978edc8030efe8f30286b4ef38387ec3089ba41a08"} Sep 29 09:51:36 crc kubenswrapper[4991]: I0929 09:51:36.435751 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-n9pvm" podStartSLOduration=2.297211411 podStartE2EDuration="9.435734139s" podCreationTimestamp="2025-09-29 09:51:27 +0000 UTC" firstStartedPulling="2025-09-29 09:51:28.892676655 +0000 UTC m=+824.748604713" lastFinishedPulling="2025-09-29 09:51:36.031199383 +0000 UTC m=+831.887127441" observedRunningTime="2025-09-29 09:51:36.429921445 +0000 UTC m=+832.285849473" watchObservedRunningTime="2025-09-29 09:51:36.435734139 +0000 UTC m=+832.291662167" Sep 29 09:51:37 crc kubenswrapper[4991]: I0929 09:51:37.946468 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 09:51:37 crc kubenswrapper[4991]: I0929 09:51:37.946789 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 09:51:37 crc kubenswrapper[4991]: 
I0929 09:51:37.946835 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 09:51:37 crc kubenswrapper[4991]: I0929 09:51:37.947545 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a15f13cc985510f6f7295356c1768851fe16cd07a673836c646f2a0f6e9e542c"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 09:51:37 crc kubenswrapper[4991]: I0929 09:51:37.947608 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://a15f13cc985510f6f7295356c1768851fe16cd07a673836c646f2a0f6e9e542c" gracePeriod=600 Sep 29 09:51:38 crc kubenswrapper[4991]: I0929 09:51:38.428387 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="a15f13cc985510f6f7295356c1768851fe16cd07a673836c646f2a0f6e9e542c" exitCode=0 Sep 29 09:51:38 crc kubenswrapper[4991]: I0929 09:51:38.428539 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"a15f13cc985510f6f7295356c1768851fe16cd07a673836c646f2a0f6e9e542c"} Sep 29 09:51:38 crc kubenswrapper[4991]: I0929 09:51:38.428658 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"e7779fdb3bc2d0c900ebebd790f1060c4e3f133862501be62eb494f0ee1b0541"} Sep 29 09:51:38 crc kubenswrapper[4991]: I0929 09:51:38.428688 4991 scope.go:117] "RemoveContainer" containerID="5e90e609f04cdc8c490f65e3c9ca90d77992faea1b259c2cf78d14fefc1dd179" Sep 29 09:51:51 crc kubenswrapper[4991]: I0929 09:51:51.898898 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv"] Sep 29 09:51:51 crc kubenswrapper[4991]: I0929 09:51:51.900830 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv" Sep 29 09:51:51 crc kubenswrapper[4991]: I0929 09:51:51.906129 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Sep 29 09:51:51 crc kubenswrapper[4991]: I0929 09:51:51.920900 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv"] Sep 29 09:51:52 crc kubenswrapper[4991]: I0929 09:51:52.054164 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8840b439-f65d-4c5f-8022-5261a9a9eaeb-bundle\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv\" (UID: \"8840b439-f65d-4c5f-8022-5261a9a9eaeb\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv" Sep 29 09:51:52 crc kubenswrapper[4991]: I0929 09:51:52.054640 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m67z4\" (UniqueName: \"kubernetes.io/projected/8840b439-f65d-4c5f-8022-5261a9a9eaeb-kube-api-access-m67z4\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv\" (UID: \"8840b439-f65d-4c5f-8022-5261a9a9eaeb\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv" Sep 29 09:51:52 crc kubenswrapper[4991]: I0929 09:51:52.054752 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8840b439-f65d-4c5f-8022-5261a9a9eaeb-util\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv\" (UID: \"8840b439-f65d-4c5f-8022-5261a9a9eaeb\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv" Sep 29 09:51:52 crc kubenswrapper[4991]: I0929 09:51:52.156272 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m67z4\" (UniqueName: \"kubernetes.io/projected/8840b439-f65d-4c5f-8022-5261a9a9eaeb-kube-api-access-m67z4\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv\" (UID: \"8840b439-f65d-4c5f-8022-5261a9a9eaeb\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv" Sep 29 09:51:52 crc kubenswrapper[4991]: I0929 09:51:52.156357 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8840b439-f65d-4c5f-8022-5261a9a9eaeb-util\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv\" (UID: \"8840b439-f65d-4c5f-8022-5261a9a9eaeb\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv" Sep 29 09:51:52 crc kubenswrapper[4991]: I0929 09:51:52.156484 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8840b439-f65d-4c5f-8022-5261a9a9eaeb-bundle\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv\" (UID: \"8840b439-f65d-4c5f-8022-5261a9a9eaeb\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv" Sep 29 09:51:52 crc kubenswrapper[4991]: I0929 09:51:52.157068 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/8840b439-f65d-4c5f-8022-5261a9a9eaeb-bundle\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv\" (UID: \"8840b439-f65d-4c5f-8022-5261a9a9eaeb\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv" Sep 29 09:51:52 crc kubenswrapper[4991]: I0929 09:51:52.157207 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8840b439-f65d-4c5f-8022-5261a9a9eaeb-util\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv\" (UID: \"8840b439-f65d-4c5f-8022-5261a9a9eaeb\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv" Sep 29 09:51:52 crc kubenswrapper[4991]: I0929 09:51:52.177154 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m67z4\" (UniqueName: \"kubernetes.io/projected/8840b439-f65d-4c5f-8022-5261a9a9eaeb-kube-api-access-m67z4\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv\" (UID: \"8840b439-f65d-4c5f-8022-5261a9a9eaeb\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv" Sep 29 09:51:52 crc kubenswrapper[4991]: I0929 09:51:52.226942 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv" Sep 29 09:51:52 crc kubenswrapper[4991]: I0929 09:51:52.546661 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv"] Sep 29 09:51:53 crc kubenswrapper[4991]: I0929 09:51:53.541654 4991 generic.go:334] "Generic (PLEG): container finished" podID="8840b439-f65d-4c5f-8022-5261a9a9eaeb" containerID="82ac8bea63c14aa1d0af4e5d568f2409e8889a3a09247f8889e5fce044e005d7" exitCode=0 Sep 29 09:51:53 crc kubenswrapper[4991]: I0929 09:51:53.541964 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv" event={"ID":"8840b439-f65d-4c5f-8022-5261a9a9eaeb","Type":"ContainerDied","Data":"82ac8bea63c14aa1d0af4e5d568f2409e8889a3a09247f8889e5fce044e005d7"} Sep 29 09:51:53 crc kubenswrapper[4991]: I0929 09:51:53.541991 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv" event={"ID":"8840b439-f65d-4c5f-8022-5261a9a9eaeb","Type":"ContainerStarted","Data":"9d8f0cd28d100ad0922e6bc3deb5396968df0c5934d9d1762cbcf6d8d64756fa"} Sep 29 09:51:55 crc kubenswrapper[4991]: I0929 09:51:55.460012 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8tl25"] Sep 29 09:51:55 crc kubenswrapper[4991]: I0929 09:51:55.461936 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8tl25" Sep 29 09:51:55 crc kubenswrapper[4991]: I0929 09:51:55.479115 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8tl25"] Sep 29 09:51:55 crc kubenswrapper[4991]: I0929 09:51:55.509836 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f884275-6f88-40ea-a0ed-78c298c1618c-utilities\") pod \"certified-operators-8tl25\" (UID: \"2f884275-6f88-40ea-a0ed-78c298c1618c\") " pod="openshift-marketplace/certified-operators-8tl25" Sep 29 09:51:55 crc kubenswrapper[4991]: I0929 09:51:55.510010 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f884275-6f88-40ea-a0ed-78c298c1618c-catalog-content\") pod \"certified-operators-8tl25\" (UID: \"2f884275-6f88-40ea-a0ed-78c298c1618c\") " pod="openshift-marketplace/certified-operators-8tl25" Sep 29 09:51:55 crc kubenswrapper[4991]: I0929 09:51:55.510043 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j5w4\" (UniqueName: \"kubernetes.io/projected/2f884275-6f88-40ea-a0ed-78c298c1618c-kube-api-access-6j5w4\") pod \"certified-operators-8tl25\" (UID: \"2f884275-6f88-40ea-a0ed-78c298c1618c\") " pod="openshift-marketplace/certified-operators-8tl25" Sep 29 09:51:55 crc kubenswrapper[4991]: I0929 09:51:55.612267 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f884275-6f88-40ea-a0ed-78c298c1618c-catalog-content\") pod \"certified-operators-8tl25\" (UID: \"2f884275-6f88-40ea-a0ed-78c298c1618c\") " pod="openshift-marketplace/certified-operators-8tl25" Sep 29 09:51:55 crc kubenswrapper[4991]: I0929 09:51:55.612668 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j5w4\" (UniqueName: \"kubernetes.io/projected/2f884275-6f88-40ea-a0ed-78c298c1618c-kube-api-access-6j5w4\") pod \"certified-operators-8tl25\" (UID: \"2f884275-6f88-40ea-a0ed-78c298c1618c\") " pod="openshift-marketplace/certified-operators-8tl25" Sep 29 09:51:55 crc kubenswrapper[4991]: I0929 09:51:55.612739 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f884275-6f88-40ea-a0ed-78c298c1618c-utilities\") pod \"certified-operators-8tl25\" (UID: \"2f884275-6f88-40ea-a0ed-78c298c1618c\") " pod="openshift-marketplace/certified-operators-8tl25" Sep 29 09:51:55 crc kubenswrapper[4991]: I0929 09:51:55.612916 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f884275-6f88-40ea-a0ed-78c298c1618c-catalog-content\") pod \"certified-operators-8tl25\" (UID: \"2f884275-6f88-40ea-a0ed-78c298c1618c\") " pod="openshift-marketplace/certified-operators-8tl25" Sep 29 09:51:55 crc kubenswrapper[4991]: I0929 09:51:55.613179 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f884275-6f88-40ea-a0ed-78c298c1618c-utilities\") pod \"certified-operators-8tl25\" (UID: \"2f884275-6f88-40ea-a0ed-78c298c1618c\") " pod="openshift-marketplace/certified-operators-8tl25" Sep 29 09:51:55 crc kubenswrapper[4991]: I0929 09:51:55.632717 4991 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6j5w4\" (UniqueName: \"kubernetes.io/projected/2f884275-6f88-40ea-a0ed-78c298c1618c-kube-api-access-6j5w4\") pod \"certified-operators-8tl25\" (UID: \"2f884275-6f88-40ea-a0ed-78c298c1618c\") " pod="openshift-marketplace/certified-operators-8tl25" Sep 29 09:51:55 crc kubenswrapper[4991]: I0929 09:51:55.799007 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8tl25" Sep 29 09:51:56 crc kubenswrapper[4991]: I0929 09:51:56.259824 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8tl25"] Sep 29 09:51:56 crc kubenswrapper[4991]: W0929 09:51:56.266187 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f884275_6f88_40ea_a0ed_78c298c1618c.slice/crio-38c2331a72be14f14d0c572e12d134dc03f167eb7f87c362d6f60da2571d3929 WatchSource:0}: Error finding container 38c2331a72be14f14d0c572e12d134dc03f167eb7f87c362d6f60da2571d3929: Status 404 returned error can't find the container with id 38c2331a72be14f14d0c572e12d134dc03f167eb7f87c362d6f60da2571d3929 Sep 29 09:51:56 crc kubenswrapper[4991]: I0929 09:51:56.572722 4991 generic.go:334] "Generic (PLEG): container finished" podID="8840b439-f65d-4c5f-8022-5261a9a9eaeb" containerID="615c90820af560de6cf95d1bd463f4d8fa1a4e0ef551bc303946d53fc5453b20" exitCode=0 Sep 29 09:51:56 crc kubenswrapper[4991]: I0929 09:51:56.572827 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv" event={"ID":"8840b439-f65d-4c5f-8022-5261a9a9eaeb","Type":"ContainerDied","Data":"615c90820af560de6cf95d1bd463f4d8fa1a4e0ef551bc303946d53fc5453b20"} Sep 29 09:51:56 crc kubenswrapper[4991]: I0929 09:51:56.574622 4991 generic.go:334] "Generic (PLEG): container finished" podID="2f884275-6f88-40ea-a0ed-78c298c1618c" containerID="ab91ead7f7a7286f77d1afaadb9a2427ae5174e0e9950fecffeb21279c5662d6" exitCode=0 Sep 29 09:51:56 crc kubenswrapper[4991]: I0929 09:51:56.574665 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tl25" event={"ID":"2f884275-6f88-40ea-a0ed-78c298c1618c","Type":"ContainerDied","Data":"ab91ead7f7a7286f77d1afaadb9a2427ae5174e0e9950fecffeb21279c5662d6"} Sep 29 09:51:56 crc kubenswrapper[4991]: I0929 09:51:56.574713 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tl25" event={"ID":"2f884275-6f88-40ea-a0ed-78c298c1618c","Type":"ContainerStarted","Data":"38c2331a72be14f14d0c572e12d134dc03f167eb7f87c362d6f60da2571d3929"} Sep 29 09:51:57 crc kubenswrapper[4991]: I0929 09:51:57.588177 4991 generic.go:334] "Generic (PLEG): container finished" podID="8840b439-f65d-4c5f-8022-5261a9a9eaeb" containerID="d6f1fcff013a99e4da9ed1bcf69e4cdec358b55cad41122f07e467ed7a561979" exitCode=0 Sep 29 09:51:57 crc kubenswrapper[4991]: I0929 09:51:57.588238 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv" event={"ID":"8840b439-f65d-4c5f-8022-5261a9a9eaeb","Type":"ContainerDied","Data":"d6f1fcff013a99e4da9ed1bcf69e4cdec358b55cad41122f07e467ed7a561979"} Sep 29 09:51:58 crc kubenswrapper[4991]: I0929 09:51:58.596716 4991 generic.go:334] "Generic (PLEG): container finished" podID="2f884275-6f88-40ea-a0ed-78c298c1618c" 
containerID="c1bca7c0b468e5df17c612359ca0cfedbed9574e40599c11511e6d3bf2cabcd2" exitCode=0 Sep 29 09:51:58 crc kubenswrapper[4991]: I0929 09:51:58.596806 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tl25" event={"ID":"2f884275-6f88-40ea-a0ed-78c298c1618c","Type":"ContainerDied","Data":"c1bca7c0b468e5df17c612359ca0cfedbed9574e40599c11511e6d3bf2cabcd2"} Sep 29 09:51:58 crc kubenswrapper[4991]: I0929 09:51:58.912149 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv" Sep 29 09:51:58 crc kubenswrapper[4991]: I0929 09:51:58.974992 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8840b439-f65d-4c5f-8022-5261a9a9eaeb-bundle\") pod \"8840b439-f65d-4c5f-8022-5261a9a9eaeb\" (UID: \"8840b439-f65d-4c5f-8022-5261a9a9eaeb\") " Sep 29 09:51:58 crc kubenswrapper[4991]: I0929 09:51:58.975064 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m67z4\" (UniqueName: \"kubernetes.io/projected/8840b439-f65d-4c5f-8022-5261a9a9eaeb-kube-api-access-m67z4\") pod \"8840b439-f65d-4c5f-8022-5261a9a9eaeb\" (UID: \"8840b439-f65d-4c5f-8022-5261a9a9eaeb\") " Sep 29 09:51:58 crc kubenswrapper[4991]: I0929 09:51:58.975183 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8840b439-f65d-4c5f-8022-5261a9a9eaeb-util\") pod \"8840b439-f65d-4c5f-8022-5261a9a9eaeb\" (UID: \"8840b439-f65d-4c5f-8022-5261a9a9eaeb\") " Sep 29 09:51:58 crc kubenswrapper[4991]: I0929 09:51:58.975837 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8840b439-f65d-4c5f-8022-5261a9a9eaeb-bundle" (OuterVolumeSpecName: "bundle") pod "8840b439-f65d-4c5f-8022-5261a9a9eaeb" (UID: "8840b439-f65d-4c5f-8022-5261a9a9eaeb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:51:58 crc kubenswrapper[4991]: I0929 09:51:58.976887 4991 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8840b439-f65d-4c5f-8022-5261a9a9eaeb-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:51:58 crc kubenswrapper[4991]: I0929 09:51:58.983327 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8840b439-f65d-4c5f-8022-5261a9a9eaeb-kube-api-access-m67z4" (OuterVolumeSpecName: "kube-api-access-m67z4") pod "8840b439-f65d-4c5f-8022-5261a9a9eaeb" (UID: "8840b439-f65d-4c5f-8022-5261a9a9eaeb"). InnerVolumeSpecName "kube-api-access-m67z4". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:51:58 crc kubenswrapper[4991]: I0929 09:51:58.987528 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8840b439-f65d-4c5f-8022-5261a9a9eaeb-util" (OuterVolumeSpecName: "util") pod "8840b439-f65d-4c5f-8022-5261a9a9eaeb" (UID: "8840b439-f65d-4c5f-8022-5261a9a9eaeb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:51:59 crc kubenswrapper[4991]: I0929 09:51:59.078924 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m67z4\" (UniqueName: \"kubernetes.io/projected/8840b439-f65d-4c5f-8022-5261a9a9eaeb-kube-api-access-m67z4\") on node \"crc\" DevicePath \"\"" Sep 29 09:51:59 crc kubenswrapper[4991]: I0929 09:51:59.078986 4991 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8840b439-f65d-4c5f-8022-5261a9a9eaeb-util\") on node \"crc\" DevicePath \"\"" Sep 29 09:51:59 crc kubenswrapper[4991]: I0929 09:51:59.604922 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tl25" event={"ID":"2f884275-6f88-40ea-a0ed-78c298c1618c","Type":"ContainerStarted","Data":"98c6e3df0506b1ad3faa31c32f731c765a2e093030a21b1f76e7d28884e04787"} Sep 29 09:51:59 crc kubenswrapper[4991]: I0929 09:51:59.607658 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv" event={"ID":"8840b439-f65d-4c5f-8022-5261a9a9eaeb","Type":"ContainerDied","Data":"9d8f0cd28d100ad0922e6bc3deb5396968df0c5934d9d1762cbcf6d8d64756fa"} Sep 29 09:51:59 crc kubenswrapper[4991]: I0929 09:51:59.607682 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d8f0cd28d100ad0922e6bc3deb5396968df0c5934d9d1762cbcf6d8d64756fa" Sep 29 09:51:59 crc kubenswrapper[4991]: I0929 09:51:59.607705 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv" Sep 29 09:51:59 crc kubenswrapper[4991]: I0929 09:51:59.952619 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8tl25" podStartSLOduration=2.296852313 podStartE2EDuration="4.952597913s" podCreationTimestamp="2025-09-29 09:51:55 +0000 UTC" firstStartedPulling="2025-09-29 09:51:56.575667764 +0000 UTC m=+852.431595792" lastFinishedPulling="2025-09-29 09:51:59.231413374 +0000 UTC m=+855.087341392" observedRunningTime="2025-09-29 09:51:59.627677555 +0000 UTC m=+855.483605583" watchObservedRunningTime="2025-09-29 09:51:59.952597913 +0000 UTC m=+855.808525941" Sep 29 09:52:04 crc kubenswrapper[4991]: I0929 09:52:04.050997 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5d6f6cfd66-xmqmk"] Sep 29 09:52:04 crc kubenswrapper[4991]: E0929 09:52:04.051641 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8840b439-f65d-4c5f-8022-5261a9a9eaeb" containerName="util" Sep 29 09:52:04 crc kubenswrapper[4991]: I0929 09:52:04.051659 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="8840b439-f65d-4c5f-8022-5261a9a9eaeb" containerName="util" Sep 29 09:52:04 crc kubenswrapper[4991]: E0929 09:52:04.051680 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8840b439-f65d-4c5f-8022-5261a9a9eaeb" containerName="pull" Sep 29 09:52:04 crc kubenswrapper[4991]: I0929 09:52:04.051689 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="8840b439-f65d-4c5f-8022-5261a9a9eaeb" containerName="pull" Sep 29 09:52:04 crc kubenswrapper[4991]: E0929 09:52:04.051706 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8840b439-f65d-4c5f-8022-5261a9a9eaeb" containerName="extract" Sep 29 09:52:04 crc kubenswrapper[4991]: I0929 09:52:04.051713 4991 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8840b439-f65d-4c5f-8022-5261a9a9eaeb" containerName="extract" Sep 29 09:52:04 crc kubenswrapper[4991]: I0929 09:52:04.051878 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="8840b439-f65d-4c5f-8022-5261a9a9eaeb" containerName="extract" Sep 29 09:52:04 crc kubenswrapper[4991]: I0929 09:52:04.052506 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-xmqmk" Sep 29 09:52:04 crc kubenswrapper[4991]: I0929 09:52:04.054721 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Sep 29 09:52:04 crc kubenswrapper[4991]: I0929 09:52:04.054739 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Sep 29 09:52:04 crc kubenswrapper[4991]: I0929 09:52:04.054848 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-vpql7" Sep 29 09:52:04 crc kubenswrapper[4991]: I0929 09:52:04.071737 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5d6f6cfd66-xmqmk"] Sep 29 09:52:04 crc kubenswrapper[4991]: I0929 09:52:04.162311 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nh5j\" (UniqueName: \"kubernetes.io/projected/7f50af83-028a-45af-9b59-31b333747526-kube-api-access-2nh5j\") pod \"nmstate-operator-5d6f6cfd66-xmqmk\" (UID: \"7f50af83-028a-45af-9b59-31b333747526\") " pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-xmqmk" Sep 29 09:52:04 crc kubenswrapper[4991]: I0929 09:52:04.264209 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nh5j\" (UniqueName: \"kubernetes.io/projected/7f50af83-028a-45af-9b59-31b333747526-kube-api-access-2nh5j\") pod \"nmstate-operator-5d6f6cfd66-xmqmk\" (UID: \"7f50af83-028a-45af-9b59-31b333747526\") " pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-xmqmk" Sep 29 09:52:04 crc kubenswrapper[4991]: I0929 09:52:04.291241 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nh5j\" (UniqueName: \"kubernetes.io/projected/7f50af83-028a-45af-9b59-31b333747526-kube-api-access-2nh5j\") pod \"nmstate-operator-5d6f6cfd66-xmqmk\" (UID: \"7f50af83-028a-45af-9b59-31b333747526\") " pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-xmqmk" Sep 29 09:52:04 crc kubenswrapper[4991]: I0929 09:52:04.420028 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-xmqmk" Sep 29 09:52:04 crc kubenswrapper[4991]: I0929 09:52:04.816681 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5d6f6cfd66-xmqmk"] Sep 29 09:52:05 crc kubenswrapper[4991]: I0929 09:52:05.648805 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-xmqmk" event={"ID":"7f50af83-028a-45af-9b59-31b333747526","Type":"ContainerStarted","Data":"f54b0a897ddf074920379f9431097b5c7c5cea2f613ffce57ec869dc19479f66"} Sep 29 09:52:05 crc kubenswrapper[4991]: I0929 09:52:05.799704 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8tl25" Sep 29 09:52:05 crc kubenswrapper[4991]: I0929 09:52:05.799750 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8tl25" Sep 29 09:52:05 crc kubenswrapper[4991]: I0929 09:52:05.846741 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8tl25" Sep 29 09:52:06 crc kubenswrapper[4991]: I0929 09:52:06.694354 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8tl25" Sep 29 09:52:07 crc kubenswrapper[4991]: I0929 09:52:07.451102 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8tl25"] Sep 29 09:52:08 crc kubenswrapper[4991]: I0929 09:52:08.670517 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8tl25" podUID="2f884275-6f88-40ea-a0ed-78c298c1618c" containerName="registry-server" containerID="cri-o://98c6e3df0506b1ad3faa31c32f731c765a2e093030a21b1f76e7d28884e04787" gracePeriod=2 Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.679171 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8tl25" Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.692649 4991 generic.go:334] "Generic (PLEG): container finished" podID="2f884275-6f88-40ea-a0ed-78c298c1618c" containerID="98c6e3df0506b1ad3faa31c32f731c765a2e093030a21b1f76e7d28884e04787" exitCode=0 Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.692710 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tl25" event={"ID":"2f884275-6f88-40ea-a0ed-78c298c1618c","Type":"ContainerDied","Data":"98c6e3df0506b1ad3faa31c32f731c765a2e093030a21b1f76e7d28884e04787"} Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.692794 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tl25" event={"ID":"2f884275-6f88-40ea-a0ed-78c298c1618c","Type":"ContainerDied","Data":"38c2331a72be14f14d0c572e12d134dc03f167eb7f87c362d6f60da2571d3929"} Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.692814 4991 scope.go:117] "RemoveContainer" containerID="98c6e3df0506b1ad3faa31c32f731c765a2e093030a21b1f76e7d28884e04787" Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.692995 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8tl25" Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.734413 4991 scope.go:117] "RemoveContainer" containerID="c1bca7c0b468e5df17c612359ca0cfedbed9574e40599c11511e6d3bf2cabcd2" Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.772852 4991 scope.go:117] "RemoveContainer" containerID="ab91ead7f7a7286f77d1afaadb9a2427ae5174e0e9950fecffeb21279c5662d6" Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.790488 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j5w4\" (UniqueName: \"kubernetes.io/projected/2f884275-6f88-40ea-a0ed-78c298c1618c-kube-api-access-6j5w4\") pod \"2f884275-6f88-40ea-a0ed-78c298c1618c\" (UID: \"2f884275-6f88-40ea-a0ed-78c298c1618c\") " Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.790618 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f884275-6f88-40ea-a0ed-78c298c1618c-utilities\") pod \"2f884275-6f88-40ea-a0ed-78c298c1618c\" (UID: \"2f884275-6f88-40ea-a0ed-78c298c1618c\") " Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.790719 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f884275-6f88-40ea-a0ed-78c298c1618c-catalog-content\") pod \"2f884275-6f88-40ea-a0ed-78c298c1618c\" (UID: \"2f884275-6f88-40ea-a0ed-78c298c1618c\") " Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.791346 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f884275-6f88-40ea-a0ed-78c298c1618c-utilities" (OuterVolumeSpecName: "utilities") pod "2f884275-6f88-40ea-a0ed-78c298c1618c" (UID: "2f884275-6f88-40ea-a0ed-78c298c1618c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.797106 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f884275-6f88-40ea-a0ed-78c298c1618c-kube-api-access-6j5w4" (OuterVolumeSpecName: "kube-api-access-6j5w4") pod "2f884275-6f88-40ea-a0ed-78c298c1618c" (UID: "2f884275-6f88-40ea-a0ed-78c298c1618c"). InnerVolumeSpecName "kube-api-access-6j5w4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.834738 4991 scope.go:117] "RemoveContainer" containerID="98c6e3df0506b1ad3faa31c32f731c765a2e093030a21b1f76e7d28884e04787" Sep 29 09:52:10 crc kubenswrapper[4991]: E0929 09:52:10.836016 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98c6e3df0506b1ad3faa31c32f731c765a2e093030a21b1f76e7d28884e04787\": container with ID starting with 98c6e3df0506b1ad3faa31c32f731c765a2e093030a21b1f76e7d28884e04787 not found: ID does not exist" containerID="98c6e3df0506b1ad3faa31c32f731c765a2e093030a21b1f76e7d28884e04787" Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.836065 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c6e3df0506b1ad3faa31c32f731c765a2e093030a21b1f76e7d28884e04787"} err="failed to get container status \"98c6e3df0506b1ad3faa31c32f731c765a2e093030a21b1f76e7d28884e04787\": rpc error: code = NotFound desc = could not find container \"98c6e3df0506b1ad3faa31c32f731c765a2e093030a21b1f76e7d28884e04787\": container with ID starting with 98c6e3df0506b1ad3faa31c32f731c765a2e093030a21b1f76e7d28884e04787 not found: ID does not exist" Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.836097 4991 scope.go:117] "RemoveContainer" containerID="c1bca7c0b468e5df17c612359ca0cfedbed9574e40599c11511e6d3bf2cabcd2" Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.836226 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f884275-6f88-40ea-a0ed-78c298c1618c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f884275-6f88-40ea-a0ed-78c298c1618c" (UID: "2f884275-6f88-40ea-a0ed-78c298c1618c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:52:10 crc kubenswrapper[4991]: E0929 09:52:10.836800 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1bca7c0b468e5df17c612359ca0cfedbed9574e40599c11511e6d3bf2cabcd2\": container with ID starting with c1bca7c0b468e5df17c612359ca0cfedbed9574e40599c11511e6d3bf2cabcd2 not found: ID does not exist" containerID="c1bca7c0b468e5df17c612359ca0cfedbed9574e40599c11511e6d3bf2cabcd2" Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.836887 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1bca7c0b468e5df17c612359ca0cfedbed9574e40599c11511e6d3bf2cabcd2"} err="failed to get container status \"c1bca7c0b468e5df17c612359ca0cfedbed9574e40599c11511e6d3bf2cabcd2\": rpc error: code = NotFound desc = could not find container \"c1bca7c0b468e5df17c612359ca0cfedbed9574e40599c11511e6d3bf2cabcd2\": container with ID starting with c1bca7c0b468e5df17c612359ca0cfedbed9574e40599c11511e6d3bf2cabcd2 not found: ID does not exist" Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.836921 4991 scope.go:117] "RemoveContainer" containerID="ab91ead7f7a7286f77d1afaadb9a2427ae5174e0e9950fecffeb21279c5662d6" Sep 29 09:52:10 crc kubenswrapper[4991]: E0929 09:52:10.837561 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab91ead7f7a7286f77d1afaadb9a2427ae5174e0e9950fecffeb21279c5662d6\": container with ID starting with ab91ead7f7a7286f77d1afaadb9a2427ae5174e0e9950fecffeb21279c5662d6 not found: ID does not exist" containerID="ab91ead7f7a7286f77d1afaadb9a2427ae5174e0e9950fecffeb21279c5662d6" Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.837590 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab91ead7f7a7286f77d1afaadb9a2427ae5174e0e9950fecffeb21279c5662d6"} err="failed to get container status \"ab91ead7f7a7286f77d1afaadb9a2427ae5174e0e9950fecffeb21279c5662d6\": rpc error: code = NotFound desc = could not find container \"ab91ead7f7a7286f77d1afaadb9a2427ae5174e0e9950fecffeb21279c5662d6\": container with ID starting with ab91ead7f7a7286f77d1afaadb9a2427ae5174e0e9950fecffeb21279c5662d6 not found: ID does not exist" Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.892638 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f884275-6f88-40ea-a0ed-78c298c1618c-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.892987 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f884275-6f88-40ea-a0ed-78c298c1618c-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 09:52:10 crc kubenswrapper[4991]: I0929 09:52:10.893002 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6j5w4\" (UniqueName: \"kubernetes.io/projected/2f884275-6f88-40ea-a0ed-78c298c1618c-kube-api-access-6j5w4\") on node \"crc\" DevicePath \"\"" Sep 29 09:52:11 crc kubenswrapper[4991]: I0929 09:52:11.013221 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8tl25"] Sep 29 09:52:11 crc kubenswrapper[4991]: I0929 09:52:11.017652 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8tl25"] Sep 29 09:52:11 crc kubenswrapper[4991]: I0929 
Sep 29 09:52:11 crc kubenswrapper[4991]: I0929 09:52:11.704337 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-xmqmk" event={"ID":"7f50af83-028a-45af-9b59-31b333747526","Type":"ContainerStarted","Data":"7680a9e231594d8d3464f49b8de178bf2f57aa5a250b33c177f9057c73a7a512"}
Sep 29 09:52:11 crc kubenswrapper[4991]: I0929 09:52:11.733552 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-xmqmk" podStartSLOduration=1.860619943 podStartE2EDuration="7.733407502s" podCreationTimestamp="2025-09-29 09:52:04 +0000 UTC" firstStartedPulling="2025-09-29 09:52:04.826671597 +0000 UTC m=+860.682599625" lastFinishedPulling="2025-09-29 09:52:10.699459156 +0000 UTC m=+866.555387184" observedRunningTime="2025-09-29 09:52:11.723360887 +0000 UTC m=+867.579288965" watchObservedRunningTime="2025-09-29 09:52:11.733407502 +0000 UTC m=+867.589335540"
Sep 29 09:52:12 crc kubenswrapper[4991]: I0929 09:52:12.935101 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f884275-6f88-40ea-a0ed-78c298c1618c" path="/var/lib/kubelet/pods/2f884275-6f88-40ea-a0ed-78c298c1618c/volumes"
Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.206509 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58fcddf996-f4kf5"]
Sep 29 09:52:13 crc kubenswrapper[4991]: E0929 09:52:13.206842 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f884275-6f88-40ea-a0ed-78c298c1618c" containerName="extract-content"
Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.206861 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f884275-6f88-40ea-a0ed-78c298c1618c" containerName="extract-content"
Sep 29 09:52:13 crc kubenswrapper[4991]: E0929 09:52:13.206877 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f884275-6f88-40ea-a0ed-78c298c1618c" containerName="registry-server"
Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.206885 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f884275-6f88-40ea-a0ed-78c298c1618c" containerName="registry-server"
Sep 29 09:52:13 crc kubenswrapper[4991]: E0929 09:52:13.206894 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f884275-6f88-40ea-a0ed-78c298c1618c" containerName="extract-utilities"
Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.206903 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f884275-6f88-40ea-a0ed-78c298c1618c" containerName="extract-utilities"
Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.207113 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f884275-6f88-40ea-a0ed-78c298c1618c" containerName="registry-server"
Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.208039 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58fcddf996-f4kf5"
Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.210781 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-dd75k"
Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.223682 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58fcddf996-f4kf5"]
Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.231921 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6d689559c5-r8qrg"]
Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.238912 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6d689559c5-r8qrg"
Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.247836 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.254845 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6d689559c5-r8qrg"]
Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.284546 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-27kdc"]
Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.291104 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-27kdc"
Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.334440 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5956fec2-ffc3-4fba-8ba0-bfea0bc31abd-tls-key-pair\") pod \"nmstate-webhook-6d689559c5-r8qrg\" (UID: \"5956fec2-ffc3-4fba-8ba0-bfea0bc31abd\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-r8qrg"
Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.334536 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck9zl\" (UniqueName: \"kubernetes.io/projected/5128d12b-b704-4b3b-b9db-06abfa2dc3cc-kube-api-access-ck9zl\") pod \"nmstate-metrics-58fcddf996-f4kf5\" (UID: \"5128d12b-b704-4b3b-b9db-06abfa2dc3cc\") " pod="openshift-nmstate/nmstate-metrics-58fcddf996-f4kf5"
Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.334566 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjg2m\" (UniqueName: \"kubernetes.io/projected/5956fec2-ffc3-4fba-8ba0-bfea0bc31abd-kube-api-access-xjg2m\") pod \"nmstate-webhook-6d689559c5-r8qrg\" (UID: \"5956fec2-ffc3-4fba-8ba0-bfea0bc31abd\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-r8qrg"
Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.422039 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-864bb6dfb5-j2v2f"]
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-j2v2f" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.435570 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.435865 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.436206 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-7xckf" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.436670 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1084227f-9c42-4e89-aa18-bd2b4fcd9c93-dbus-socket\") pod \"nmstate-handler-27kdc\" (UID: \"1084227f-9c42-4e89-aa18-bd2b4fcd9c93\") " pod="openshift-nmstate/nmstate-handler-27kdc" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.436751 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck9zl\" (UniqueName: \"kubernetes.io/projected/5128d12b-b704-4b3b-b9db-06abfa2dc3cc-kube-api-access-ck9zl\") pod \"nmstate-metrics-58fcddf996-f4kf5\" (UID: \"5128d12b-b704-4b3b-b9db-06abfa2dc3cc\") " pod="openshift-nmstate/nmstate-metrics-58fcddf996-f4kf5" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.436780 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjg2m\" (UniqueName: \"kubernetes.io/projected/5956fec2-ffc3-4fba-8ba0-bfea0bc31abd-kube-api-access-xjg2m\") pod \"nmstate-webhook-6d689559c5-r8qrg\" (UID: \"5956fec2-ffc3-4fba-8ba0-bfea0bc31abd\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-r8qrg" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.436818 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbg4r\" (UniqueName: \"kubernetes.io/projected/1084227f-9c42-4e89-aa18-bd2b4fcd9c93-kube-api-access-dbg4r\") pod \"nmstate-handler-27kdc\" (UID: \"1084227f-9c42-4e89-aa18-bd2b4fcd9c93\") " pod="openshift-nmstate/nmstate-handler-27kdc" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.436875 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1084227f-9c42-4e89-aa18-bd2b4fcd9c93-nmstate-lock\") pod \"nmstate-handler-27kdc\" (UID: \"1084227f-9c42-4e89-aa18-bd2b4fcd9c93\") " pod="openshift-nmstate/nmstate-handler-27kdc" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.436924 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1084227f-9c42-4e89-aa18-bd2b4fcd9c93-ovs-socket\") pod \"nmstate-handler-27kdc\" (UID: \"1084227f-9c42-4e89-aa18-bd2b4fcd9c93\") " pod="openshift-nmstate/nmstate-handler-27kdc" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.437044 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5956fec2-ffc3-4fba-8ba0-bfea0bc31abd-tls-key-pair\") pod \"nmstate-webhook-6d689559c5-r8qrg\" (UID: \"5956fec2-ffc3-4fba-8ba0-bfea0bc31abd\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-r8qrg" Sep 29 09:52:13 crc kubenswrapper[4991]: E0929 09:52:13.437506 4991 secret.go:188] Couldn't get secret 
openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Sep 29 09:52:13 crc kubenswrapper[4991]: E0929 09:52:13.437553 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5956fec2-ffc3-4fba-8ba0-bfea0bc31abd-tls-key-pair podName:5956fec2-ffc3-4fba-8ba0-bfea0bc31abd nodeName:}" failed. No retries permitted until 2025-09-29 09:52:13.93753711 +0000 UTC m=+869.793465128 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/5956fec2-ffc3-4fba-8ba0-bfea0bc31abd-tls-key-pair") pod "nmstate-webhook-6d689559c5-r8qrg" (UID: "5956fec2-ffc3-4fba-8ba0-bfea0bc31abd") : secret "openshift-nmstate-webhook" not found Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.450348 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-864bb6dfb5-j2v2f"] Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.480682 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck9zl\" (UniqueName: \"kubernetes.io/projected/5128d12b-b704-4b3b-b9db-06abfa2dc3cc-kube-api-access-ck9zl\") pod \"nmstate-metrics-58fcddf996-f4kf5\" (UID: \"5128d12b-b704-4b3b-b9db-06abfa2dc3cc\") " pod="openshift-nmstate/nmstate-metrics-58fcddf996-f4kf5" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.486432 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjg2m\" (UniqueName: \"kubernetes.io/projected/5956fec2-ffc3-4fba-8ba0-bfea0bc31abd-kube-api-access-xjg2m\") pod \"nmstate-webhook-6d689559c5-r8qrg\" (UID: \"5956fec2-ffc3-4fba-8ba0-bfea0bc31abd\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-r8qrg" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.528369 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58fcddf996-f4kf5" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.539692 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbg4r\" (UniqueName: \"kubernetes.io/projected/1084227f-9c42-4e89-aa18-bd2b4fcd9c93-kube-api-access-dbg4r\") pod \"nmstate-handler-27kdc\" (UID: \"1084227f-9c42-4e89-aa18-bd2b4fcd9c93\") " pod="openshift-nmstate/nmstate-handler-27kdc" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.539785 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0c4e65d5-1c12-4fdc-942b-7408fe9f1f74-nginx-conf\") pod \"nmstate-console-plugin-864bb6dfb5-j2v2f\" (UID: \"0c4e65d5-1c12-4fdc-942b-7408fe9f1f74\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-j2v2f" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.539829 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c4e65d5-1c12-4fdc-942b-7408fe9f1f74-plugin-serving-cert\") pod \"nmstate-console-plugin-864bb6dfb5-j2v2f\" (UID: \"0c4e65d5-1c12-4fdc-942b-7408fe9f1f74\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-j2v2f" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.539852 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlx5t\" (UniqueName: \"kubernetes.io/projected/0c4e65d5-1c12-4fdc-942b-7408fe9f1f74-kube-api-access-tlx5t\") pod \"nmstate-console-plugin-864bb6dfb5-j2v2f\" (UID: \"0c4e65d5-1c12-4fdc-942b-7408fe9f1f74\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-j2v2f" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.539888 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1084227f-9c42-4e89-aa18-bd2b4fcd9c93-nmstate-lock\") pod \"nmstate-handler-27kdc\" (UID: \"1084227f-9c42-4e89-aa18-bd2b4fcd9c93\") " pod="openshift-nmstate/nmstate-handler-27kdc" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.539932 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1084227f-9c42-4e89-aa18-bd2b4fcd9c93-ovs-socket\") pod \"nmstate-handler-27kdc\" (UID: \"1084227f-9c42-4e89-aa18-bd2b4fcd9c93\") " pod="openshift-nmstate/nmstate-handler-27kdc" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.540001 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1084227f-9c42-4e89-aa18-bd2b4fcd9c93-dbus-socket\") pod \"nmstate-handler-27kdc\" (UID: \"1084227f-9c42-4e89-aa18-bd2b4fcd9c93\") " pod="openshift-nmstate/nmstate-handler-27kdc" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.540461 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1084227f-9c42-4e89-aa18-bd2b4fcd9c93-dbus-socket\") pod \"nmstate-handler-27kdc\" (UID: \"1084227f-9c42-4e89-aa18-bd2b4fcd9c93\") " pod="openshift-nmstate/nmstate-handler-27kdc" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.540524 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1084227f-9c42-4e89-aa18-bd2b4fcd9c93-ovs-socket\") pod 
\"nmstate-handler-27kdc\" (UID: \"1084227f-9c42-4e89-aa18-bd2b4fcd9c93\") " pod="openshift-nmstate/nmstate-handler-27kdc" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.540508 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1084227f-9c42-4e89-aa18-bd2b4fcd9c93-nmstate-lock\") pod \"nmstate-handler-27kdc\" (UID: \"1084227f-9c42-4e89-aa18-bd2b4fcd9c93\") " pod="openshift-nmstate/nmstate-handler-27kdc" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.565187 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbg4r\" (UniqueName: \"kubernetes.io/projected/1084227f-9c42-4e89-aa18-bd2b4fcd9c93-kube-api-access-dbg4r\") pod \"nmstate-handler-27kdc\" (UID: \"1084227f-9c42-4e89-aa18-bd2b4fcd9c93\") " pod="openshift-nmstate/nmstate-handler-27kdc" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.613228 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-27kdc" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.634174 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7fbf9bb75f-prqdm"] Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.635792 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.641765 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0c4e65d5-1c12-4fdc-942b-7408fe9f1f74-nginx-conf\") pod \"nmstate-console-plugin-864bb6dfb5-j2v2f\" (UID: \"0c4e65d5-1c12-4fdc-942b-7408fe9f1f74\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-j2v2f" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.641837 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c4e65d5-1c12-4fdc-942b-7408fe9f1f74-plugin-serving-cert\") pod \"nmstate-console-plugin-864bb6dfb5-j2v2f\" (UID: \"0c4e65d5-1c12-4fdc-942b-7408fe9f1f74\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-j2v2f" Sep 29 09:52:13 crc kubenswrapper[4991]: E0929 09:52:13.642009 4991 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.641974 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlx5t\" (UniqueName: \"kubernetes.io/projected/0c4e65d5-1c12-4fdc-942b-7408fe9f1f74-kube-api-access-tlx5t\") pod \"nmstate-console-plugin-864bb6dfb5-j2v2f\" (UID: \"0c4e65d5-1c12-4fdc-942b-7408fe9f1f74\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-j2v2f" Sep 29 09:52:13 crc kubenswrapper[4991]: E0929 09:52:13.642102 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c4e65d5-1c12-4fdc-942b-7408fe9f1f74-plugin-serving-cert podName:0c4e65d5-1c12-4fdc-942b-7408fe9f1f74 nodeName:}" failed. No retries permitted until 2025-09-29 09:52:14.14207283 +0000 UTC m=+869.998000858 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/0c4e65d5-1c12-4fdc-942b-7408fe9f1f74-plugin-serving-cert") pod "nmstate-console-plugin-864bb6dfb5-j2v2f" (UID: "0c4e65d5-1c12-4fdc-942b-7408fe9f1f74") : secret "plugin-serving-cert" not found Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.643084 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0c4e65d5-1c12-4fdc-942b-7408fe9f1f74-nginx-conf\") pod \"nmstate-console-plugin-864bb6dfb5-j2v2f\" (UID: \"0c4e65d5-1c12-4fdc-942b-7408fe9f1f74\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-j2v2f" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.649259 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7fbf9bb75f-prqdm"] Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.686726 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlx5t\" (UniqueName: \"kubernetes.io/projected/0c4e65d5-1c12-4fdc-942b-7408fe9f1f74-kube-api-access-tlx5t\") pod \"nmstate-console-plugin-864bb6dfb5-j2v2f\" (UID: \"0c4e65d5-1c12-4fdc-942b-7408fe9f1f74\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-j2v2f" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.734577 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-27kdc" event={"ID":"1084227f-9c42-4e89-aa18-bd2b4fcd9c93","Type":"ContainerStarted","Data":"42eb3f9abc5e2e002f72bbbdfe4f48b1f7c5c91332bd5e208d9a71f8b66169f3"} Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.744929 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-console-config\") pod \"console-7fbf9bb75f-prqdm\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.745006 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/223e3abd-c0d7-4392-8965-368d980373a1-console-oauth-config\") pod \"console-7fbf9bb75f-prqdm\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.745044 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9wzz\" (UniqueName: \"kubernetes.io/projected/223e3abd-c0d7-4392-8965-368d980373a1-kube-api-access-h9wzz\") pod \"console-7fbf9bb75f-prqdm\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.745093 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/223e3abd-c0d7-4392-8965-368d980373a1-console-serving-cert\") pod \"console-7fbf9bb75f-prqdm\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.745130 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-trusted-ca-bundle\") pod 
\"console-7fbf9bb75f-prqdm\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.745175 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-service-ca\") pod \"console-7fbf9bb75f-prqdm\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.745195 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-oauth-serving-cert\") pod \"console-7fbf9bb75f-prqdm\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.848045 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-trusted-ca-bundle\") pod \"console-7fbf9bb75f-prqdm\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.848114 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-service-ca\") pod \"console-7fbf9bb75f-prqdm\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.848138 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-oauth-serving-cert\") pod \"console-7fbf9bb75f-prqdm\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.848211 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-console-config\") pod \"console-7fbf9bb75f-prqdm\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.848235 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/223e3abd-c0d7-4392-8965-368d980373a1-console-oauth-config\") pod \"console-7fbf9bb75f-prqdm\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.848262 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9wzz\" (UniqueName: \"kubernetes.io/projected/223e3abd-c0d7-4392-8965-368d980373a1-kube-api-access-h9wzz\") pod \"console-7fbf9bb75f-prqdm\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.848298 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/223e3abd-c0d7-4392-8965-368d980373a1-console-serving-cert\") pod \"console-7fbf9bb75f-prqdm\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.849468 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-trusted-ca-bundle\") pod \"console-7fbf9bb75f-prqdm\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.850113 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-console-config\") pod \"console-7fbf9bb75f-prqdm\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.850142 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-oauth-serving-cert\") pod \"console-7fbf9bb75f-prqdm\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.850825 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-service-ca\") pod \"console-7fbf9bb75f-prqdm\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.851696 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/223e3abd-c0d7-4392-8965-368d980373a1-console-serving-cert\") pod \"console-7fbf9bb75f-prqdm\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.853887 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/223e3abd-c0d7-4392-8965-368d980373a1-console-oauth-config\") pod \"console-7fbf9bb75f-prqdm\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.873085 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9wzz\" (UniqueName: \"kubernetes.io/projected/223e3abd-c0d7-4392-8965-368d980373a1-kube-api-access-h9wzz\") pod \"console-7fbf9bb75f-prqdm\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.930501 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58fcddf996-f4kf5"] Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.950400 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5956fec2-ffc3-4fba-8ba0-bfea0bc31abd-tls-key-pair\") pod \"nmstate-webhook-6d689559c5-r8qrg\" (UID: \"5956fec2-ffc3-4fba-8ba0-bfea0bc31abd\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-r8qrg" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 
09:52:13.954452 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5956fec2-ffc3-4fba-8ba0-bfea0bc31abd-tls-key-pair\") pod \"nmstate-webhook-6d689559c5-r8qrg\" (UID: \"5956fec2-ffc3-4fba-8ba0-bfea0bc31abd\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-r8qrg" Sep 29 09:52:13 crc kubenswrapper[4991]: I0929 09:52:13.990898 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:14 crc kubenswrapper[4991]: I0929 09:52:14.154071 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c4e65d5-1c12-4fdc-942b-7408fe9f1f74-plugin-serving-cert\") pod \"nmstate-console-plugin-864bb6dfb5-j2v2f\" (UID: \"0c4e65d5-1c12-4fdc-942b-7408fe9f1f74\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-j2v2f" Sep 29 09:52:14 crc kubenswrapper[4991]: I0929 09:52:14.158163 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6d689559c5-r8qrg" Sep 29 09:52:14 crc kubenswrapper[4991]: I0929 09:52:14.158278 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c4e65d5-1c12-4fdc-942b-7408fe9f1f74-plugin-serving-cert\") pod \"nmstate-console-plugin-864bb6dfb5-j2v2f\" (UID: \"0c4e65d5-1c12-4fdc-942b-7408fe9f1f74\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-j2v2f" Sep 29 09:52:14 crc kubenswrapper[4991]: I0929 09:52:14.347131 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-j2v2f" Sep 29 09:52:14 crc kubenswrapper[4991]: I0929 09:52:14.547739 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7fbf9bb75f-prqdm"] Sep 29 09:52:14 crc kubenswrapper[4991]: I0929 09:52:14.607648 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6d689559c5-r8qrg"] Sep 29 09:52:14 crc kubenswrapper[4991]: W0929 09:52:14.615459 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5956fec2_ffc3_4fba_8ba0_bfea0bc31abd.slice/crio-1fb32f85ebeff6003ae5e765293c80be80dabc42f7d3a9dc27699c6cdeaf1bb0 WatchSource:0}: Error finding container 1fb32f85ebeff6003ae5e765293c80be80dabc42f7d3a9dc27699c6cdeaf1bb0: Status 404 returned error can't find the container with id 1fb32f85ebeff6003ae5e765293c80be80dabc42f7d3a9dc27699c6cdeaf1bb0 Sep 29 09:52:14 crc kubenswrapper[4991]: I0929 09:52:14.748899 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58fcddf996-f4kf5" event={"ID":"5128d12b-b704-4b3b-b9db-06abfa2dc3cc","Type":"ContainerStarted","Data":"7842015d4a22d13facd27b8a8fb71cb583fe5d658b8eb0106a76edd39647d2e6"} Sep 29 09:52:14 crc kubenswrapper[4991]: I0929 09:52:14.753178 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fbf9bb75f-prqdm" event={"ID":"223e3abd-c0d7-4392-8965-368d980373a1","Type":"ContainerStarted","Data":"335c3407f330a50e15db1766a6f4635875018f45256a0ba7f6803b4ad7717ebc"} Sep 29 09:52:14 crc kubenswrapper[4991]: I0929 09:52:14.754839 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6d689559c5-r8qrg" 
event={"ID":"5956fec2-ffc3-4fba-8ba0-bfea0bc31abd","Type":"ContainerStarted","Data":"1fb32f85ebeff6003ae5e765293c80be80dabc42f7d3a9dc27699c6cdeaf1bb0"} Sep 29 09:52:14 crc kubenswrapper[4991]: I0929 09:52:14.762702 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-864bb6dfb5-j2v2f"] Sep 29 09:52:14 crc kubenswrapper[4991]: W0929 09:52:14.773644 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c4e65d5_1c12_4fdc_942b_7408fe9f1f74.slice/crio-114d774bffdb0c2c4fa41db01eaa9ca4aa459bb79785fb299f51ea85059f1739 WatchSource:0}: Error finding container 114d774bffdb0c2c4fa41db01eaa9ca4aa459bb79785fb299f51ea85059f1739: Status 404 returned error can't find the container with id 114d774bffdb0c2c4fa41db01eaa9ca4aa459bb79785fb299f51ea85059f1739 Sep 29 09:52:15 crc kubenswrapper[4991]: I0929 09:52:15.763337 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fbf9bb75f-prqdm" event={"ID":"223e3abd-c0d7-4392-8965-368d980373a1","Type":"ContainerStarted","Data":"345a2772695546b42fe56df8d080a7b5843090537a517a8e213fa1e7a65d7508"} Sep 29 09:52:15 crc kubenswrapper[4991]: I0929 09:52:15.766667 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-j2v2f" event={"ID":"0c4e65d5-1c12-4fdc-942b-7408fe9f1f74","Type":"ContainerStarted","Data":"114d774bffdb0c2c4fa41db01eaa9ca4aa459bb79785fb299f51ea85059f1739"} Sep 29 09:52:15 crc kubenswrapper[4991]: I0929 09:52:15.782004 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7fbf9bb75f-prqdm" podStartSLOduration=2.781987853 podStartE2EDuration="2.781987853s" podCreationTimestamp="2025-09-29 09:52:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:52:15.78149234 +0000 UTC m=+871.637420368" watchObservedRunningTime="2025-09-29 09:52:15.781987853 +0000 UTC m=+871.637915881" Sep 29 09:52:16 crc kubenswrapper[4991]: I0929 09:52:16.775722 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58fcddf996-f4kf5" event={"ID":"5128d12b-b704-4b3b-b9db-06abfa2dc3cc","Type":"ContainerStarted","Data":"c33b28075ab18866a4a7be27773a0ade571e65618bfe51760b6302cc378d5461"} Sep 29 09:52:16 crc kubenswrapper[4991]: I0929 09:52:16.778092 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6d689559c5-r8qrg" event={"ID":"5956fec2-ffc3-4fba-8ba0-bfea0bc31abd","Type":"ContainerStarted","Data":"fcb82440a111bfed0e84bcb05c3934becef7c07ebeb4f22e4119ecadcd43350a"} Sep 29 09:52:16 crc kubenswrapper[4991]: I0929 09:52:16.778172 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6d689559c5-r8qrg" Sep 29 09:52:16 crc kubenswrapper[4991]: I0929 09:52:16.781892 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-27kdc" event={"ID":"1084227f-9c42-4e89-aa18-bd2b4fcd9c93","Type":"ContainerStarted","Data":"397873b7ce6da79db67adf8599e1d63c1119135ae1d3a5484fe3be01018f5157"} Sep 29 09:52:16 crc kubenswrapper[4991]: I0929 09:52:16.782085 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-27kdc" Sep 29 09:52:16 crc kubenswrapper[4991]: I0929 09:52:16.800528 4991 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-nmstate/nmstate-webhook-6d689559c5-r8qrg" podStartSLOduration=1.941837692 podStartE2EDuration="3.800505661s" podCreationTimestamp="2025-09-29 09:52:13 +0000 UTC" firstStartedPulling="2025-09-29 09:52:14.61822606 +0000 UTC m=+870.474154088" lastFinishedPulling="2025-09-29 09:52:16.476894029 +0000 UTC m=+872.332822057" observedRunningTime="2025-09-29 09:52:16.793349222 +0000 UTC m=+872.649277240" watchObservedRunningTime="2025-09-29 09:52:16.800505661 +0000 UTC m=+872.656433689" Sep 29 09:52:16 crc kubenswrapper[4991]: I0929 09:52:16.810682 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-27kdc" podStartSLOduration=1.048618881 podStartE2EDuration="3.810660929s" podCreationTimestamp="2025-09-29 09:52:13 +0000 UTC" firstStartedPulling="2025-09-29 09:52:13.673079258 +0000 UTC m=+869.529007286" lastFinishedPulling="2025-09-29 09:52:16.435121306 +0000 UTC m=+872.291049334" observedRunningTime="2025-09-29 09:52:16.80540789 +0000 UTC m=+872.661335918" watchObservedRunningTime="2025-09-29 09:52:16.810660929 +0000 UTC m=+872.666588957" Sep 29 09:52:18 crc kubenswrapper[4991]: I0929 09:52:18.795647 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-j2v2f" event={"ID":"0c4e65d5-1c12-4fdc-942b-7408fe9f1f74","Type":"ContainerStarted","Data":"c901284d6fccbbb83e077c46e7246a6966e399a4cfafbef1e833666e4fa1cc46"} Sep 29 09:52:18 crc kubenswrapper[4991]: I0929 09:52:18.813339 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-j2v2f" podStartSLOduration=2.521908057 podStartE2EDuration="5.813315199s" podCreationTimestamp="2025-09-29 09:52:13 +0000 UTC" firstStartedPulling="2025-09-29 09:52:14.775930283 +0000 UTC m=+870.631858321" lastFinishedPulling="2025-09-29 09:52:18.067337435 +0000 UTC m=+873.923265463" observedRunningTime="2025-09-29 09:52:18.80803997 +0000 UTC m=+874.663968008" watchObservedRunningTime="2025-09-29 09:52:18.813315199 +0000 UTC m=+874.669243227" Sep 29 09:52:21 crc kubenswrapper[4991]: I0929 09:52:21.828065 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58fcddf996-f4kf5" event={"ID":"5128d12b-b704-4b3b-b9db-06abfa2dc3cc","Type":"ContainerStarted","Data":"7f9030f30e15a1433c05689d23d04dda1577d7c3eade324232ca7a07e1af28e9"} Sep 29 09:52:21 crc kubenswrapper[4991]: I0929 09:52:21.853540 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58fcddf996-f4kf5" podStartSLOduration=2.106349676 podStartE2EDuration="8.853524659s" podCreationTimestamp="2025-09-29 09:52:13 +0000 UTC" firstStartedPulling="2025-09-29 09:52:13.931459499 +0000 UTC m=+869.787387527" lastFinishedPulling="2025-09-29 09:52:20.678634482 +0000 UTC m=+876.534562510" observedRunningTime="2025-09-29 09:52:21.848764803 +0000 UTC m=+877.704692831" watchObservedRunningTime="2025-09-29 09:52:21.853524659 +0000 UTC m=+877.709452687" Sep 29 09:52:23 crc kubenswrapper[4991]: I0929 09:52:23.657512 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-27kdc" Sep 29 09:52:23 crc kubenswrapper[4991]: I0929 09:52:23.991107 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7fbf9bb75f-prqdm"
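
The pod_startup_latency_tracker entries above encode a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is consistent with that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal, self-contained Go sketch (not kubelet code, just arithmetic over the logged timestamps) that reproduces the nmstate-webhook pod's figures:

    package main

    import (
    	"fmt"
    	"time"
    )

    // mustParse uses the layout of Go's default time.Time formatting,
    // which is what these journal entries print.
    func mustParse(s string) time.Time {
    	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2025-09-29 09:52:13 +0000 UTC")
    	firstPull := mustParse("2025-09-29 09:52:14.61822606 +0000 UTC")
    	lastPull := mustParse("2025-09-29 09:52:16.476894029 +0000 UTC")
    	observed := mustParse("2025-09-29 09:52:16.800505661 +0000 UTC") // watchObservedRunningTime

    	e2e := observed.Sub(created)         // podStartE2EDuration
    	slo := e2e - lastPull.Sub(firstPull) // E2E minus image-pull time
    	fmt.Println(e2e, slo)                // 3.800505661s 1.941837692s
    }

Both printed values match podStartE2EDuration and podStartSLOduration in the webhook entry above.
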
Sep 29 09:52:23 crc kubenswrapper[4991]: I0929 09:52:23.991170 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:23 crc kubenswrapper[4991]: I0929 09:52:23.996463 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:24 crc kubenswrapper[4991]: I0929 09:52:24.851964 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:52:24 crc kubenswrapper[4991]: I0929 09:52:24.903671 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-74fb6f4f4b-xpwzn"] Sep 29 09:52:34 crc kubenswrapper[4991]: I0929 09:52:34.164327 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6d689559c5-r8qrg" Sep 29 09:52:50 crc kubenswrapper[4991]: I0929 09:52:50.019678 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-74fb6f4f4b-xpwzn" podUID="c43f31f9-b468-4b09-ad32-ae144ae4cd8e" containerName="console" containerID="cri-o://a9b1b6201d74d1f7bef6ee718a68321ef101bd3fe4cb394b63c6b5c01e627a2c" gracePeriod=15 Sep 29 09:52:51 crc kubenswrapper[4991]: I0929 09:52:51.060452 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-74fb6f4f4b-xpwzn_c43f31f9-b468-4b09-ad32-ae144ae4cd8e/console/0.log" Sep 29 09:52:51 crc kubenswrapper[4991]: I0929 09:52:51.060663 4991 generic.go:334] "Generic (PLEG): container finished" podID="c43f31f9-b468-4b09-ad32-ae144ae4cd8e" containerID="a9b1b6201d74d1f7bef6ee718a68321ef101bd3fe4cb394b63c6b5c01e627a2c" exitCode=2 Sep 29 09:52:51 crc kubenswrapper[4991]: I0929 09:52:51.060690 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74fb6f4f4b-xpwzn" event={"ID":"c43f31f9-b468-4b09-ad32-ae144ae4cd8e","Type":"ContainerDied","Data":"a9b1b6201d74d1f7bef6ee718a68321ef101bd3fe4cb394b63c6b5c01e627a2c"} Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.831884 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-74fb6f4f4b-xpwzn_c43f31f9-b468-4b09-ad32-ae144ae4cd8e/console/0.log" Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.832351 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.940432 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck"] Sep 29 09:52:52 crc kubenswrapper[4991]: E0929 09:52:52.940935 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c43f31f9-b468-4b09-ad32-ae144ae4cd8e" containerName="console" Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.940975 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c43f31f9-b468-4b09-ad32-ae144ae4cd8e" containerName="console" Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.941179 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="c43f31f9-b468-4b09-ad32-ae144ae4cd8e" containerName="console"
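
A few entries above, the console container is killed with gracePeriod=15, and PLEG later reports ContainerDied with exitCode=2. The mechanism behind such a stop is: signal the process, wait up to the grace period, then force-kill. A generic, Unix-only Go sketch of that pattern (an illustration under that assumption, not CRI-O's implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"syscall"
    	"time"
    )

    // stopWithGrace mirrors "Killing container with a grace period":
    // send SIGTERM, wait up to the grace period, then SIGKILL.
    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
    	_ = cmd.Process.Signal(syscall.SIGTERM)
    	done := make(chan error, 1)
    	go func() { done <- cmd.Wait() }()
    	select {
    	case <-done:
    		// Exited on its own; the runtime records the exit code
    		// (2 for the console container above).
    	case <-time.After(grace):
    		_ = cmd.Process.Kill() // grace period elapsed
    		<-done                 // reap the killed process
    	}
    }

    func main() {
    	cmd := exec.Command("sleep", "60") // stand-in for a container's main process
    	if err := cmd.Start(); err != nil {
    		panic(err)
    	}
    	stopWithGrace(cmd, 15*time.Second)
    	fmt.Println("stopped:", cmd.ProcessState)
    }

In the real flow, the exit code the runtime observes is what PLEG surfaces as the ContainerDied event seen above.
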
Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.945296 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck" Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.947602 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck"] Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.948000 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.971618 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jdd9\" (UniqueName: \"kubernetes.io/projected/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-kube-api-access-6jdd9\") pod \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.971668 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-console-config\") pod \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.971711 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-trusted-ca-bundle\") pod \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.971745 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-console-oauth-config\") pod \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.971825 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-service-ca\") pod \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.971906 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-oauth-serving-cert\") pod \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.971983 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-console-serving-cert\") pod \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\" (UID: \"c43f31f9-b468-4b09-ad32-ae144ae4cd8e\") " Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.972512 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c43f31f9-b468-4b09-ad32-ae144ae4cd8e" (UID: "c43f31f9-b468-4b09-ad32-ae144ae4cd8e"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.972544 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-service-ca" (OuterVolumeSpecName: "service-ca") pod "c43f31f9-b468-4b09-ad32-ae144ae4cd8e" (UID: "c43f31f9-b468-4b09-ad32-ae144ae4cd8e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.972556 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c43f31f9-b468-4b09-ad32-ae144ae4cd8e" (UID: "c43f31f9-b468-4b09-ad32-ae144ae4cd8e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.972617 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-console-config" (OuterVolumeSpecName: "console-config") pod "c43f31f9-b468-4b09-ad32-ae144ae4cd8e" (UID: "c43f31f9-b468-4b09-ad32-ae144ae4cd8e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.978580 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c43f31f9-b468-4b09-ad32-ae144ae4cd8e" (UID: "c43f31f9-b468-4b09-ad32-ae144ae4cd8e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.981214 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-kube-api-access-6jdd9" (OuterVolumeSpecName: "kube-api-access-6jdd9") pod "c43f31f9-b468-4b09-ad32-ae144ae4cd8e" (UID: "c43f31f9-b468-4b09-ad32-ae144ae4cd8e"). InnerVolumeSpecName "kube-api-access-6jdd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:52:52 crc kubenswrapper[4991]: I0929 09:52:52.983733 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c43f31f9-b468-4b09-ad32-ae144ae4cd8e" (UID: "c43f31f9-b468-4b09-ad32-ae144ae4cd8e"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.073754 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8faa5f43-6a26-4213-a873-d3e7c0794773-bundle\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck\" (UID: \"8faa5f43-6a26-4213-a873-d3e7c0794773\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck" Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.073934 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8faa5f43-6a26-4213-a873-d3e7c0794773-util\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck\" (UID: \"8faa5f43-6a26-4213-a873-d3e7c0794773\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck" Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.074011 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b6pl\" (UniqueName: \"kubernetes.io/projected/8faa5f43-6a26-4213-a873-d3e7c0794773-kube-api-access-2b6pl\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck\" (UID: \"8faa5f43-6a26-4213-a873-d3e7c0794773\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck" Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.074186 4991 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.074209 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jdd9\" (UniqueName: \"kubernetes.io/projected/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-kube-api-access-6jdd9\") on node \"crc\" DevicePath \"\"" Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.074228 4991 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-console-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.074243 4991 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.074258 4991 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.074273 4991 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-service-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.074286 4991 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c43f31f9-b468-4b09-ad32-ae144ae4cd8e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.079397 4991 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-74fb6f4f4b-xpwzn_c43f31f9-b468-4b09-ad32-ae144ae4cd8e/console/0.log" Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.079465 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74fb6f4f4b-xpwzn" event={"ID":"c43f31f9-b468-4b09-ad32-ae144ae4cd8e","Type":"ContainerDied","Data":"06ca328f79243b453ddb13a769315778d8975696c452dcf4776cf450af05f26b"} Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.079505 4991 scope.go:117] "RemoveContainer" containerID="a9b1b6201d74d1f7bef6ee718a68321ef101bd3fe4cb394b63c6b5c01e627a2c" Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.079574 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-74fb6f4f4b-xpwzn" Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.114019 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-74fb6f4f4b-xpwzn"] Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.122671 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-74fb6f4f4b-xpwzn"] Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.176509 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8faa5f43-6a26-4213-a873-d3e7c0794773-util\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck\" (UID: \"8faa5f43-6a26-4213-a873-d3e7c0794773\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck" Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.176563 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b6pl\" (UniqueName: \"kubernetes.io/projected/8faa5f43-6a26-4213-a873-d3e7c0794773-kube-api-access-2b6pl\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck\" (UID: \"8faa5f43-6a26-4213-a873-d3e7c0794773\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck" Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.176655 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8faa5f43-6a26-4213-a873-d3e7c0794773-bundle\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck\" (UID: \"8faa5f43-6a26-4213-a873-d3e7c0794773\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck" Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.178723 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8faa5f43-6a26-4213-a873-d3e7c0794773-bundle\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck\" (UID: \"8faa5f43-6a26-4213-a873-d3e7c0794773\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck" Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.178748 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8faa5f43-6a26-4213-a873-d3e7c0794773-util\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck\" (UID: \"8faa5f43-6a26-4213-a873-d3e7c0794773\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck" Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.205247 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-2b6pl\" (UniqueName: \"kubernetes.io/projected/8faa5f43-6a26-4213-a873-d3e7c0794773-kube-api-access-2b6pl\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck\" (UID: \"8faa5f43-6a26-4213-a873-d3e7c0794773\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck" Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.263065 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck" Sep 29 09:52:53 crc kubenswrapper[4991]: I0929 09:52:53.671879 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck"] Sep 29 09:52:54 crc kubenswrapper[4991]: I0929 09:52:54.091592 4991 generic.go:334] "Generic (PLEG): container finished" podID="8faa5f43-6a26-4213-a873-d3e7c0794773" containerID="cd4d372902fab62da666ed2dd79979d4c4590e2743be51ab86b25c9f68297284" exitCode=0 Sep 29 09:52:54 crc kubenswrapper[4991]: I0929 09:52:54.091767 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck" event={"ID":"8faa5f43-6a26-4213-a873-d3e7c0794773","Type":"ContainerDied","Data":"cd4d372902fab62da666ed2dd79979d4c4590e2743be51ab86b25c9f68297284"} Sep 29 09:52:54 crc kubenswrapper[4991]: I0929 09:52:54.092018 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck" event={"ID":"8faa5f43-6a26-4213-a873-d3e7c0794773","Type":"ContainerStarted","Data":"1a990b7ade6d4094dd47ddd5a040e7c17692da37333a05574883d8eff0167617"} Sep 29 09:52:54 crc kubenswrapper[4991]: I0929 09:52:54.094230 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 29 09:52:54 crc kubenswrapper[4991]: I0929 09:52:54.936843 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c43f31f9-b468-4b09-ad32-ae144ae4cd8e" path="/var/lib/kubelet/pods/c43f31f9-b468-4b09-ad32-ae144ae4cd8e/volumes" Sep 29 09:52:56 crc kubenswrapper[4991]: I0929 09:52:56.111275 4991 generic.go:334] "Generic (PLEG): container finished" podID="8faa5f43-6a26-4213-a873-d3e7c0794773" containerID="a6197146f6ba74365bd0e90da612087d25d39e6e91e1e795cf750f581e9f417a" exitCode=0 Sep 29 09:52:56 crc kubenswrapper[4991]: I0929 09:52:56.111398 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck" event={"ID":"8faa5f43-6a26-4213-a873-d3e7c0794773","Type":"ContainerDied","Data":"a6197146f6ba74365bd0e90da612087d25d39e6e91e1e795cf750f581e9f417a"} Sep 29 09:52:57 crc kubenswrapper[4991]: I0929 09:52:57.121397 4991 generic.go:334] "Generic (PLEG): container finished" podID="8faa5f43-6a26-4213-a873-d3e7c0794773" containerID="5686e1d440a24c54883c13921d859bb49aa9ec8323444c96b1b1c3cc0464ac02" exitCode=0 Sep 29 09:52:57 crc kubenswrapper[4991]: I0929 09:52:57.121489 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck" event={"ID":"8faa5f43-6a26-4213-a873-d3e7c0794773","Type":"ContainerDied","Data":"5686e1d440a24c54883c13921d859bb49aa9ec8323444c96b1b1c3cc0464ac02"} Sep 29 09:52:58 crc kubenswrapper[4991]: I0929 09:52:58.428734 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck" Sep 29 09:52:58 crc kubenswrapper[4991]: I0929 09:52:58.579179 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8faa5f43-6a26-4213-a873-d3e7c0794773-util\") pod \"8faa5f43-6a26-4213-a873-d3e7c0794773\" (UID: \"8faa5f43-6a26-4213-a873-d3e7c0794773\") " Sep 29 09:52:58 crc kubenswrapper[4991]: I0929 09:52:58.579293 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8faa5f43-6a26-4213-a873-d3e7c0794773-bundle\") pod \"8faa5f43-6a26-4213-a873-d3e7c0794773\" (UID: \"8faa5f43-6a26-4213-a873-d3e7c0794773\") " Sep 29 09:52:58 crc kubenswrapper[4991]: I0929 09:52:58.579362 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b6pl\" (UniqueName: \"kubernetes.io/projected/8faa5f43-6a26-4213-a873-d3e7c0794773-kube-api-access-2b6pl\") pod \"8faa5f43-6a26-4213-a873-d3e7c0794773\" (UID: \"8faa5f43-6a26-4213-a873-d3e7c0794773\") " Sep 29 09:52:58 crc kubenswrapper[4991]: I0929 09:52:58.581185 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8faa5f43-6a26-4213-a873-d3e7c0794773-bundle" (OuterVolumeSpecName: "bundle") pod "8faa5f43-6a26-4213-a873-d3e7c0794773" (UID: "8faa5f43-6a26-4213-a873-d3e7c0794773"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:52:58 crc kubenswrapper[4991]: I0929 09:52:58.586204 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8faa5f43-6a26-4213-a873-d3e7c0794773-kube-api-access-2b6pl" (OuterVolumeSpecName: "kube-api-access-2b6pl") pod "8faa5f43-6a26-4213-a873-d3e7c0794773" (UID: "8faa5f43-6a26-4213-a873-d3e7c0794773"). InnerVolumeSpecName "kube-api-access-2b6pl". PluginName "kubernetes.io/projected", VolumeGidValue ""
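
The unmount sequence here repeats one pattern per volume: "UnmountVolume started" from the reconciler, "UnmountVolume.TearDown succeeded" from the operation generator, then "Volume detached" once the actual state catches up. A toy Go model of that desired-versus-actual reconciliation loop (all names invented for illustration; this is not the kubelet volumemanager API):

    package main

    import "fmt"

    type volume struct{ name, podUID string }

    // reconcile unmounts every volume that is still mounted but no longer
    // desired, tolerating failures so a later sync can retry them.
    func reconcile(desired map[volume]bool, mounted []volume, tearDown func(volume) error) {
    	for _, v := range mounted {
    		if desired[v] {
    			continue // pod still needs it; leave mounted
    		}
    		fmt.Printf("UnmountVolume started for volume %q pod %q\n", v.name, v.podUID)
    		if err := tearDown(v); err != nil {
    			fmt.Println("teardown failed, will retry next sync:", err)
    			continue
    		}
    		fmt.Printf("Volume detached for volume %q\n", v.name)
    	}
    }

    func main() {
    	// The bundle pod finished, so none of its volumes remain desired.
    	mounted := []volume{
    		{"bundle", "8faa5f43-6a26-4213-a873-d3e7c0794773"},
    		{"util", "8faa5f43-6a26-4213-a873-d3e7c0794773"},
    	}
    	reconcile(map[volume]bool{}, mounted, func(volume) error { return nil })
    }
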
Sep 29 09:52:58 crc kubenswrapper[4991]: I0929 09:52:58.610192 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8faa5f43-6a26-4213-a873-d3e7c0794773-util" (OuterVolumeSpecName: "util") pod "8faa5f43-6a26-4213-a873-d3e7c0794773" (UID: "8faa5f43-6a26-4213-a873-d3e7c0794773"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:52:58 crc kubenswrapper[4991]: I0929 09:52:58.680826 4991 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8faa5f43-6a26-4213-a873-d3e7c0794773-util\") on node \"crc\" DevicePath \"\"" Sep 29 09:52:58 crc kubenswrapper[4991]: I0929 09:52:58.680882 4991 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8faa5f43-6a26-4213-a873-d3e7c0794773-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:52:58 crc kubenswrapper[4991]: I0929 09:52:58.680896 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b6pl\" (UniqueName: \"kubernetes.io/projected/8faa5f43-6a26-4213-a873-d3e7c0794773-kube-api-access-2b6pl\") on node \"crc\" DevicePath \"\"" Sep 29 09:52:59 crc kubenswrapper[4991]: I0929 09:52:59.136477 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck" event={"ID":"8faa5f43-6a26-4213-a873-d3e7c0794773","Type":"ContainerDied","Data":"1a990b7ade6d4094dd47ddd5a040e7c17692da37333a05574883d8eff0167617"} Sep 29 09:52:59 crc kubenswrapper[4991]: I0929 09:52:59.136526 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck" Sep 29 09:52:59 crc kubenswrapper[4991]: I0929 09:52:59.136527 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a990b7ade6d4094dd47ddd5a040e7c17692da37333a05574883d8eff0167617" Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.623638 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6d7567497f-s6bfc"] Sep 29 09:53:19 crc kubenswrapper[4991]: E0929 09:53:19.624382 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8faa5f43-6a26-4213-a873-d3e7c0794773" containerName="extract" Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.624396 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="8faa5f43-6a26-4213-a873-d3e7c0794773" containerName="extract" Sep 29 09:53:19 crc kubenswrapper[4991]: E0929 09:53:19.624409 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8faa5f43-6a26-4213-a873-d3e7c0794773" containerName="util" Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.624417 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="8faa5f43-6a26-4213-a873-d3e7c0794773" containerName="util" Sep 29 09:53:19 crc kubenswrapper[4991]: E0929 09:53:19.624432 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8faa5f43-6a26-4213-a873-d3e7c0794773" containerName="pull" Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.624438 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="8faa5f43-6a26-4213-a873-d3e7c0794773" containerName="pull" Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.624571 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="8faa5f43-6a26-4213-a873-d3e7c0794773" containerName="extract" Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.625105 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6d7567497f-s6bfc" Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.628164 4991 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.628177 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.628190 4991 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.637405 4991 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-p8x7v" Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.642565 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.647611 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6d7567497f-s6bfc"] Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.742448 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6618cd07-587d-4f77-b392-080e4f6e5806-apiservice-cert\") pod \"metallb-operator-controller-manager-6d7567497f-s6bfc\" (UID: \"6618cd07-587d-4f77-b392-080e4f6e5806\") " pod="metallb-system/metallb-operator-controller-manager-6d7567497f-s6bfc" Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.742710 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6618cd07-587d-4f77-b392-080e4f6e5806-webhook-cert\") pod \"metallb-operator-controller-manager-6d7567497f-s6bfc\" (UID: \"6618cd07-587d-4f77-b392-080e4f6e5806\") " pod="metallb-system/metallb-operator-controller-manager-6d7567497f-s6bfc" Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.742795 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjj85\" (UniqueName: \"kubernetes.io/projected/6618cd07-587d-4f77-b392-080e4f6e5806-kube-api-access-mjj85\") pod \"metallb-operator-controller-manager-6d7567497f-s6bfc\" (UID: \"6618cd07-587d-4f77-b392-080e4f6e5806\") " pod="metallb-system/metallb-operator-controller-manager-6d7567497f-s6bfc" Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.844612 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6618cd07-587d-4f77-b392-080e4f6e5806-apiservice-cert\") pod \"metallb-operator-controller-manager-6d7567497f-s6bfc\" (UID: \"6618cd07-587d-4f77-b392-080e4f6e5806\") " pod="metallb-system/metallb-operator-controller-manager-6d7567497f-s6bfc" Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.844669 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6618cd07-587d-4f77-b392-080e4f6e5806-webhook-cert\") pod \"metallb-operator-controller-manager-6d7567497f-s6bfc\" (UID: \"6618cd07-587d-4f77-b392-080e4f6e5806\") " pod="metallb-system/metallb-operator-controller-manager-6d7567497f-s6bfc" Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.844688 4991 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjj85\" (UniqueName: \"kubernetes.io/projected/6618cd07-587d-4f77-b392-080e4f6e5806-kube-api-access-mjj85\") pod \"metallb-operator-controller-manager-6d7567497f-s6bfc\" (UID: \"6618cd07-587d-4f77-b392-080e4f6e5806\") " pod="metallb-system/metallb-operator-controller-manager-6d7567497f-s6bfc" Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.853681 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6618cd07-587d-4f77-b392-080e4f6e5806-apiservice-cert\") pod \"metallb-operator-controller-manager-6d7567497f-s6bfc\" (UID: \"6618cd07-587d-4f77-b392-080e4f6e5806\") " pod="metallb-system/metallb-operator-controller-manager-6d7567497f-s6bfc" Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.864603 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjj85\" (UniqueName: \"kubernetes.io/projected/6618cd07-587d-4f77-b392-080e4f6e5806-kube-api-access-mjj85\") pod \"metallb-operator-controller-manager-6d7567497f-s6bfc\" (UID: \"6618cd07-587d-4f77-b392-080e4f6e5806\") " pod="metallb-system/metallb-operator-controller-manager-6d7567497f-s6bfc" Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.874800 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6618cd07-587d-4f77-b392-080e4f6e5806-webhook-cert\") pod \"metallb-operator-controller-manager-6d7567497f-s6bfc\" (UID: \"6618cd07-587d-4f77-b392-080e4f6e5806\") " pod="metallb-system/metallb-operator-controller-manager-6d7567497f-s6bfc" Sep 29 09:53:19 crc kubenswrapper[4991]: I0929 09:53:19.942298 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6d7567497f-s6bfc" Sep 29 09:53:20 crc kubenswrapper[4991]: I0929 09:53:20.011653 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-79c95c7875-765hk"] Sep 29 09:53:20 crc kubenswrapper[4991]: I0929 09:53:20.012561 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79c95c7875-765hk" Sep 29 09:53:20 crc kubenswrapper[4991]: I0929 09:53:20.014259 4991 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-lfvzt" Sep 29 09:53:20 crc kubenswrapper[4991]: I0929 09:53:20.014699 4991 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Sep 29 09:53:20 crc kubenswrapper[4991]: I0929 09:53:20.015644 4991 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Sep 29 09:53:20 crc kubenswrapper[4991]: I0929 09:53:20.036663 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79c95c7875-765hk"] Sep 29 09:53:20 crc kubenswrapper[4991]: I0929 09:53:20.152431 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwdmk\" (UniqueName: \"kubernetes.io/projected/e09ee4b2-97d8-4e24-a05d-b4f4bcf9713c-kube-api-access-cwdmk\") pod \"metallb-operator-webhook-server-79c95c7875-765hk\" (UID: \"e09ee4b2-97d8-4e24-a05d-b4f4bcf9713c\") " pod="metallb-system/metallb-operator-webhook-server-79c95c7875-765hk" Sep 29 09:53:20 crc kubenswrapper[4991]: I0929 09:53:20.152682 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e09ee4b2-97d8-4e24-a05d-b4f4bcf9713c-apiservice-cert\") pod \"metallb-operator-webhook-server-79c95c7875-765hk\" (UID: \"e09ee4b2-97d8-4e24-a05d-b4f4bcf9713c\") " pod="metallb-system/metallb-operator-webhook-server-79c95c7875-765hk" Sep 29 09:53:20 crc kubenswrapper[4991]: I0929 09:53:20.152726 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e09ee4b2-97d8-4e24-a05d-b4f4bcf9713c-webhook-cert\") pod \"metallb-operator-webhook-server-79c95c7875-765hk\" (UID: \"e09ee4b2-97d8-4e24-a05d-b4f4bcf9713c\") " pod="metallb-system/metallb-operator-webhook-server-79c95c7875-765hk" Sep 29 09:53:20 crc kubenswrapper[4991]: I0929 09:53:20.254227 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwdmk\" (UniqueName: \"kubernetes.io/projected/e09ee4b2-97d8-4e24-a05d-b4f4bcf9713c-kube-api-access-cwdmk\") pod \"metallb-operator-webhook-server-79c95c7875-765hk\" (UID: \"e09ee4b2-97d8-4e24-a05d-b4f4bcf9713c\") " pod="metallb-system/metallb-operator-webhook-server-79c95c7875-765hk" Sep 29 09:53:20 crc kubenswrapper[4991]: I0929 09:53:20.254299 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e09ee4b2-97d8-4e24-a05d-b4f4bcf9713c-apiservice-cert\") pod \"metallb-operator-webhook-server-79c95c7875-765hk\" (UID: \"e09ee4b2-97d8-4e24-a05d-b4f4bcf9713c\") " pod="metallb-system/metallb-operator-webhook-server-79c95c7875-765hk" Sep 29 09:53:20 crc kubenswrapper[4991]: I0929 09:53:20.254363 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e09ee4b2-97d8-4e24-a05d-b4f4bcf9713c-webhook-cert\") pod \"metallb-operator-webhook-server-79c95c7875-765hk\" (UID: \"e09ee4b2-97d8-4e24-a05d-b4f4bcf9713c\") " pod="metallb-system/metallb-operator-webhook-server-79c95c7875-765hk" Sep 29 09:53:20 crc kubenswrapper[4991]: I0929 
09:53:20.277536 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e09ee4b2-97d8-4e24-a05d-b4f4bcf9713c-webhook-cert\") pod \"metallb-operator-webhook-server-79c95c7875-765hk\" (UID: \"e09ee4b2-97d8-4e24-a05d-b4f4bcf9713c\") " pod="metallb-system/metallb-operator-webhook-server-79c95c7875-765hk" Sep 29 09:53:20 crc kubenswrapper[4991]: I0929 09:53:20.277545 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e09ee4b2-97d8-4e24-a05d-b4f4bcf9713c-apiservice-cert\") pod \"metallb-operator-webhook-server-79c95c7875-765hk\" (UID: \"e09ee4b2-97d8-4e24-a05d-b4f4bcf9713c\") " pod="metallb-system/metallb-operator-webhook-server-79c95c7875-765hk" Sep 29 09:53:20 crc kubenswrapper[4991]: I0929 09:53:20.283612 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwdmk\" (UniqueName: \"kubernetes.io/projected/e09ee4b2-97d8-4e24-a05d-b4f4bcf9713c-kube-api-access-cwdmk\") pod \"metallb-operator-webhook-server-79c95c7875-765hk\" (UID: \"e09ee4b2-97d8-4e24-a05d-b4f4bcf9713c\") " pod="metallb-system/metallb-operator-webhook-server-79c95c7875-765hk" Sep 29 09:53:20 crc kubenswrapper[4991]: I0929 09:53:20.330279 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79c95c7875-765hk" Sep 29 09:53:20 crc kubenswrapper[4991]: I0929 09:53:20.486685 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6d7567497f-s6bfc"] Sep 29 09:53:20 crc kubenswrapper[4991]: I0929 09:53:20.829048 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79c95c7875-765hk"] Sep 29 09:53:21 crc kubenswrapper[4991]: I0929 09:53:21.297509 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6d7567497f-s6bfc" event={"ID":"6618cd07-587d-4f77-b392-080e4f6e5806","Type":"ContainerStarted","Data":"6caacaba915d823fe478107c6ef666931ac472fb3c7d2428f965182581550f70"} Sep 29 09:53:21 crc kubenswrapper[4991]: I0929 09:53:21.299234 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79c95c7875-765hk" event={"ID":"e09ee4b2-97d8-4e24-a05d-b4f4bcf9713c","Type":"ContainerStarted","Data":"b44670e622b422bc20789b9b941ebd311648e4caa7c74a09ce1d67f76c68ed78"} Sep 29 09:53:29 crc kubenswrapper[4991]: I0929 09:53:29.352591 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79c95c7875-765hk" event={"ID":"e09ee4b2-97d8-4e24-a05d-b4f4bcf9713c","Type":"ContainerStarted","Data":"dd1f25c3b6cdaf46fc1e03ee904ff031224f95afa70d61c59a4f84430acb104c"} Sep 29 09:53:29 crc kubenswrapper[4991]: I0929 09:53:29.353217 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-79c95c7875-765hk" Sep 29 09:53:29 crc kubenswrapper[4991]: I0929 09:53:29.354674 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6d7567497f-s6bfc" event={"ID":"6618cd07-587d-4f77-b392-080e4f6e5806","Type":"ContainerStarted","Data":"b64130a40a9a26b0e402f4f2687bb3dc0bb35569f230838b612bd871f3ca7e89"} Sep 29 09:53:29 crc kubenswrapper[4991]: I0929 09:53:29.354834 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-6d7567497f-s6bfc" Sep 29 09:53:29 crc kubenswrapper[4991]: I0929 09:53:29.373612 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-79c95c7875-765hk" podStartSLOduration=2.7721247 podStartE2EDuration="10.373587605s" podCreationTimestamp="2025-09-29 09:53:19 +0000 UTC" firstStartedPulling="2025-09-29 09:53:20.83499018 +0000 UTC m=+936.690918208" lastFinishedPulling="2025-09-29 09:53:28.436453085 +0000 UTC m=+944.292381113" observedRunningTime="2025-09-29 09:53:29.370646407 +0000 UTC m=+945.226574455" watchObservedRunningTime="2025-09-29 09:53:29.373587605 +0000 UTC m=+945.229515653" Sep 29 09:53:29 crc kubenswrapper[4991]: I0929 09:53:29.397393 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6d7567497f-s6bfc" podStartSLOduration=2.472052979 podStartE2EDuration="10.397363983s" podCreationTimestamp="2025-09-29 09:53:19 +0000 UTC" firstStartedPulling="2025-09-29 09:53:20.497518281 +0000 UTC m=+936.353446309" lastFinishedPulling="2025-09-29 09:53:28.422829285 +0000 UTC m=+944.278757313" observedRunningTime="2025-09-29 09:53:29.392868214 +0000 UTC m=+945.248796262" watchObservedRunningTime="2025-09-29 09:53:29.397363983 +0000 UTC m=+945.253292011" Sep 29 09:53:40 crc kubenswrapper[4991]: I0929 09:53:40.334203 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-79c95c7875-765hk" Sep 29 09:53:59 crc kubenswrapper[4991]: I0929 09:53:59.945175 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6d7567497f-s6bfc" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.733306 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-qb2cb"] Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.736218 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.739327 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.739364 4991 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.740068 4991 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-9ftrc" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.743350 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-5478bdb765-c82r7"] Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.744367 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-c82r7" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.747252 4991 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.758862 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-5478bdb765-c82r7"] Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.835550 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-srk99"] Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.836989 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-srk99" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.844455 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5d688f5ffc-dhq9n"] Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.845088 4991 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.845162 4991 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.845398 4991 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-8vlxw" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.845461 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.846530 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5d688f5ffc-dhq9n" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.847842 4991 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.855154 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/01dcafc4-bfc8-44ea-af23-a5eedf41c331-frr-sockets\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.855225 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/01dcafc4-bfc8-44ea-af23-a5eedf41c331-frr-conf\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.855350 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv67d\" (UniqueName: \"kubernetes.io/projected/01dcafc4-bfc8-44ea-af23-a5eedf41c331-kube-api-access-nv67d\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.855451 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/01dcafc4-bfc8-44ea-af23-a5eedf41c331-metrics\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.855493 
4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rhgn\" (UniqueName: \"kubernetes.io/projected/adccb4f9-5bc4-48fb-830a-32c6c889a9a7-kube-api-access-6rhgn\") pod \"frr-k8s-webhook-server-5478bdb765-c82r7\" (UID: \"adccb4f9-5bc4-48fb-830a-32c6c889a9a7\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-c82r7" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.855523 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/01dcafc4-bfc8-44ea-af23-a5eedf41c331-metrics-certs\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.855577 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adccb4f9-5bc4-48fb-830a-32c6c889a9a7-cert\") pod \"frr-k8s-webhook-server-5478bdb765-c82r7\" (UID: \"adccb4f9-5bc4-48fb-830a-32c6c889a9a7\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-c82r7" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.855804 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/01dcafc4-bfc8-44ea-af23-a5eedf41c331-reloader\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.855884 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/01dcafc4-bfc8-44ea-af23-a5eedf41c331-frr-startup\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.856105 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5d688f5ffc-dhq9n"] Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.957920 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b26z7\" (UniqueName: \"kubernetes.io/projected/c23ddd1f-4ebb-4f74-bb77-b6882ea82681-kube-api-access-b26z7\") pod \"speaker-srk99\" (UID: \"c23ddd1f-4ebb-4f74-bb77-b6882ea82681\") " pod="metallb-system/speaker-srk99" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.957999 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/01dcafc4-bfc8-44ea-af23-a5eedf41c331-frr-startup\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.958072 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/01dcafc4-bfc8-44ea-af23-a5eedf41c331-frr-sockets\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.958123 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/01dcafc4-bfc8-44ea-af23-a5eedf41c331-frr-conf\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " 
pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.958148 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/595fa3a7-5bff-454b-85d1-458f67728b2c-cert\") pod \"controller-5d688f5ffc-dhq9n\" (UID: \"595fa3a7-5bff-454b-85d1-458f67728b2c\") " pod="metallb-system/controller-5d688f5ffc-dhq9n" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.958183 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv67d\" (UniqueName: \"kubernetes.io/projected/01dcafc4-bfc8-44ea-af23-a5eedf41c331-kube-api-access-nv67d\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.958212 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/595fa3a7-5bff-454b-85d1-458f67728b2c-metrics-certs\") pod \"controller-5d688f5ffc-dhq9n\" (UID: \"595fa3a7-5bff-454b-85d1-458f67728b2c\") " pod="metallb-system/controller-5d688f5ffc-dhq9n" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.958234 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c23ddd1f-4ebb-4f74-bb77-b6882ea82681-metallb-excludel2\") pod \"speaker-srk99\" (UID: \"c23ddd1f-4ebb-4f74-bb77-b6882ea82681\") " pod="metallb-system/speaker-srk99" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.958269 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/01dcafc4-bfc8-44ea-af23-a5eedf41c331-metrics\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.958298 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rhgn\" (UniqueName: \"kubernetes.io/projected/adccb4f9-5bc4-48fb-830a-32c6c889a9a7-kube-api-access-6rhgn\") pod \"frr-k8s-webhook-server-5478bdb765-c82r7\" (UID: \"adccb4f9-5bc4-48fb-830a-32c6c889a9a7\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-c82r7" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.958322 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/01dcafc4-bfc8-44ea-af23-a5eedf41c331-metrics-certs\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.958354 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adccb4f9-5bc4-48fb-830a-32c6c889a9a7-cert\") pod \"frr-k8s-webhook-server-5478bdb765-c82r7\" (UID: \"adccb4f9-5bc4-48fb-830a-32c6c889a9a7\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-c82r7" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.958398 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c23ddd1f-4ebb-4f74-bb77-b6882ea82681-metrics-certs\") pod \"speaker-srk99\" (UID: \"c23ddd1f-4ebb-4f74-bb77-b6882ea82681\") " pod="metallb-system/speaker-srk99" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 
09:54:00.958417 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c23ddd1f-4ebb-4f74-bb77-b6882ea82681-memberlist\") pod \"speaker-srk99\" (UID: \"c23ddd1f-4ebb-4f74-bb77-b6882ea82681\") " pod="metallb-system/speaker-srk99" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.958452 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqh7h\" (UniqueName: \"kubernetes.io/projected/595fa3a7-5bff-454b-85d1-458f67728b2c-kube-api-access-tqh7h\") pod \"controller-5d688f5ffc-dhq9n\" (UID: \"595fa3a7-5bff-454b-85d1-458f67728b2c\") " pod="metallb-system/controller-5d688f5ffc-dhq9n" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.958504 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/01dcafc4-bfc8-44ea-af23-a5eedf41c331-reloader\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.958706 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/01dcafc4-bfc8-44ea-af23-a5eedf41c331-frr-conf\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.958772 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/01dcafc4-bfc8-44ea-af23-a5eedf41c331-metrics\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.958897 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/01dcafc4-bfc8-44ea-af23-a5eedf41c331-reloader\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:00 crc kubenswrapper[4991]: E0929 09:54:00.958904 4991 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Sep 29 09:54:00 crc kubenswrapper[4991]: E0929 09:54:00.959004 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01dcafc4-bfc8-44ea-af23-a5eedf41c331-metrics-certs podName:01dcafc4-bfc8-44ea-af23-a5eedf41c331 nodeName:}" failed. No retries permitted until 2025-09-29 09:54:01.458982357 +0000 UTC m=+977.314910475 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/01dcafc4-bfc8-44ea-af23-a5eedf41c331-metrics-certs") pod "frr-k8s-qb2cb" (UID: "01dcafc4-bfc8-44ea-af23-a5eedf41c331") : secret "frr-k8s-certs-secret" not found Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.959082 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/01dcafc4-bfc8-44ea-af23-a5eedf41c331-frr-startup\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.959280 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/01dcafc4-bfc8-44ea-af23-a5eedf41c331-frr-sockets\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.968783 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adccb4f9-5bc4-48fb-830a-32c6c889a9a7-cert\") pod \"frr-k8s-webhook-server-5478bdb765-c82r7\" (UID: \"adccb4f9-5bc4-48fb-830a-32c6c889a9a7\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-c82r7" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.976149 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv67d\" (UniqueName: \"kubernetes.io/projected/01dcafc4-bfc8-44ea-af23-a5eedf41c331-kube-api-access-nv67d\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:00 crc kubenswrapper[4991]: I0929 09:54:00.988702 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rhgn\" (UniqueName: \"kubernetes.io/projected/adccb4f9-5bc4-48fb-830a-32c6c889a9a7-kube-api-access-6rhgn\") pod \"frr-k8s-webhook-server-5478bdb765-c82r7\" (UID: \"adccb4f9-5bc4-48fb-830a-32c6c889a9a7\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-c82r7" Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.060169 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/595fa3a7-5bff-454b-85d1-458f67728b2c-metrics-certs\") pod \"controller-5d688f5ffc-dhq9n\" (UID: \"595fa3a7-5bff-454b-85d1-458f67728b2c\") " pod="metallb-system/controller-5d688f5ffc-dhq9n" Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.060213 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c23ddd1f-4ebb-4f74-bb77-b6882ea82681-metallb-excludel2\") pod \"speaker-srk99\" (UID: \"c23ddd1f-4ebb-4f74-bb77-b6882ea82681\") " pod="metallb-system/speaker-srk99" Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.060311 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c23ddd1f-4ebb-4f74-bb77-b6882ea82681-metrics-certs\") pod \"speaker-srk99\" (UID: \"c23ddd1f-4ebb-4f74-bb77-b6882ea82681\") " pod="metallb-system/speaker-srk99" Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.060333 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c23ddd1f-4ebb-4f74-bb77-b6882ea82681-memberlist\") pod \"speaker-srk99\" (UID: 
\"c23ddd1f-4ebb-4f74-bb77-b6882ea82681\") " pod="metallb-system/speaker-srk99" Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.060364 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqh7h\" (UniqueName: \"kubernetes.io/projected/595fa3a7-5bff-454b-85d1-458f67728b2c-kube-api-access-tqh7h\") pod \"controller-5d688f5ffc-dhq9n\" (UID: \"595fa3a7-5bff-454b-85d1-458f67728b2c\") " pod="metallb-system/controller-5d688f5ffc-dhq9n" Sep 29 09:54:01 crc kubenswrapper[4991]: E0929 09:54:01.060540 4991 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Sep 29 09:54:01 crc kubenswrapper[4991]: E0929 09:54:01.060595 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c23ddd1f-4ebb-4f74-bb77-b6882ea82681-memberlist podName:c23ddd1f-4ebb-4f74-bb77-b6882ea82681 nodeName:}" failed. No retries permitted until 2025-09-29 09:54:01.560578425 +0000 UTC m=+977.416506453 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c23ddd1f-4ebb-4f74-bb77-b6882ea82681-memberlist") pod "speaker-srk99" (UID: "c23ddd1f-4ebb-4f74-bb77-b6882ea82681") : secret "metallb-memberlist" not found Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.061325 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b26z7\" (UniqueName: \"kubernetes.io/projected/c23ddd1f-4ebb-4f74-bb77-b6882ea82681-kube-api-access-b26z7\") pod \"speaker-srk99\" (UID: \"c23ddd1f-4ebb-4f74-bb77-b6882ea82681\") " pod="metallb-system/speaker-srk99" Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.061547 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/595fa3a7-5bff-454b-85d1-458f67728b2c-cert\") pod \"controller-5d688f5ffc-dhq9n\" (UID: \"595fa3a7-5bff-454b-85d1-458f67728b2c\") " pod="metallb-system/controller-5d688f5ffc-dhq9n" Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.062461 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c23ddd1f-4ebb-4f74-bb77-b6882ea82681-metallb-excludel2\") pod \"speaker-srk99\" (UID: \"c23ddd1f-4ebb-4f74-bb77-b6882ea82681\") " pod="metallb-system/speaker-srk99" Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.067690 4991 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.067907 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c23ddd1f-4ebb-4f74-bb77-b6882ea82681-metrics-certs\") pod \"speaker-srk99\" (UID: \"c23ddd1f-4ebb-4f74-bb77-b6882ea82681\") " pod="metallb-system/speaker-srk99" Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.068083 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/595fa3a7-5bff-454b-85d1-458f67728b2c-metrics-certs\") pod \"controller-5d688f5ffc-dhq9n\" (UID: \"595fa3a7-5bff-454b-85d1-458f67728b2c\") " pod="metallb-system/controller-5d688f5ffc-dhq9n" Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.069299 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-c82r7" Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.076448 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/595fa3a7-5bff-454b-85d1-458f67728b2c-cert\") pod \"controller-5d688f5ffc-dhq9n\" (UID: \"595fa3a7-5bff-454b-85d1-458f67728b2c\") " pod="metallb-system/controller-5d688f5ffc-dhq9n" Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.085730 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b26z7\" (UniqueName: \"kubernetes.io/projected/c23ddd1f-4ebb-4f74-bb77-b6882ea82681-kube-api-access-b26z7\") pod \"speaker-srk99\" (UID: \"c23ddd1f-4ebb-4f74-bb77-b6882ea82681\") " pod="metallb-system/speaker-srk99" Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.096750 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqh7h\" (UniqueName: \"kubernetes.io/projected/595fa3a7-5bff-454b-85d1-458f67728b2c-kube-api-access-tqh7h\") pod \"controller-5d688f5ffc-dhq9n\" (UID: \"595fa3a7-5bff-454b-85d1-458f67728b2c\") " pod="metallb-system/controller-5d688f5ffc-dhq9n" Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.174415 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5d688f5ffc-dhq9n" Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.467646 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/01dcafc4-bfc8-44ea-af23-a5eedf41c331-metrics-certs\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.476446 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/01dcafc4-bfc8-44ea-af23-a5eedf41c331-metrics-certs\") pod \"frr-k8s-qb2cb\" (UID: \"01dcafc4-bfc8-44ea-af23-a5eedf41c331\") " pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.528981 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-5478bdb765-c82r7"] Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.569270 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c23ddd1f-4ebb-4f74-bb77-b6882ea82681-memberlist\") pod \"speaker-srk99\" (UID: \"c23ddd1f-4ebb-4f74-bb77-b6882ea82681\") " pod="metallb-system/speaker-srk99" Sep 29 09:54:01 crc kubenswrapper[4991]: E0929 09:54:01.569472 4991 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Sep 29 09:54:01 crc kubenswrapper[4991]: E0929 09:54:01.569554 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c23ddd1f-4ebb-4f74-bb77-b6882ea82681-memberlist podName:c23ddd1f-4ebb-4f74-bb77-b6882ea82681 nodeName:}" failed. No retries permitted until 2025-09-29 09:54:02.569537851 +0000 UTC m=+978.425465879 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c23ddd1f-4ebb-4f74-bb77-b6882ea82681-memberlist") pod "speaker-srk99" (UID: "c23ddd1f-4ebb-4f74-bb77-b6882ea82681") : secret "metallb-memberlist" not found Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.586906 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-c82r7" event={"ID":"adccb4f9-5bc4-48fb-830a-32c6c889a9a7","Type":"ContainerStarted","Data":"752220b29bbdfc0b61a0518a2b21eb6717667669320eedcf7f9e9c13dad7a39d"} Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.655396 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:01 crc kubenswrapper[4991]: I0929 09:54:01.721326 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5d688f5ffc-dhq9n"] Sep 29 09:54:01 crc kubenswrapper[4991]: W0929 09:54:01.727273 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod595fa3a7_5bff_454b_85d1_458f67728b2c.slice/crio-572b691478d2a2a6aa528a0f84d13b46d6f75670cdd890636d1f904d45df13cb WatchSource:0}: Error finding container 572b691478d2a2a6aa528a0f84d13b46d6f75670cdd890636d1f904d45df13cb: Status 404 returned error can't find the container with id 572b691478d2a2a6aa528a0f84d13b46d6f75670cdd890636d1f904d45df13cb Sep 29 09:54:02 crc kubenswrapper[4991]: I0929 09:54:02.585026 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c23ddd1f-4ebb-4f74-bb77-b6882ea82681-memberlist\") pod \"speaker-srk99\" (UID: \"c23ddd1f-4ebb-4f74-bb77-b6882ea82681\") " pod="metallb-system/speaker-srk99" Sep 29 09:54:02 crc kubenswrapper[4991]: I0929 09:54:02.602466 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c23ddd1f-4ebb-4f74-bb77-b6882ea82681-memberlist\") pod \"speaker-srk99\" (UID: \"c23ddd1f-4ebb-4f74-bb77-b6882ea82681\") " pod="metallb-system/speaker-srk99" Sep 29 09:54:02 crc kubenswrapper[4991]: I0929 09:54:02.624135 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qb2cb" event={"ID":"01dcafc4-bfc8-44ea-af23-a5eedf41c331","Type":"ContainerStarted","Data":"e16bf92a57eb47ba511abc556140497b3f00447c1ca257ef459805846394c502"} Sep 29 09:54:02 crc kubenswrapper[4991]: I0929 09:54:02.625794 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5d688f5ffc-dhq9n" event={"ID":"595fa3a7-5bff-454b-85d1-458f67728b2c","Type":"ContainerStarted","Data":"add605f315b71df44a70cfa5b099ef5608f12b204427428d7e0d971c3007464a"} Sep 29 09:54:02 crc kubenswrapper[4991]: I0929 09:54:02.625819 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5d688f5ffc-dhq9n" event={"ID":"595fa3a7-5bff-454b-85d1-458f67728b2c","Type":"ContainerStarted","Data":"a76e3be64865aba5adf79be39317e744ef9d3a376e557e2faefcb199654c01a0"} Sep 29 09:54:02 crc kubenswrapper[4991]: I0929 09:54:02.625828 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5d688f5ffc-dhq9n" event={"ID":"595fa3a7-5bff-454b-85d1-458f67728b2c","Type":"ContainerStarted","Data":"572b691478d2a2a6aa528a0f84d13b46d6f75670cdd890636d1f904d45df13cb"} Sep 29 09:54:02 crc kubenswrapper[4991]: I0929 09:54:02.625932 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/controller-5d688f5ffc-dhq9n" Sep 29 09:54:02 crc kubenswrapper[4991]: I0929 09:54:02.651743 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5d688f5ffc-dhq9n" podStartSLOduration=2.651720822 podStartE2EDuration="2.651720822s" podCreationTimestamp="2025-09-29 09:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:54:02.645880909 +0000 UTC m=+978.501808937" watchObservedRunningTime="2025-09-29 09:54:02.651720822 +0000 UTC m=+978.507648850" Sep 29 09:54:02 crc kubenswrapper[4991]: I0929 09:54:02.658395 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-srk99" Sep 29 09:54:03 crc kubenswrapper[4991]: I0929 09:54:03.649352 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-srk99" event={"ID":"c23ddd1f-4ebb-4f74-bb77-b6882ea82681","Type":"ContainerStarted","Data":"cb113a6ab0937f453cfe60b23dca99b6568c52082f548d624d4086f29bf7efe6"} Sep 29 09:54:03 crc kubenswrapper[4991]: I0929 09:54:03.649707 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-srk99" event={"ID":"c23ddd1f-4ebb-4f74-bb77-b6882ea82681","Type":"ContainerStarted","Data":"836ef925705942e1c60be4f7bdad69a53686f8033d21ff6573cbea1900723441"} Sep 29 09:54:03 crc kubenswrapper[4991]: I0929 09:54:03.649722 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-srk99" event={"ID":"c23ddd1f-4ebb-4f74-bb77-b6882ea82681","Type":"ContainerStarted","Data":"ed7a9eb6ce6dbc7f85448ee38e8d460638dad419ac5db4b0bbc8dd3eb0e788da"} Sep 29 09:54:03 crc kubenswrapper[4991]: I0929 09:54:03.650290 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-srk99" Sep 29 09:54:03 crc kubenswrapper[4991]: I0929 09:54:03.687054 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-srk99" podStartSLOduration=3.687024942 podStartE2EDuration="3.687024942s" podCreationTimestamp="2025-09-29 09:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:54:03.679545445 +0000 UTC m=+979.535473483" watchObservedRunningTime="2025-09-29 09:54:03.687024942 +0000 UTC m=+979.542952970" Sep 29 09:54:07 crc kubenswrapper[4991]: I0929 09:54:07.946398 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 09:54:07 crc kubenswrapper[4991]: I0929 09:54:07.947000 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 09:54:10 crc kubenswrapper[4991]: I0929 09:54:10.706901 4991 generic.go:334] "Generic (PLEG): container finished" podID="01dcafc4-bfc8-44ea-af23-a5eedf41c331" containerID="4c3b66096a35ce8b0843db377f7155cd936fe846b8c3185c02265acbd854ec88" exitCode=0 Sep 29 09:54:10 crc kubenswrapper[4991]: I0929 09:54:10.706973 4991 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="metallb-system/frr-k8s-qb2cb" event={"ID":"01dcafc4-bfc8-44ea-af23-a5eedf41c331","Type":"ContainerDied","Data":"4c3b66096a35ce8b0843db377f7155cd936fe846b8c3185c02265acbd854ec88"} Sep 29 09:54:10 crc kubenswrapper[4991]: I0929 09:54:10.709462 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-c82r7" event={"ID":"adccb4f9-5bc4-48fb-830a-32c6c889a9a7","Type":"ContainerStarted","Data":"a49eca6319bb7665e3ed416e1713e219c14c07eb8ed4c6e7931580e54e87e34b"} Sep 29 09:54:10 crc kubenswrapper[4991]: I0929 09:54:10.710051 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-c82r7" Sep 29 09:54:10 crc kubenswrapper[4991]: I0929 09:54:10.744781 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-c82r7" podStartSLOduration=2.495390367 podStartE2EDuration="10.744765184s" podCreationTimestamp="2025-09-29 09:54:00 +0000 UTC" firstStartedPulling="2025-09-29 09:54:01.536774331 +0000 UTC m=+977.392702359" lastFinishedPulling="2025-09-29 09:54:09.786149138 +0000 UTC m=+985.642077176" observedRunningTime="2025-09-29 09:54:10.743960553 +0000 UTC m=+986.599888581" watchObservedRunningTime="2025-09-29 09:54:10.744765184 +0000 UTC m=+986.600693212" Sep 29 09:54:11 crc kubenswrapper[4991]: I0929 09:54:11.179926 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5d688f5ffc-dhq9n" Sep 29 09:54:11 crc kubenswrapper[4991]: I0929 09:54:11.719752 4991 generic.go:334] "Generic (PLEG): container finished" podID="01dcafc4-bfc8-44ea-af23-a5eedf41c331" containerID="7a13e9bad702e73f0ed3171ad8b801f86b128292e2d9ec8ef408c2e297c853ca" exitCode=0 Sep 29 09:54:11 crc kubenswrapper[4991]: I0929 09:54:11.719805 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qb2cb" event={"ID":"01dcafc4-bfc8-44ea-af23-a5eedf41c331","Type":"ContainerDied","Data":"7a13e9bad702e73f0ed3171ad8b801f86b128292e2d9ec8ef408c2e297c853ca"} Sep 29 09:54:12 crc kubenswrapper[4991]: I0929 09:54:12.662741 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-srk99" Sep 29 09:54:12 crc kubenswrapper[4991]: I0929 09:54:12.729305 4991 generic.go:334] "Generic (PLEG): container finished" podID="01dcafc4-bfc8-44ea-af23-a5eedf41c331" containerID="bcc700d6570a997dbad54e1448b1327505a23f9f1e7b1bfd6bbbf7f512e73fd6" exitCode=0 Sep 29 09:54:12 crc kubenswrapper[4991]: I0929 09:54:12.729372 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qb2cb" event={"ID":"01dcafc4-bfc8-44ea-af23-a5eedf41c331","Type":"ContainerDied","Data":"bcc700d6570a997dbad54e1448b1327505a23f9f1e7b1bfd6bbbf7f512e73fd6"} Sep 29 09:54:13 crc kubenswrapper[4991]: I0929 09:54:13.741011 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qb2cb" event={"ID":"01dcafc4-bfc8-44ea-af23-a5eedf41c331","Type":"ContainerStarted","Data":"d5f4fa69fad2c06100a75dadec5028fec93cd058b34138a0231cd199971cd8fa"} Sep 29 09:54:13 crc kubenswrapper[4991]: I0929 09:54:13.742031 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qb2cb" event={"ID":"01dcafc4-bfc8-44ea-af23-a5eedf41c331","Type":"ContainerStarted","Data":"6f6f71a1001c028a37c7de3d52a2ebd37f8a62a313b464648cecd273d397da27"} Sep 29 09:54:15 crc kubenswrapper[4991]: I0929 09:54:15.759262 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-qb2cb" event={"ID":"01dcafc4-bfc8-44ea-af23-a5eedf41c331","Type":"ContainerStarted","Data":"7c890f3b74cc638deaf94e5d9f2456384377a60297121347a947644fb80ab8e1"} Sep 29 09:54:16 crc kubenswrapper[4991]: I0929 09:54:16.771570 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qb2cb" event={"ID":"01dcafc4-bfc8-44ea-af23-a5eedf41c331","Type":"ContainerStarted","Data":"d188223552f3e0197a24217e03941a65ed4b27eecdb6ff51494abc62daf9425c"} Sep 29 09:54:16 crc kubenswrapper[4991]: I0929 09:54:16.772205 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qb2cb" event={"ID":"01dcafc4-bfc8-44ea-af23-a5eedf41c331","Type":"ContainerStarted","Data":"44a6c1b23632d969ee7b7b4d642e1fffa0ca5b4ac57fa3bfe25471229ae3345b"} Sep 29 09:54:17 crc kubenswrapper[4991]: I0929 09:54:17.782274 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qb2cb" event={"ID":"01dcafc4-bfc8-44ea-af23-a5eedf41c331","Type":"ContainerStarted","Data":"0350e0ad1da63bbad7d3b6d8b7777df45fad956e32c030f133fb31f02c2d153a"} Sep 29 09:54:17 crc kubenswrapper[4991]: I0929 09:54:17.783264 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:17 crc kubenswrapper[4991]: I0929 09:54:17.807008 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-qb2cb" podStartSLOduration=9.876755688 podStartE2EDuration="17.806990164s" podCreationTimestamp="2025-09-29 09:54:00 +0000 UTC" firstStartedPulling="2025-09-29 09:54:01.87638002 +0000 UTC m=+977.732308048" lastFinishedPulling="2025-09-29 09:54:09.806614496 +0000 UTC m=+985.662542524" observedRunningTime="2025-09-29 09:54:17.805464054 +0000 UTC m=+993.661392092" watchObservedRunningTime="2025-09-29 09:54:17.806990164 +0000 UTC m=+993.662918192" Sep 29 09:54:20 crc kubenswrapper[4991]: I0929 09:54:20.995202 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-9vwmv"] Sep 29 09:54:20 crc kubenswrapper[4991]: I0929 09:54:20.996552 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9vwmv" Sep 29 09:54:20 crc kubenswrapper[4991]: I0929 09:54:20.998471 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-lrd9n" Sep 29 09:54:21 crc kubenswrapper[4991]: I0929 09:54:21.000243 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Sep 29 09:54:21 crc kubenswrapper[4991]: I0929 09:54:21.000481 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Sep 29 09:54:21 crc kubenswrapper[4991]: I0929 09:54:21.004722 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9vwmv"] Sep 29 09:54:21 crc kubenswrapper[4991]: I0929 09:54:21.085641 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-c82r7" Sep 29 09:54:21 crc kubenswrapper[4991]: I0929 09:54:21.113352 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmbjs\" (UniqueName: \"kubernetes.io/projected/509dcced-c7e6-454f-a198-aa95e335527b-kube-api-access-fmbjs\") pod \"openstack-operator-index-9vwmv\" (UID: \"509dcced-c7e6-454f-a198-aa95e335527b\") " pod="openstack-operators/openstack-operator-index-9vwmv" Sep 29 09:54:21 crc kubenswrapper[4991]: I0929 09:54:21.215249 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmbjs\" (UniqueName: \"kubernetes.io/projected/509dcced-c7e6-454f-a198-aa95e335527b-kube-api-access-fmbjs\") pod \"openstack-operator-index-9vwmv\" (UID: \"509dcced-c7e6-454f-a198-aa95e335527b\") " pod="openstack-operators/openstack-operator-index-9vwmv" Sep 29 09:54:21 crc kubenswrapper[4991]: I0929 09:54:21.240219 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmbjs\" (UniqueName: \"kubernetes.io/projected/509dcced-c7e6-454f-a198-aa95e335527b-kube-api-access-fmbjs\") pod \"openstack-operator-index-9vwmv\" (UID: \"509dcced-c7e6-454f-a198-aa95e335527b\") " pod="openstack-operators/openstack-operator-index-9vwmv" Sep 29 09:54:21 crc kubenswrapper[4991]: I0929 09:54:21.318214 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9vwmv" Sep 29 09:54:21 crc kubenswrapper[4991]: I0929 09:54:21.655755 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:21 crc kubenswrapper[4991]: I0929 09:54:21.695243 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:21 crc kubenswrapper[4991]: I0929 09:54:21.746569 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9vwmv"] Sep 29 09:54:21 crc kubenswrapper[4991]: W0929 09:54:21.747466 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod509dcced_c7e6_454f_a198_aa95e335527b.slice/crio-ddb546bbe940ee6d1ce8b010343920a383956e3e10e13269231d564a44ec6a4b WatchSource:0}: Error finding container ddb546bbe940ee6d1ce8b010343920a383956e3e10e13269231d564a44ec6a4b: Status 404 returned error can't find the container with id ddb546bbe940ee6d1ce8b010343920a383956e3e10e13269231d564a44ec6a4b Sep 29 09:54:21 crc kubenswrapper[4991]: I0929 09:54:21.812496 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9vwmv" event={"ID":"509dcced-c7e6-454f-a198-aa95e335527b","Type":"ContainerStarted","Data":"ddb546bbe940ee6d1ce8b010343920a383956e3e10e13269231d564a44ec6a4b"} Sep 29 09:54:27 crc kubenswrapper[4991]: I0929 09:54:27.858882 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9vwmv" event={"ID":"509dcced-c7e6-454f-a198-aa95e335527b","Type":"ContainerStarted","Data":"f6c1250dc810fe4c3dfbc80b2ba08bf4ab11b641a5ca806f39d62abb9779989c"} Sep 29 09:54:27 crc kubenswrapper[4991]: I0929 09:54:27.880018 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-9vwmv" podStartSLOduration=2.116103131 podStartE2EDuration="7.879994344s" podCreationTimestamp="2025-09-29 09:54:20 +0000 UTC" firstStartedPulling="2025-09-29 09:54:21.749271357 +0000 UTC m=+997.605199385" lastFinishedPulling="2025-09-29 09:54:27.51316257 +0000 UTC m=+1003.369090598" observedRunningTime="2025-09-29 09:54:27.873547044 +0000 UTC m=+1003.729475072" watchObservedRunningTime="2025-09-29 09:54:27.879994344 +0000 UTC m=+1003.735922372" Sep 29 09:54:31 crc kubenswrapper[4991]: I0929 09:54:31.319175 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-9vwmv" Sep 29 09:54:31 crc kubenswrapper[4991]: I0929 09:54:31.319497 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-9vwmv" Sep 29 09:54:31 crc kubenswrapper[4991]: I0929 09:54:31.352832 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-9vwmv" Sep 29 09:54:31 crc kubenswrapper[4991]: I0929 09:54:31.658742 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-qb2cb" Sep 29 09:54:37 crc kubenswrapper[4991]: I0929 09:54:37.946945 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 09:54:37 
crc kubenswrapper[4991]: I0929 09:54:37.947337 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 09:54:41 crc kubenswrapper[4991]: I0929 09:54:41.357781 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-9vwmv" Sep 29 09:54:52 crc kubenswrapper[4991]: I0929 09:54:52.901131 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb"] Sep 29 09:54:52 crc kubenswrapper[4991]: I0929 09:54:52.903838 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb" Sep 29 09:54:52 crc kubenswrapper[4991]: I0929 09:54:52.906385 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-42ckj" Sep 29 09:54:52 crc kubenswrapper[4991]: I0929 09:54:52.913943 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb"] Sep 29 09:54:52 crc kubenswrapper[4991]: I0929 09:54:52.999748 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/164835c5-ecea-452d-9652-f252751a6b25-util\") pod \"6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb\" (UID: \"164835c5-ecea-452d-9652-f252751a6b25\") " pod="openstack-operators/6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb" Sep 29 09:54:53 crc kubenswrapper[4991]: I0929 09:54:53.000207 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zgh9\" (UniqueName: \"kubernetes.io/projected/164835c5-ecea-452d-9652-f252751a6b25-kube-api-access-8zgh9\") pod \"6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb\" (UID: \"164835c5-ecea-452d-9652-f252751a6b25\") " pod="openstack-operators/6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb" Sep 29 09:54:53 crc kubenswrapper[4991]: I0929 09:54:53.000373 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/164835c5-ecea-452d-9652-f252751a6b25-bundle\") pod \"6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb\" (UID: \"164835c5-ecea-452d-9652-f252751a6b25\") " pod="openstack-operators/6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb" Sep 29 09:54:53 crc kubenswrapper[4991]: I0929 09:54:53.101537 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/164835c5-ecea-452d-9652-f252751a6b25-bundle\") pod \"6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb\" (UID: \"164835c5-ecea-452d-9652-f252751a6b25\") " pod="openstack-operators/6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb" Sep 29 09:54:53 crc kubenswrapper[4991]: I0929 09:54:53.101644 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/164835c5-ecea-452d-9652-f252751a6b25-util\") pod 
\"6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb\" (UID: \"164835c5-ecea-452d-9652-f252751a6b25\") " pod="openstack-operators/6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb" Sep 29 09:54:53 crc kubenswrapper[4991]: I0929 09:54:53.101804 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zgh9\" (UniqueName: \"kubernetes.io/projected/164835c5-ecea-452d-9652-f252751a6b25-kube-api-access-8zgh9\") pod \"6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb\" (UID: \"164835c5-ecea-452d-9652-f252751a6b25\") " pod="openstack-operators/6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb" Sep 29 09:54:53 crc kubenswrapper[4991]: I0929 09:54:53.102183 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/164835c5-ecea-452d-9652-f252751a6b25-bundle\") pod \"6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb\" (UID: \"164835c5-ecea-452d-9652-f252751a6b25\") " pod="openstack-operators/6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb" Sep 29 09:54:53 crc kubenswrapper[4991]: I0929 09:54:53.102587 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/164835c5-ecea-452d-9652-f252751a6b25-util\") pod \"6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb\" (UID: \"164835c5-ecea-452d-9652-f252751a6b25\") " pod="openstack-operators/6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb" Sep 29 09:54:53 crc kubenswrapper[4991]: I0929 09:54:53.131145 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zgh9\" (UniqueName: \"kubernetes.io/projected/164835c5-ecea-452d-9652-f252751a6b25-kube-api-access-8zgh9\") pod \"6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb\" (UID: \"164835c5-ecea-452d-9652-f252751a6b25\") " pod="openstack-operators/6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb" Sep 29 09:54:53 crc kubenswrapper[4991]: I0929 09:54:53.225883 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb" Sep 29 09:54:53 crc kubenswrapper[4991]: I0929 09:54:53.652118 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb"] Sep 29 09:54:54 crc kubenswrapper[4991]: I0929 09:54:54.068982 4991 generic.go:334] "Generic (PLEG): container finished" podID="164835c5-ecea-452d-9652-f252751a6b25" containerID="c4a45c93a08d7609c2615c03da9b5261b0d9b7a4cd8dfee995d010846d8f7176" exitCode=0 Sep 29 09:54:54 crc kubenswrapper[4991]: I0929 09:54:54.069091 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb" event={"ID":"164835c5-ecea-452d-9652-f252751a6b25","Type":"ContainerDied","Data":"c4a45c93a08d7609c2615c03da9b5261b0d9b7a4cd8dfee995d010846d8f7176"} Sep 29 09:54:54 crc kubenswrapper[4991]: I0929 09:54:54.069263 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb" event={"ID":"164835c5-ecea-452d-9652-f252751a6b25","Type":"ContainerStarted","Data":"7ef7aa4f9504b5e67298f059e7b74246f6f84a361a9dfc705f70069d481cb506"} Sep 29 09:54:55 crc kubenswrapper[4991]: I0929 09:54:55.078814 4991 generic.go:334] "Generic (PLEG): container finished" podID="164835c5-ecea-452d-9652-f252751a6b25" containerID="63d92fbf4d566e5997c5e000c69ba63accff86ebecb6df25d858fee34a2b52d6" exitCode=0 Sep 29 09:54:55 crc kubenswrapper[4991]: I0929 09:54:55.079119 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb" event={"ID":"164835c5-ecea-452d-9652-f252751a6b25","Type":"ContainerDied","Data":"63d92fbf4d566e5997c5e000c69ba63accff86ebecb6df25d858fee34a2b52d6"} Sep 29 09:54:56 crc kubenswrapper[4991]: I0929 09:54:56.118427 4991 generic.go:334] "Generic (PLEG): container finished" podID="164835c5-ecea-452d-9652-f252751a6b25" containerID="506039c32705403d07d0c762d55d75612ea9bbd643d64dfd04f63bce88622384" exitCode=0 Sep 29 09:54:56 crc kubenswrapper[4991]: I0929 09:54:56.118610 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb" event={"ID":"164835c5-ecea-452d-9652-f252751a6b25","Type":"ContainerDied","Data":"506039c32705403d07d0c762d55d75612ea9bbd643d64dfd04f63bce88622384"} Sep 29 09:54:57 crc kubenswrapper[4991]: I0929 09:54:57.439627 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb" Sep 29 09:54:57 crc kubenswrapper[4991]: I0929 09:54:57.580109 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/164835c5-ecea-452d-9652-f252751a6b25-util\") pod \"164835c5-ecea-452d-9652-f252751a6b25\" (UID: \"164835c5-ecea-452d-9652-f252751a6b25\") " Sep 29 09:54:57 crc kubenswrapper[4991]: I0929 09:54:57.580198 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/164835c5-ecea-452d-9652-f252751a6b25-bundle\") pod \"164835c5-ecea-452d-9652-f252751a6b25\" (UID: \"164835c5-ecea-452d-9652-f252751a6b25\") " Sep 29 09:54:57 crc kubenswrapper[4991]: I0929 09:54:57.580465 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zgh9\" (UniqueName: \"kubernetes.io/projected/164835c5-ecea-452d-9652-f252751a6b25-kube-api-access-8zgh9\") pod \"164835c5-ecea-452d-9652-f252751a6b25\" (UID: \"164835c5-ecea-452d-9652-f252751a6b25\") " Sep 29 09:54:57 crc kubenswrapper[4991]: I0929 09:54:57.581336 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/164835c5-ecea-452d-9652-f252751a6b25-bundle" (OuterVolumeSpecName: "bundle") pod "164835c5-ecea-452d-9652-f252751a6b25" (UID: "164835c5-ecea-452d-9652-f252751a6b25"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:54:57 crc kubenswrapper[4991]: I0929 09:54:57.582455 4991 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/164835c5-ecea-452d-9652-f252751a6b25-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:54:57 crc kubenswrapper[4991]: I0929 09:54:57.592048 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/164835c5-ecea-452d-9652-f252751a6b25-kube-api-access-8zgh9" (OuterVolumeSpecName: "kube-api-access-8zgh9") pod "164835c5-ecea-452d-9652-f252751a6b25" (UID: "164835c5-ecea-452d-9652-f252751a6b25"). InnerVolumeSpecName "kube-api-access-8zgh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:54:57 crc kubenswrapper[4991]: I0929 09:54:57.594823 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/164835c5-ecea-452d-9652-f252751a6b25-util" (OuterVolumeSpecName: "util") pod "164835c5-ecea-452d-9652-f252751a6b25" (UID: "164835c5-ecea-452d-9652-f252751a6b25"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:54:57 crc kubenswrapper[4991]: I0929 09:54:57.684254 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zgh9\" (UniqueName: \"kubernetes.io/projected/164835c5-ecea-452d-9652-f252751a6b25-kube-api-access-8zgh9\") on node \"crc\" DevicePath \"\"" Sep 29 09:54:57 crc kubenswrapper[4991]: I0929 09:54:57.684302 4991 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/164835c5-ecea-452d-9652-f252751a6b25-util\") on node \"crc\" DevicePath \"\"" Sep 29 09:54:58 crc kubenswrapper[4991]: I0929 09:54:58.140272 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb" event={"ID":"164835c5-ecea-452d-9652-f252751a6b25","Type":"ContainerDied","Data":"7ef7aa4f9504b5e67298f059e7b74246f6f84a361a9dfc705f70069d481cb506"} Sep 29 09:54:58 crc kubenswrapper[4991]: I0929 09:54:58.140326 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ef7aa4f9504b5e67298f059e7b74246f6f84a361a9dfc705f70069d481cb506" Sep 29 09:54:58 crc kubenswrapper[4991]: I0929 09:54:58.140343 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb" Sep 29 09:55:05 crc kubenswrapper[4991]: I0929 09:55:05.572375 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-78bbdfbc57-k9wlt"] Sep 29 09:55:05 crc kubenswrapper[4991]: E0929 09:55:05.573220 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="164835c5-ecea-452d-9652-f252751a6b25" containerName="pull" Sep 29 09:55:05 crc kubenswrapper[4991]: I0929 09:55:05.573234 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="164835c5-ecea-452d-9652-f252751a6b25" containerName="pull" Sep 29 09:55:05 crc kubenswrapper[4991]: E0929 09:55:05.573249 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="164835c5-ecea-452d-9652-f252751a6b25" containerName="util" Sep 29 09:55:05 crc kubenswrapper[4991]: I0929 09:55:05.573255 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="164835c5-ecea-452d-9652-f252751a6b25" containerName="util" Sep 29 09:55:05 crc kubenswrapper[4991]: E0929 09:55:05.573278 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="164835c5-ecea-452d-9652-f252751a6b25" containerName="extract" Sep 29 09:55:05 crc kubenswrapper[4991]: I0929 09:55:05.573285 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="164835c5-ecea-452d-9652-f252751a6b25" containerName="extract" Sep 29 09:55:05 crc kubenswrapper[4991]: I0929 09:55:05.573409 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="164835c5-ecea-452d-9652-f252751a6b25" containerName="extract" Sep 29 09:55:05 crc kubenswrapper[4991]: I0929 09:55:05.574175 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-78bbdfbc57-k9wlt" Sep 29 09:55:05 crc kubenswrapper[4991]: I0929 09:55:05.581064 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-d22sm" Sep 29 09:55:05 crc kubenswrapper[4991]: I0929 09:55:05.597331 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-78bbdfbc57-k9wlt"] Sep 29 09:55:05 crc kubenswrapper[4991]: I0929 09:55:05.719267 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4txc\" (UniqueName: \"kubernetes.io/projected/28ffec7f-9b5d-45ce-9ef4-8a45a9dd0b5d-kube-api-access-h4txc\") pod \"openstack-operator-controller-operator-78bbdfbc57-k9wlt\" (UID: \"28ffec7f-9b5d-45ce-9ef4-8a45a9dd0b5d\") " pod="openstack-operators/openstack-operator-controller-operator-78bbdfbc57-k9wlt" Sep 29 09:55:05 crc kubenswrapper[4991]: I0929 09:55:05.820793 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4txc\" (UniqueName: \"kubernetes.io/projected/28ffec7f-9b5d-45ce-9ef4-8a45a9dd0b5d-kube-api-access-h4txc\") pod \"openstack-operator-controller-operator-78bbdfbc57-k9wlt\" (UID: \"28ffec7f-9b5d-45ce-9ef4-8a45a9dd0b5d\") " pod="openstack-operators/openstack-operator-controller-operator-78bbdfbc57-k9wlt" Sep 29 09:55:05 crc kubenswrapper[4991]: I0929 09:55:05.840489 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4txc\" (UniqueName: \"kubernetes.io/projected/28ffec7f-9b5d-45ce-9ef4-8a45a9dd0b5d-kube-api-access-h4txc\") pod \"openstack-operator-controller-operator-78bbdfbc57-k9wlt\" (UID: \"28ffec7f-9b5d-45ce-9ef4-8a45a9dd0b5d\") " pod="openstack-operators/openstack-operator-controller-operator-78bbdfbc57-k9wlt" Sep 29 09:55:05 crc kubenswrapper[4991]: I0929 09:55:05.897596 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-78bbdfbc57-k9wlt" Sep 29 09:55:06 crc kubenswrapper[4991]: I0929 09:55:06.366750 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-78bbdfbc57-k9wlt"] Sep 29 09:55:06 crc kubenswrapper[4991]: W0929 09:55:06.379706 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28ffec7f_9b5d_45ce_9ef4_8a45a9dd0b5d.slice/crio-85ea8bab3deeecfaae66aaa6cee26bb0b0dfeb7946d235c2f6f8fad6397d6c40 WatchSource:0}: Error finding container 85ea8bab3deeecfaae66aaa6cee26bb0b0dfeb7946d235c2f6f8fad6397d6c40: Status 404 returned error can't find the container with id 85ea8bab3deeecfaae66aaa6cee26bb0b0dfeb7946d235c2f6f8fad6397d6c40 Sep 29 09:55:07 crc kubenswrapper[4991]: I0929 09:55:07.216111 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-78bbdfbc57-k9wlt" event={"ID":"28ffec7f-9b5d-45ce-9ef4-8a45a9dd0b5d","Type":"ContainerStarted","Data":"85ea8bab3deeecfaae66aaa6cee26bb0b0dfeb7946d235c2f6f8fad6397d6c40"} Sep 29 09:55:07 crc kubenswrapper[4991]: I0929 09:55:07.951525 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 09:55:07 crc kubenswrapper[4991]: I0929 09:55:07.951597 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 09:55:07 crc kubenswrapper[4991]: I0929 09:55:07.951642 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 09:55:07 crc kubenswrapper[4991]: I0929 09:55:07.954609 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e7779fdb3bc2d0c900ebebd790f1060c4e3f133862501be62eb494f0ee1b0541"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 09:55:07 crc kubenswrapper[4991]: I0929 09:55:07.954691 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://e7779fdb3bc2d0c900ebebd790f1060c4e3f133862501be62eb494f0ee1b0541" gracePeriod=600 Sep 29 09:55:08 crc kubenswrapper[4991]: I0929 09:55:08.237066 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="e7779fdb3bc2d0c900ebebd790f1060c4e3f133862501be62eb494f0ee1b0541" exitCode=0 Sep 29 09:55:08 crc kubenswrapper[4991]: I0929 09:55:08.237105 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" 
event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"e7779fdb3bc2d0c900ebebd790f1060c4e3f133862501be62eb494f0ee1b0541"} Sep 29 09:55:08 crc kubenswrapper[4991]: I0929 09:55:08.237159 4991 scope.go:117] "RemoveContainer" containerID="a15f13cc985510f6f7295356c1768851fe16cd07a673836c646f2a0f6e9e542c" Sep 29 09:55:11 crc kubenswrapper[4991]: I0929 09:55:11.261085 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-78bbdfbc57-k9wlt" event={"ID":"28ffec7f-9b5d-45ce-9ef4-8a45a9dd0b5d","Type":"ContainerStarted","Data":"94dfda50d2cd9b7aedf43f6fa9a748a0aee6c8b636a865119c4aa6bbdc059231"} Sep 29 09:55:11 crc kubenswrapper[4991]: I0929 09:55:11.263862 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"dd958863ece520e3b95f59e822742012518d2caf2d8e6d1053c23d5cf887fc5a"} Sep 29 09:55:14 crc kubenswrapper[4991]: I0929 09:55:14.287007 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-78bbdfbc57-k9wlt" event={"ID":"28ffec7f-9b5d-45ce-9ef4-8a45a9dd0b5d","Type":"ContainerStarted","Data":"a4acb9049b0db9ff5e3755c5410317ad2ff63b5e9cb44d2f05d4d1c3444a2253"} Sep 29 09:55:14 crc kubenswrapper[4991]: I0929 09:55:14.287870 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-78bbdfbc57-k9wlt" Sep 29 09:55:14 crc kubenswrapper[4991]: I0929 09:55:14.322688 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-78bbdfbc57-k9wlt" podStartSLOduration=2.589382904 podStartE2EDuration="9.322669445s" podCreationTimestamp="2025-09-29 09:55:05 +0000 UTC" firstStartedPulling="2025-09-29 09:55:06.386643357 +0000 UTC m=+1042.242571385" lastFinishedPulling="2025-09-29 09:55:13.119929898 +0000 UTC m=+1048.975857926" observedRunningTime="2025-09-29 09:55:14.312383135 +0000 UTC m=+1050.168311173" watchObservedRunningTime="2025-09-29 09:55:14.322669445 +0000 UTC m=+1050.178597473" Sep 29 09:55:15 crc kubenswrapper[4991]: I0929 09:55:15.298350 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-78bbdfbc57-k9wlt" Sep 29 09:55:35 crc kubenswrapper[4991]: I0929 09:55:35.850201 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-748c574d75-bjlxb"] Sep 29 09:55:35 crc kubenswrapper[4991]: I0929 09:55:35.852637 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-748c574d75-bjlxb" Sep 29 09:55:35 crc kubenswrapper[4991]: I0929 09:55:35.855381 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkdw7\" (UniqueName: \"kubernetes.io/projected/64c94e01-ecee-47b4-ab4b-182085a9dce5-kube-api-access-pkdw7\") pod \"cinder-operator-controller-manager-748c574d75-bjlxb\" (UID: \"64c94e01-ecee-47b4-ab4b-182085a9dce5\") " pod="openstack-operators/cinder-operator-controller-manager-748c574d75-bjlxb" Sep 29 09:55:35 crc kubenswrapper[4991]: I0929 09:55:35.869317 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-7jdrc" Sep 29 09:55:35 crc kubenswrapper[4991]: I0929 09:55:35.869517 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6495d75b5-dpcb9"] Sep 29 09:55:35 crc kubenswrapper[4991]: I0929 09:55:35.871131 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6495d75b5-dpcb9" Sep 29 09:55:35 crc kubenswrapper[4991]: I0929 09:55:35.879319 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-wrb9c" Sep 29 09:55:35 crc kubenswrapper[4991]: I0929 09:55:35.894860 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d74f4d695-c74zk"] Sep 29 09:55:35 crc kubenswrapper[4991]: I0929 09:55:35.896180 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d74f4d695-c74zk" Sep 29 09:55:35 crc kubenswrapper[4991]: I0929 09:55:35.898673 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-hdfvn" Sep 29 09:55:35 crc kubenswrapper[4991]: I0929 09:55:35.924795 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-748c574d75-bjlxb"] Sep 29 09:55:35 crc kubenswrapper[4991]: I0929 09:55:35.939799 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d74f4d695-c74zk"] Sep 29 09:55:35 crc kubenswrapper[4991]: I0929 09:55:35.957234 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkdw7\" (UniqueName: \"kubernetes.io/projected/64c94e01-ecee-47b4-ab4b-182085a9dce5-kube-api-access-pkdw7\") pod \"cinder-operator-controller-manager-748c574d75-bjlxb\" (UID: \"64c94e01-ecee-47b4-ab4b-182085a9dce5\") " pod="openstack-operators/cinder-operator-controller-manager-748c574d75-bjlxb" Sep 29 09:55:35 crc kubenswrapper[4991]: I0929 09:55:35.964018 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-67b5d44b7f-8969b"] Sep 29 09:55:35 crc kubenswrapper[4991]: I0929 09:55:35.965296 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67b5d44b7f-8969b" Sep 29 09:55:35 crc kubenswrapper[4991]: I0929 09:55:35.967438 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-j9dbw" Sep 29 09:55:35 crc kubenswrapper[4991]: I0929 09:55:35.975800 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6495d75b5-dpcb9"] Sep 29 09:55:35 crc kubenswrapper[4991]: I0929 09:55:35.993067 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67b5d44b7f-8969b"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.006411 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkdw7\" (UniqueName: \"kubernetes.io/projected/64c94e01-ecee-47b4-ab4b-182085a9dce5-kube-api-access-pkdw7\") pod \"cinder-operator-controller-manager-748c574d75-bjlxb\" (UID: \"64c94e01-ecee-47b4-ab4b-182085a9dce5\") " pod="openstack-operators/cinder-operator-controller-manager-748c574d75-bjlxb" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.006589 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-8ff95898-2svcm"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.008037 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-8ff95898-2svcm" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.012131 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-lgqdx" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.018591 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-695847bc78-v4kx7"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.020300 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-695847bc78-v4kx7" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.025515 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-8bh2s" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.034335 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-8ff95898-2svcm"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.052597 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-695847bc78-v4kx7"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.059408 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9b6m\" (UniqueName: \"kubernetes.io/projected/861ab488-585c-407a-bfe6-97e8d01f20e6-kube-api-access-n9b6m\") pod \"designate-operator-controller-manager-7d74f4d695-c74zk\" (UID: \"861ab488-585c-407a-bfe6-97e8d01f20e6\") " pod="openstack-operators/designate-operator-controller-manager-7d74f4d695-c74zk" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.059622 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z7v6\" (UniqueName: \"kubernetes.io/projected/9cdd1ad5-293a-456e-8313-e23a3140f8f5-kube-api-access-5z7v6\") pod \"horizon-operator-controller-manager-695847bc78-v4kx7\" (UID: \"9cdd1ad5-293a-456e-8313-e23a3140f8f5\") " pod="openstack-operators/horizon-operator-controller-manager-695847bc78-v4kx7" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.059702 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fqww\" (UniqueName: \"kubernetes.io/projected/663f9d51-4014-4b2d-b44d-96dca010a1f4-kube-api-access-9fqww\") pod \"heat-operator-controller-manager-8ff95898-2svcm\" (UID: \"663f9d51-4014-4b2d-b44d-96dca010a1f4\") " pod="openstack-operators/heat-operator-controller-manager-8ff95898-2svcm" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.059778 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwtgr\" (UniqueName: \"kubernetes.io/projected/f0b63357-310d-4b63-9ba9-212c0f3c6dd4-kube-api-access-nwtgr\") pod \"barbican-operator-controller-manager-6495d75b5-dpcb9\" (UID: \"f0b63357-310d-4b63-9ba9-212c0f3c6dd4\") " pod="openstack-operators/barbican-operator-controller-manager-6495d75b5-dpcb9" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.059845 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxgsq\" (UniqueName: \"kubernetes.io/projected/55a2358d-e838-4581-906b-ec7f1a3117bf-kube-api-access-qxgsq\") pod \"glance-operator-controller-manager-67b5d44b7f-8969b\" (UID: \"55a2358d-e838-4581-906b-ec7f1a3117bf\") " pod="openstack-operators/glance-operator-controller-manager-67b5d44b7f-8969b" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.069267 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-858cd69f49-lktsl"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.070786 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-858cd69f49-lktsl" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.075577 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-9fc8d5567-6kksz"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.077230 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-9fc8d5567-6kksz" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.082462 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-x25vk" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.082725 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.082969 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-nsfdh" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.083296 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-858cd69f49-lktsl"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.098648 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-9fc8d5567-6kksz"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.136912 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7bf498966c-bjf5h"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.139348 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7bf498966c-bjf5h" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.142015 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7bf498966c-bjf5h"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.142271 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-dtsw4" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.155398 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-56cf9c6b99-vvlp2"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.157184 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-56cf9c6b99-vvlp2" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.162682 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9b6m\" (UniqueName: \"kubernetes.io/projected/861ab488-585c-407a-bfe6-97e8d01f20e6-kube-api-access-n9b6m\") pod \"designate-operator-controller-manager-7d74f4d695-c74zk\" (UID: \"861ab488-585c-407a-bfe6-97e8d01f20e6\") " pod="openstack-operators/designate-operator-controller-manager-7d74f4d695-c74zk" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.162768 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z7v6\" (UniqueName: \"kubernetes.io/projected/9cdd1ad5-293a-456e-8313-e23a3140f8f5-kube-api-access-5z7v6\") pod \"horizon-operator-controller-manager-695847bc78-v4kx7\" (UID: \"9cdd1ad5-293a-456e-8313-e23a3140f8f5\") " pod="openstack-operators/horizon-operator-controller-manager-695847bc78-v4kx7" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.162807 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fqww\" (UniqueName: \"kubernetes.io/projected/663f9d51-4014-4b2d-b44d-96dca010a1f4-kube-api-access-9fqww\") pod \"heat-operator-controller-manager-8ff95898-2svcm\" (UID: \"663f9d51-4014-4b2d-b44d-96dca010a1f4\") " pod="openstack-operators/heat-operator-controller-manager-8ff95898-2svcm" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.162849 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwtgr\" (UniqueName: \"kubernetes.io/projected/f0b63357-310d-4b63-9ba9-212c0f3c6dd4-kube-api-access-nwtgr\") pod \"barbican-operator-controller-manager-6495d75b5-dpcb9\" (UID: \"f0b63357-310d-4b63-9ba9-212c0f3c6dd4\") " pod="openstack-operators/barbican-operator-controller-manager-6495d75b5-dpcb9" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.162873 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxgsq\" (UniqueName: \"kubernetes.io/projected/55a2358d-e838-4581-906b-ec7f1a3117bf-kube-api-access-qxgsq\") pod \"glance-operator-controller-manager-67b5d44b7f-8969b\" (UID: \"55a2358d-e838-4581-906b-ec7f1a3117bf\") " pod="openstack-operators/glance-operator-controller-manager-67b5d44b7f-8969b" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.164668 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-n4dzg" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.194293 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-748c574d75-bjlxb" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.196940 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9b6m\" (UniqueName: \"kubernetes.io/projected/861ab488-585c-407a-bfe6-97e8d01f20e6-kube-api-access-n9b6m\") pod \"designate-operator-controller-manager-7d74f4d695-c74zk\" (UID: \"861ab488-585c-407a-bfe6-97e8d01f20e6\") " pod="openstack-operators/designate-operator-controller-manager-7d74f4d695-c74zk" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.198942 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-56cf9c6b99-vvlp2"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.200393 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fqww\" (UniqueName: \"kubernetes.io/projected/663f9d51-4014-4b2d-b44d-96dca010a1f4-kube-api-access-9fqww\") pod \"heat-operator-controller-manager-8ff95898-2svcm\" (UID: \"663f9d51-4014-4b2d-b44d-96dca010a1f4\") " pod="openstack-operators/heat-operator-controller-manager-8ff95898-2svcm" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.203210 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z7v6\" (UniqueName: \"kubernetes.io/projected/9cdd1ad5-293a-456e-8313-e23a3140f8f5-kube-api-access-5z7v6\") pod \"horizon-operator-controller-manager-695847bc78-v4kx7\" (UID: \"9cdd1ad5-293a-456e-8313-e23a3140f8f5\") " pod="openstack-operators/horizon-operator-controller-manager-695847bc78-v4kx7" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.214974 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxgsq\" (UniqueName: \"kubernetes.io/projected/55a2358d-e838-4581-906b-ec7f1a3117bf-kube-api-access-qxgsq\") pod \"glance-operator-controller-manager-67b5d44b7f-8969b\" (UID: \"55a2358d-e838-4581-906b-ec7f1a3117bf\") " pod="openstack-operators/glance-operator-controller-manager-67b5d44b7f-8969b" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.215069 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-687b9cf756-tldms"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.216930 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-687b9cf756-tldms" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.237516 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54d766c9f9-57tlx"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.239004 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54d766c9f9-57tlx" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.239963 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d74f4d695-c74zk" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.247072 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-687b9cf756-tldms"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.250448 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-p4dvl" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.250699 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-wz5nt" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.251535 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwtgr\" (UniqueName: \"kubernetes.io/projected/f0b63357-310d-4b63-9ba9-212c0f3c6dd4-kube-api-access-nwtgr\") pod \"barbican-operator-controller-manager-6495d75b5-dpcb9\" (UID: \"f0b63357-310d-4b63-9ba9-212c0f3c6dd4\") " pod="openstack-operators/barbican-operator-controller-manager-6495d75b5-dpcb9" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.256039 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54d766c9f9-57tlx"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.264257 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-c7c776c96-gxh2d"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.265613 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-gxh2d" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.264582 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddczr\" (UniqueName: \"kubernetes.io/projected/2e6f46c0-8a81-4203-946a-9cc5d0217b02-kube-api-access-ddczr\") pod \"ironic-operator-controller-manager-9fc8d5567-6kksz\" (UID: \"2e6f46c0-8a81-4203-946a-9cc5d0217b02\") " pod="openstack-operators/ironic-operator-controller-manager-9fc8d5567-6kksz" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.278620 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tpx8\" (UniqueName: \"kubernetes.io/projected/76b136ef-8bd7-4695-807d-be77c22c87bd-kube-api-access-7tpx8\") pod \"keystone-operator-controller-manager-7bf498966c-bjf5h\" (UID: \"76b136ef-8bd7-4695-807d-be77c22c87bd\") " pod="openstack-operators/keystone-operator-controller-manager-7bf498966c-bjf5h" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.278673 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/248ebc15-872d-49f9-82e3-25814d7cc483-cert\") pod \"infra-operator-controller-manager-858cd69f49-lktsl\" (UID: \"248ebc15-872d-49f9-82e3-25814d7cc483\") " pod="openstack-operators/infra-operator-controller-manager-858cd69f49-lktsl" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.278846 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkpjm\" (UniqueName: \"kubernetes.io/projected/248ebc15-872d-49f9-82e3-25814d7cc483-kube-api-access-lkpjm\") pod \"infra-operator-controller-manager-858cd69f49-lktsl\" (UID: 
\"248ebc15-872d-49f9-82e3-25814d7cc483\") " pod="openstack-operators/infra-operator-controller-manager-858cd69f49-lktsl" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.280751 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxcwx\" (UniqueName: \"kubernetes.io/projected/ffa64e52-6eef-46e8-a535-d9797274440b-kube-api-access-nxcwx\") pod \"manila-operator-controller-manager-56cf9c6b99-vvlp2\" (UID: \"ffa64e52-6eef-46e8-a535-d9797274440b\") " pod="openstack-operators/manila-operator-controller-manager-56cf9c6b99-vvlp2" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.287591 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-p2r7x" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.287813 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-2bddq"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.290715 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-2bddq" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.297844 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-9wqqx" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.317131 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-c7c776c96-gxh2d"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.332237 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-2bddq"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.341024 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.342411 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.342936 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67b5d44b7f-8969b" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.348595 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.349075 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-hztxb" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.369007 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5f95c46c78-54kjl"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.370265 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-8ff95898-2svcm" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.371796 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5f95c46c78-54kjl" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.374005 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-7cf78" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.418611 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddczr\" (UniqueName: \"kubernetes.io/projected/2e6f46c0-8a81-4203-946a-9cc5d0217b02-kube-api-access-ddczr\") pod \"ironic-operator-controller-manager-9fc8d5567-6kksz\" (UID: \"2e6f46c0-8a81-4203-946a-9cc5d0217b02\") " pod="openstack-operators/ironic-operator-controller-manager-9fc8d5567-6kksz" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.419182 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5406253f-041d-4ec4-a710-809c3d267e52-cert\") pod \"openstack-baremetal-operator-controller-manager-6d776955-s7cf4\" (UID: \"5406253f-041d-4ec4-a710-809c3d267e52\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.419224 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwz5v\" (UniqueName: \"kubernetes.io/projected/40de8749-3b71-4a0d-8483-ef2512644475-kube-api-access-qwz5v\") pod \"ovn-operator-controller-manager-5f95c46c78-54kjl\" (UID: \"40de8749-3b71-4a0d-8483-ef2512644475\") " pod="openstack-operators/ovn-operator-controller-manager-5f95c46c78-54kjl" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.419301 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tpx8\" (UniqueName: \"kubernetes.io/projected/76b136ef-8bd7-4695-807d-be77c22c87bd-kube-api-access-7tpx8\") pod \"keystone-operator-controller-manager-7bf498966c-bjf5h\" (UID: \"76b136ef-8bd7-4695-807d-be77c22c87bd\") " pod="openstack-operators/keystone-operator-controller-manager-7bf498966c-bjf5h" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.419354 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/248ebc15-872d-49f9-82e3-25814d7cc483-cert\") pod \"infra-operator-controller-manager-858cd69f49-lktsl\" (UID: \"248ebc15-872d-49f9-82e3-25814d7cc483\") " pod="openstack-operators/infra-operator-controller-manager-858cd69f49-lktsl" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.419518 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkpjm\" (UniqueName: \"kubernetes.io/projected/248ebc15-872d-49f9-82e3-25814d7cc483-kube-api-access-lkpjm\") pod \"infra-operator-controller-manager-858cd69f49-lktsl\" (UID: \"248ebc15-872d-49f9-82e3-25814d7cc483\") " pod="openstack-operators/infra-operator-controller-manager-858cd69f49-lktsl" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.419621 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgjzn\" (UniqueName: \"kubernetes.io/projected/5406253f-041d-4ec4-a710-809c3d267e52-kube-api-access-lgjzn\") pod \"openstack-baremetal-operator-controller-manager-6d776955-s7cf4\" (UID: \"5406253f-041d-4ec4-a710-809c3d267e52\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4" Sep 29 09:55:36 crc 
kubenswrapper[4991]: I0929 09:55:36.419689 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6d87\" (UniqueName: \"kubernetes.io/projected/269a0f66-05b9-4bda-9125-e13b1a9264dc-kube-api-access-t6d87\") pod \"mariadb-operator-controller-manager-687b9cf756-tldms\" (UID: \"269a0f66-05b9-4bda-9125-e13b1a9264dc\") " pod="openstack-operators/mariadb-operator-controller-manager-687b9cf756-tldms" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.419726 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2h98\" (UniqueName: \"kubernetes.io/projected/9bac430d-9a3c-42e4-8aba-d175875a29ac-kube-api-access-m2h98\") pod \"nova-operator-controller-manager-c7c776c96-gxh2d\" (UID: \"9bac430d-9a3c-42e4-8aba-d175875a29ac\") " pod="openstack-operators/nova-operator-controller-manager-c7c776c96-gxh2d" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.419801 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxcwx\" (UniqueName: \"kubernetes.io/projected/ffa64e52-6eef-46e8-a535-d9797274440b-kube-api-access-nxcwx\") pod \"manila-operator-controller-manager-56cf9c6b99-vvlp2\" (UID: \"ffa64e52-6eef-46e8-a535-d9797274440b\") " pod="openstack-operators/manila-operator-controller-manager-56cf9c6b99-vvlp2" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.419849 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wltf\" (UniqueName: \"kubernetes.io/projected/43697432-9291-473b-add7-576d1db29307-kube-api-access-4wltf\") pod \"octavia-operator-controller-manager-76fcc6dc7c-2bddq\" (UID: \"43697432-9291-473b-add7-576d1db29307\") " pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-2bddq" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.419924 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnfqd\" (UniqueName: \"kubernetes.io/projected/9a454961-e67f-46ce-a5f0-f5bb3cff6b67-kube-api-access-fnfqd\") pod \"neutron-operator-controller-manager-54d766c9f9-57tlx\" (UID: \"9a454961-e67f-46ce-a5f0-f5bb3cff6b67\") " pod="openstack-operators/neutron-operator-controller-manager-54d766c9f9-57tlx" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.435246 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-695847bc78-v4kx7" Sep 29 09:55:36 crc kubenswrapper[4991]: E0929 09:55:36.437731 4991 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Sep 29 09:55:36 crc kubenswrapper[4991]: E0929 09:55:36.438073 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/248ebc15-872d-49f9-82e3-25814d7cc483-cert podName:248ebc15-872d-49f9-82e3-25814d7cc483 nodeName:}" failed. No retries permitted until 2025-09-29 09:55:36.938027444 +0000 UTC m=+1072.793955472 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/248ebc15-872d-49f9-82e3-25814d7cc483-cert") pod "infra-operator-controller-manager-858cd69f49-lktsl" (UID: "248ebc15-872d-49f9-82e3-25814d7cc483") : secret "infra-operator-webhook-server-cert" not found Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.478465 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5f95c46c78-54kjl"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.483615 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxcwx\" (UniqueName: \"kubernetes.io/projected/ffa64e52-6eef-46e8-a535-d9797274440b-kube-api-access-nxcwx\") pod \"manila-operator-controller-manager-56cf9c6b99-vvlp2\" (UID: \"ffa64e52-6eef-46e8-a535-d9797274440b\") " pod="openstack-operators/manila-operator-controller-manager-56cf9c6b99-vvlp2" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.488709 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.491415 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddczr\" (UniqueName: \"kubernetes.io/projected/2e6f46c0-8a81-4203-946a-9cc5d0217b02-kube-api-access-ddczr\") pod \"ironic-operator-controller-manager-9fc8d5567-6kksz\" (UID: \"2e6f46c0-8a81-4203-946a-9cc5d0217b02\") " pod="openstack-operators/ironic-operator-controller-manager-9fc8d5567-6kksz" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.492629 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkpjm\" (UniqueName: \"kubernetes.io/projected/248ebc15-872d-49f9-82e3-25814d7cc483-kube-api-access-lkpjm\") pod \"infra-operator-controller-manager-858cd69f49-lktsl\" (UID: \"248ebc15-872d-49f9-82e3-25814d7cc483\") " pod="openstack-operators/infra-operator-controller-manager-858cd69f49-lktsl" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.496629 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tpx8\" (UniqueName: \"kubernetes.io/projected/76b136ef-8bd7-4695-807d-be77c22c87bd-kube-api-access-7tpx8\") pod \"keystone-operator-controller-manager-7bf498966c-bjf5h\" (UID: \"76b136ef-8bd7-4695-807d-be77c22c87bd\") " pod="openstack-operators/keystone-operator-controller-manager-7bf498966c-bjf5h" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.503020 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6495d75b5-dpcb9" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.519773 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-774b97b48-bxq6l"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.521420 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-774b97b48-bxq6l" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.523218 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgjzn\" (UniqueName: \"kubernetes.io/projected/5406253f-041d-4ec4-a710-809c3d267e52-kube-api-access-lgjzn\") pod \"openstack-baremetal-operator-controller-manager-6d776955-s7cf4\" (UID: \"5406253f-041d-4ec4-a710-809c3d267e52\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.523268 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6d87\" (UniqueName: \"kubernetes.io/projected/269a0f66-05b9-4bda-9125-e13b1a9264dc-kube-api-access-t6d87\") pod \"mariadb-operator-controller-manager-687b9cf756-tldms\" (UID: \"269a0f66-05b9-4bda-9125-e13b1a9264dc\") " pod="openstack-operators/mariadb-operator-controller-manager-687b9cf756-tldms" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.523289 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2h98\" (UniqueName: \"kubernetes.io/projected/9bac430d-9a3c-42e4-8aba-d175875a29ac-kube-api-access-m2h98\") pod \"nova-operator-controller-manager-c7c776c96-gxh2d\" (UID: \"9bac430d-9a3c-42e4-8aba-d175875a29ac\") " pod="openstack-operators/nova-operator-controller-manager-c7c776c96-gxh2d" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.523324 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wltf\" (UniqueName: \"kubernetes.io/projected/43697432-9291-473b-add7-576d1db29307-kube-api-access-4wltf\") pod \"octavia-operator-controller-manager-76fcc6dc7c-2bddq\" (UID: \"43697432-9291-473b-add7-576d1db29307\") " pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-2bddq" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.523356 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnfqd\" (UniqueName: \"kubernetes.io/projected/9a454961-e67f-46ce-a5f0-f5bb3cff6b67-kube-api-access-fnfqd\") pod \"neutron-operator-controller-manager-54d766c9f9-57tlx\" (UID: \"9a454961-e67f-46ce-a5f0-f5bb3cff6b67\") " pod="openstack-operators/neutron-operator-controller-manager-54d766c9f9-57tlx" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.523388 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5406253f-041d-4ec4-a710-809c3d267e52-cert\") pod \"openstack-baremetal-operator-controller-manager-6d776955-s7cf4\" (UID: \"5406253f-041d-4ec4-a710-809c3d267e52\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.523410 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwz5v\" (UniqueName: \"kubernetes.io/projected/40de8749-3b71-4a0d-8483-ef2512644475-kube-api-access-qwz5v\") pod \"ovn-operator-controller-manager-5f95c46c78-54kjl\" (UID: \"40de8749-3b71-4a0d-8483-ef2512644475\") " pod="openstack-operators/ovn-operator-controller-manager-5f95c46c78-54kjl" Sep 29 09:55:36 crc kubenswrapper[4991]: E0929 09:55:36.523739 4991 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not 
found Sep 29 09:55:36 crc kubenswrapper[4991]: E0929 09:55:36.523806 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5406253f-041d-4ec4-a710-809c3d267e52-cert podName:5406253f-041d-4ec4-a710-809c3d267e52 nodeName:}" failed. No retries permitted until 2025-09-29 09:55:37.023788516 +0000 UTC m=+1072.879716544 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5406253f-041d-4ec4-a710-809c3d267e52-cert") pod "openstack-baremetal-operator-controller-manager-6d776955-s7cf4" (UID: "5406253f-041d-4ec4-a710-809c3d267e52") : secret "openstack-baremetal-operator-webhook-server-cert" not found Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.539991 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-bc7dc7bd9-k2q5h"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.541376 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-k2q5h" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.544595 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-87bdf" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.553541 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-nm8kx" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.591999 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bc7dc7bd9-k2q5h"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.611668 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2h98\" (UniqueName: \"kubernetes.io/projected/9bac430d-9a3c-42e4-8aba-d175875a29ac-kube-api-access-m2h98\") pod \"nova-operator-controller-manager-c7c776c96-gxh2d\" (UID: \"9bac430d-9a3c-42e4-8aba-d175875a29ac\") " pod="openstack-operators/nova-operator-controller-manager-c7c776c96-gxh2d" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.616017 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-774b97b48-bxq6l"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.620656 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6d87\" (UniqueName: \"kubernetes.io/projected/269a0f66-05b9-4bda-9125-e13b1a9264dc-kube-api-access-t6d87\") pod \"mariadb-operator-controller-manager-687b9cf756-tldms\" (UID: \"269a0f66-05b9-4bda-9125-e13b1a9264dc\") " pod="openstack-operators/mariadb-operator-controller-manager-687b9cf756-tldms" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.622606 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwz5v\" (UniqueName: \"kubernetes.io/projected/40de8749-3b71-4a0d-8483-ef2512644475-kube-api-access-qwz5v\") pod \"ovn-operator-controller-manager-5f95c46c78-54kjl\" (UID: \"40de8749-3b71-4a0d-8483-ef2512644475\") " pod="openstack-operators/ovn-operator-controller-manager-5f95c46c78-54kjl" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.623297 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnfqd\" (UniqueName: \"kubernetes.io/projected/9a454961-e67f-46ce-a5f0-f5bb3cff6b67-kube-api-access-fnfqd\") pod 
\"neutron-operator-controller-manager-54d766c9f9-57tlx\" (UID: \"9a454961-e67f-46ce-a5f0-f5bb3cff6b67\") " pod="openstack-operators/neutron-operator-controller-manager-54d766c9f9-57tlx" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.623580 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgjzn\" (UniqueName: \"kubernetes.io/projected/5406253f-041d-4ec4-a710-809c3d267e52-kube-api-access-lgjzn\") pod \"openstack-baremetal-operator-controller-manager-6d776955-s7cf4\" (UID: \"5406253f-041d-4ec4-a710-809c3d267e52\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.625456 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wltf\" (UniqueName: \"kubernetes.io/projected/43697432-9291-473b-add7-576d1db29307-kube-api-access-4wltf\") pod \"octavia-operator-controller-manager-76fcc6dc7c-2bddq\" (UID: \"43697432-9291-473b-add7-576d1db29307\") " pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-2bddq" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.625729 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-gxh2d" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.626746 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccpbd\" (UniqueName: \"kubernetes.io/projected/36d942bf-7f08-474b-9386-41b1a4d32e01-kube-api-access-ccpbd\") pod \"swift-operator-controller-manager-bc7dc7bd9-k2q5h\" (UID: \"36d942bf-7f08-474b-9386-41b1a4d32e01\") " pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-k2q5h" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.626799 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxhj5\" (UniqueName: \"kubernetes.io/projected/b33b3e90-d2bc-4c24-bc6e-8398c15f597d-kube-api-access-mxhj5\") pod \"placement-operator-controller-manager-774b97b48-bxq6l\" (UID: \"b33b3e90-d2bc-4c24-bc6e-8398c15f597d\") " pod="openstack-operators/placement-operator-controller-manager-774b97b48-bxq6l" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.639333 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-56cf9c6b99-vvlp2" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.653585 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-687b9cf756-tldms" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.654009 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-2bddq" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.655849 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-889c789f9-9lbgt"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.657701 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-889c789f9-9lbgt" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.672386 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-qw7rg" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.705291 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-889c789f9-9lbgt"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.707205 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54d766c9f9-57tlx" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.730183 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d6lf\" (UniqueName: \"kubernetes.io/projected/927b3fac-f2e5-4009-9991-14615d5d0cc7-kube-api-access-7d6lf\") pod \"telemetry-operator-controller-manager-889c789f9-9lbgt\" (UID: \"927b3fac-f2e5-4009-9991-14615d5d0cc7\") " pod="openstack-operators/telemetry-operator-controller-manager-889c789f9-9lbgt" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.730283 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccpbd\" (UniqueName: \"kubernetes.io/projected/36d942bf-7f08-474b-9386-41b1a4d32e01-kube-api-access-ccpbd\") pod \"swift-operator-controller-manager-bc7dc7bd9-k2q5h\" (UID: \"36d942bf-7f08-474b-9386-41b1a4d32e01\") " pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-k2q5h" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.730335 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxhj5\" (UniqueName: \"kubernetes.io/projected/b33b3e90-d2bc-4c24-bc6e-8398c15f597d-kube-api-access-mxhj5\") pod \"placement-operator-controller-manager-774b97b48-bxq6l\" (UID: \"b33b3e90-d2bc-4c24-bc6e-8398c15f597d\") " pod="openstack-operators/placement-operator-controller-manager-774b97b48-bxq6l" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.753323 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-9fc8d5567-6kksz" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.787106 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-f66b554c6-f94sw"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.788398 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccpbd\" (UniqueName: \"kubernetes.io/projected/36d942bf-7f08-474b-9386-41b1a4d32e01-kube-api-access-ccpbd\") pod \"swift-operator-controller-manager-bc7dc7bd9-k2q5h\" (UID: \"36d942bf-7f08-474b-9386-41b1a4d32e01\") " pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-k2q5h" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.788476 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7bf498966c-bjf5h" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.789658 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-f66b554c6-f94sw" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.792790 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-dsb5j" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.796591 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxhj5\" (UniqueName: \"kubernetes.io/projected/b33b3e90-d2bc-4c24-bc6e-8398c15f597d-kube-api-access-mxhj5\") pod \"placement-operator-controller-manager-774b97b48-bxq6l\" (UID: \"b33b3e90-d2bc-4c24-bc6e-8398c15f597d\") " pod="openstack-operators/placement-operator-controller-manager-774b97b48-bxq6l" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.796658 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-f66b554c6-f94sw"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.831594 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn7lx\" (UniqueName: \"kubernetes.io/projected/51c758a6-817d-41d2-958b-2945bda7e082-kube-api-access-fn7lx\") pod \"test-operator-controller-manager-f66b554c6-f94sw\" (UID: \"51c758a6-817d-41d2-958b-2945bda7e082\") " pod="openstack-operators/test-operator-controller-manager-f66b554c6-f94sw" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.831666 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d6lf\" (UniqueName: \"kubernetes.io/projected/927b3fac-f2e5-4009-9991-14615d5d0cc7-kube-api-access-7d6lf\") pod \"telemetry-operator-controller-manager-889c789f9-9lbgt\" (UID: \"927b3fac-f2e5-4009-9991-14615d5d0cc7\") " pod="openstack-operators/telemetry-operator-controller-manager-889c789f9-9lbgt" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.836050 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-76669f99c-chqdw"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.837431 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-chqdw" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.847480 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5f95c46c78-54kjl" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.855562 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d6lf\" (UniqueName: \"kubernetes.io/projected/927b3fac-f2e5-4009-9991-14615d5d0cc7-kube-api-access-7d6lf\") pod \"telemetry-operator-controller-manager-889c789f9-9lbgt\" (UID: \"927b3fac-f2e5-4009-9991-14615d5d0cc7\") " pod="openstack-operators/telemetry-operator-controller-manager-889c789f9-9lbgt" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.857107 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-drtnc" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.873477 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-k2q5h" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.882691 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-774b97b48-bxq6l" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.893464 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-889c789f9-9lbgt" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.907085 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-76669f99c-chqdw"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.935077 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v5x4\" (UniqueName: \"kubernetes.io/projected/26d1e560-b3f8-43ed-af68-4834bf60e6e3-kube-api-access-4v5x4\") pod \"watcher-operator-controller-manager-76669f99c-chqdw\" (UID: \"26d1e560-b3f8-43ed-af68-4834bf60e6e3\") " pod="openstack-operators/watcher-operator-controller-manager-76669f99c-chqdw" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.936109 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7lx\" (UniqueName: \"kubernetes.io/projected/51c758a6-817d-41d2-958b-2945bda7e082-kube-api-access-fn7lx\") pod \"test-operator-controller-manager-f66b554c6-f94sw\" (UID: \"51c758a6-817d-41d2-958b-2945bda7e082\") " pod="openstack-operators/test-operator-controller-manager-f66b554c6-f94sw" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.992352 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-98b5bb4f8-zkjlm"] Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.993852 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-98b5bb4f8-zkjlm" Sep 29 09:55:36 crc kubenswrapper[4991]: I0929 09:55:36.999634 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn7lx\" (UniqueName: \"kubernetes.io/projected/51c758a6-817d-41d2-958b-2945bda7e082-kube-api-access-fn7lx\") pod \"test-operator-controller-manager-f66b554c6-f94sw\" (UID: \"51c758a6-817d-41d2-958b-2945bda7e082\") " pod="openstack-operators/test-operator-controller-manager-f66b554c6-f94sw" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.000594 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-qln8r" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.026589 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.043644 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5406253f-041d-4ec4-a710-809c3d267e52-cert\") pod \"openstack-baremetal-operator-controller-manager-6d776955-s7cf4\" (UID: \"5406253f-041d-4ec4-a710-809c3d267e52\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.043694 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-845tw\" (UniqueName: \"kubernetes.io/projected/1a937ee6-10b5-419c-8bab-ca067ab45efa-kube-api-access-845tw\") pod \"openstack-operator-controller-manager-98b5bb4f8-zkjlm\" (UID: \"1a937ee6-10b5-419c-8bab-ca067ab45efa\") " 
pod="openstack-operators/openstack-operator-controller-manager-98b5bb4f8-zkjlm" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.043722 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/248ebc15-872d-49f9-82e3-25814d7cc483-cert\") pod \"infra-operator-controller-manager-858cd69f49-lktsl\" (UID: \"248ebc15-872d-49f9-82e3-25814d7cc483\") " pod="openstack-operators/infra-operator-controller-manager-858cd69f49-lktsl" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.043744 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v5x4\" (UniqueName: \"kubernetes.io/projected/26d1e560-b3f8-43ed-af68-4834bf60e6e3-kube-api-access-4v5x4\") pod \"watcher-operator-controller-manager-76669f99c-chqdw\" (UID: \"26d1e560-b3f8-43ed-af68-4834bf60e6e3\") " pod="openstack-operators/watcher-operator-controller-manager-76669f99c-chqdw" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.043789 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1a937ee6-10b5-419c-8bab-ca067ab45efa-cert\") pod \"openstack-operator-controller-manager-98b5bb4f8-zkjlm\" (UID: \"1a937ee6-10b5-419c-8bab-ca067ab45efa\") " pod="openstack-operators/openstack-operator-controller-manager-98b5bb4f8-zkjlm" Sep 29 09:55:37 crc kubenswrapper[4991]: E0929 09:55:37.044630 4991 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Sep 29 09:55:37 crc kubenswrapper[4991]: E0929 09:55:37.044675 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5406253f-041d-4ec4-a710-809c3d267e52-cert podName:5406253f-041d-4ec4-a710-809c3d267e52 nodeName:}" failed. No retries permitted until 2025-09-29 09:55:38.044661855 +0000 UTC m=+1073.900589883 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5406253f-041d-4ec4-a710-809c3d267e52-cert") pod "openstack-baremetal-operator-controller-manager-6d776955-s7cf4" (UID: "5406253f-041d-4ec4-a710-809c3d267e52") : secret "openstack-baremetal-operator-webhook-server-cert" not found Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.055684 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-98b5bb4f8-zkjlm"] Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.087368 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/248ebc15-872d-49f9-82e3-25814d7cc483-cert\") pod \"infra-operator-controller-manager-858cd69f49-lktsl\" (UID: \"248ebc15-872d-49f9-82e3-25814d7cc483\") " pod="openstack-operators/infra-operator-controller-manager-858cd69f49-lktsl" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.090987 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-gsgkm"] Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.092110 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-gsgkm" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.094583 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v5x4\" (UniqueName: \"kubernetes.io/projected/26d1e560-b3f8-43ed-af68-4834bf60e6e3-kube-api-access-4v5x4\") pod \"watcher-operator-controller-manager-76669f99c-chqdw\" (UID: \"26d1e560-b3f8-43ed-af68-4834bf60e6e3\") " pod="openstack-operators/watcher-operator-controller-manager-76669f99c-chqdw" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.117577 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-nj766" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.130796 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-gsgkm"] Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.145900 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-845tw\" (UniqueName: \"kubernetes.io/projected/1a937ee6-10b5-419c-8bab-ca067ab45efa-kube-api-access-845tw\") pod \"openstack-operator-controller-manager-98b5bb4f8-zkjlm\" (UID: \"1a937ee6-10b5-419c-8bab-ca067ab45efa\") " pod="openstack-operators/openstack-operator-controller-manager-98b5bb4f8-zkjlm" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.145994 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1a937ee6-10b5-419c-8bab-ca067ab45efa-cert\") pod \"openstack-operator-controller-manager-98b5bb4f8-zkjlm\" (UID: \"1a937ee6-10b5-419c-8bab-ca067ab45efa\") " pod="openstack-operators/openstack-operator-controller-manager-98b5bb4f8-zkjlm" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.146057 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc45q\" (UniqueName: \"kubernetes.io/projected/794146c2-be69-4a63-a485-7435c27e9f14-kube-api-access-vc45q\") pod \"rabbitmq-cluster-operator-manager-79d8469568-gsgkm\" (UID: \"794146c2-be69-4a63-a485-7435c27e9f14\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-gsgkm" Sep 29 09:55:37 crc kubenswrapper[4991]: E0929 09:55:37.146418 4991 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Sep 29 09:55:37 crc kubenswrapper[4991]: E0929 09:55:37.146460 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a937ee6-10b5-419c-8bab-ca067ab45efa-cert podName:1a937ee6-10b5-419c-8bab-ca067ab45efa nodeName:}" failed. No retries permitted until 2025-09-29 09:55:37.646445328 +0000 UTC m=+1073.502373356 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1a937ee6-10b5-419c-8bab-ca067ab45efa-cert") pod "openstack-operator-controller-manager-98b5bb4f8-zkjlm" (UID: "1a937ee6-10b5-419c-8bab-ca067ab45efa") : secret "webhook-server-cert" not found Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.179465 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-845tw\" (UniqueName: \"kubernetes.io/projected/1a937ee6-10b5-419c-8bab-ca067ab45efa-kube-api-access-845tw\") pod \"openstack-operator-controller-manager-98b5bb4f8-zkjlm\" (UID: \"1a937ee6-10b5-419c-8bab-ca067ab45efa\") " pod="openstack-operators/openstack-operator-controller-manager-98b5bb4f8-zkjlm" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.240981 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-f66b554c6-f94sw" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.275902 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc45q\" (UniqueName: \"kubernetes.io/projected/794146c2-be69-4a63-a485-7435c27e9f14-kube-api-access-vc45q\") pod \"rabbitmq-cluster-operator-manager-79d8469568-gsgkm\" (UID: \"794146c2-be69-4a63-a485-7435c27e9f14\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-gsgkm" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.282122 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-chqdw" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.282557 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-748c574d75-bjlxb"] Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.304247 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc45q\" (UniqueName: \"kubernetes.io/projected/794146c2-be69-4a63-a485-7435c27e9f14-kube-api-access-vc45q\") pod \"rabbitmq-cluster-operator-manager-79d8469568-gsgkm\" (UID: \"794146c2-be69-4a63-a485-7435c27e9f14\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-gsgkm" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.316125 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-858cd69f49-lktsl" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.330483 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-gsgkm" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.512023 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-748c574d75-bjlxb" event={"ID":"64c94e01-ecee-47b4-ab4b-182085a9dce5","Type":"ContainerStarted","Data":"752ade7f66d9980958aabac625e20bd12dd7002319b0932f3b4bf2b5fccae83d"} Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.578107 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d74f4d695-c74zk"] Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.687420 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1a937ee6-10b5-419c-8bab-ca067ab45efa-cert\") pod \"openstack-operator-controller-manager-98b5bb4f8-zkjlm\" (UID: \"1a937ee6-10b5-419c-8bab-ca067ab45efa\") " pod="openstack-operators/openstack-operator-controller-manager-98b5bb4f8-zkjlm" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.693777 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1a937ee6-10b5-419c-8bab-ca067ab45efa-cert\") pod \"openstack-operator-controller-manager-98b5bb4f8-zkjlm\" (UID: \"1a937ee6-10b5-419c-8bab-ca067ab45efa\") " pod="openstack-operators/openstack-operator-controller-manager-98b5bb4f8-zkjlm" Sep 29 09:55:37 crc kubenswrapper[4991]: I0929 09:55:37.853730 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-98b5bb4f8-zkjlm" Sep 29 09:55:38 crc kubenswrapper[4991]: I0929 09:55:38.098546 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5406253f-041d-4ec4-a710-809c3d267e52-cert\") pod \"openstack-baremetal-operator-controller-manager-6d776955-s7cf4\" (UID: \"5406253f-041d-4ec4-a710-809c3d267e52\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4" Sep 29 09:55:38 crc kubenswrapper[4991]: I0929 09:55:38.108658 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5406253f-041d-4ec4-a710-809c3d267e52-cert\") pod \"openstack-baremetal-operator-controller-manager-6d776955-s7cf4\" (UID: \"5406253f-041d-4ec4-a710-809c3d267e52\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4" Sep 29 09:55:38 crc kubenswrapper[4991]: I0929 09:55:38.189250 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-c7c776c96-gxh2d"] Sep 29 09:55:38 crc kubenswrapper[4991]: W0929 09:55:38.213424 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bac430d_9a3c_42e4_8aba_d175875a29ac.slice/crio-3bef1632321a1ebb5e56e4e7aa414d721eb4263437cfc51e682cfba6330962d3 WatchSource:0}: Error finding container 3bef1632321a1ebb5e56e4e7aa414d721eb4263437cfc51e682cfba6330962d3: Status 404 returned error can't find the container with id 3bef1632321a1ebb5e56e4e7aa414d721eb4263437cfc51e682cfba6330962d3 Sep 29 09:55:38 crc kubenswrapper[4991]: I0929 09:55:38.273450 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4" Sep 29 09:55:38 crc kubenswrapper[4991]: I0929 09:55:38.276068 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6495d75b5-dpcb9"] Sep 29 09:55:38 crc kubenswrapper[4991]: I0929 09:55:38.304723 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67b5d44b7f-8969b"] Sep 29 09:55:38 crc kubenswrapper[4991]: I0929 09:55:38.326605 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-8ff95898-2svcm"] Sep 29 09:55:38 crc kubenswrapper[4991]: W0929 09:55:38.333795 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55a2358d_e838_4581_906b_ec7f1a3117bf.slice/crio-cfdbb19023b5fecca9946d744296c045969ada818d87c746542b52576eb92c03 WatchSource:0}: Error finding container cfdbb19023b5fecca9946d744296c045969ada818d87c746542b52576eb92c03: Status 404 returned error can't find the container with id cfdbb19023b5fecca9946d744296c045969ada818d87c746542b52576eb92c03 Sep 29 09:55:38 crc kubenswrapper[4991]: W0929 09:55:38.352259 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod663f9d51_4014_4b2d_b44d_96dca010a1f4.slice/crio-e5df131005dcf36f3fdbd9535037a5d03d0e73b48b6848d671534bffa8b6c1bd WatchSource:0}: Error finding container e5df131005dcf36f3fdbd9535037a5d03d0e73b48b6848d671534bffa8b6c1bd: Status 404 returned error can't find the container with id e5df131005dcf36f3fdbd9535037a5d03d0e73b48b6848d671534bffa8b6c1bd Sep 29 09:55:38 crc kubenswrapper[4991]: I0929 09:55:38.379689 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-695847bc78-v4kx7"] Sep 29 09:55:38 crc kubenswrapper[4991]: W0929 09:55:38.394832 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cdd1ad5_293a_456e_8313_e23a3140f8f5.slice/crio-4f73434c83ef828a5cecc47b1f5ebc6dfaadfa01825e7af5144a94c4db8bac67 WatchSource:0}: Error finding container 4f73434c83ef828a5cecc47b1f5ebc6dfaadfa01825e7af5144a94c4db8bac67: Status 404 returned error can't find the container with id 4f73434c83ef828a5cecc47b1f5ebc6dfaadfa01825e7af5144a94c4db8bac67 Sep 29 09:55:38 crc kubenswrapper[4991]: I0929 09:55:38.542225 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d74f4d695-c74zk" event={"ID":"861ab488-585c-407a-bfe6-97e8d01f20e6","Type":"ContainerStarted","Data":"0641ce4ab088b8282a49049950b8aa1ff00c99bc2524c2c785031fc317d70f29"} Sep 29 09:55:38 crc kubenswrapper[4991]: I0929 09:55:38.555369 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67b5d44b7f-8969b" event={"ID":"55a2358d-e838-4581-906b-ec7f1a3117bf","Type":"ContainerStarted","Data":"cfdbb19023b5fecca9946d744296c045969ada818d87c746542b52576eb92c03"} Sep 29 09:55:38 crc kubenswrapper[4991]: I0929 09:55:38.558994 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-gxh2d" 
event={"ID":"9bac430d-9a3c-42e4-8aba-d175875a29ac","Type":"ContainerStarted","Data":"3bef1632321a1ebb5e56e4e7aa414d721eb4263437cfc51e682cfba6330962d3"} Sep 29 09:55:38 crc kubenswrapper[4991]: I0929 09:55:38.564612 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-8ff95898-2svcm" event={"ID":"663f9d51-4014-4b2d-b44d-96dca010a1f4","Type":"ContainerStarted","Data":"e5df131005dcf36f3fdbd9535037a5d03d0e73b48b6848d671534bffa8b6c1bd"} Sep 29 09:55:38 crc kubenswrapper[4991]: I0929 09:55:38.568683 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6495d75b5-dpcb9" event={"ID":"f0b63357-310d-4b63-9ba9-212c0f3c6dd4","Type":"ContainerStarted","Data":"4fcec57cfe603969feeb6c7a09242da606c7b7833535a290f1c8625858ed01c1"} Sep 29 09:55:38 crc kubenswrapper[4991]: I0929 09:55:38.569354 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-695847bc78-v4kx7" event={"ID":"9cdd1ad5-293a-456e-8313-e23a3140f8f5","Type":"ContainerStarted","Data":"4f73434c83ef828a5cecc47b1f5ebc6dfaadfa01825e7af5144a94c4db8bac67"} Sep 29 09:55:38 crc kubenswrapper[4991]: I0929 09:55:38.609816 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-56cf9c6b99-vvlp2"] Sep 29 09:55:38 crc kubenswrapper[4991]: I0929 09:55:38.738137 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-9fc8d5567-6kksz"] Sep 29 09:55:38 crc kubenswrapper[4991]: I0929 09:55:38.761424 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5f95c46c78-54kjl"] Sep 29 09:55:38 crc kubenswrapper[4991]: I0929 09:55:38.775566 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7bf498966c-bjf5h"] Sep 29 09:55:39 crc kubenswrapper[4991]: I0929 09:55:39.531501 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54d766c9f9-57tlx"] Sep 29 09:55:39 crc kubenswrapper[4991]: I0929 09:55:39.587828 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7bf498966c-bjf5h" event={"ID":"76b136ef-8bd7-4695-807d-be77c22c87bd","Type":"ContainerStarted","Data":"79df94f2ec4192a36f9db4eb2dc79bf112191451fac6764998ed9bd76c4560e8"} Sep 29 09:55:39 crc kubenswrapper[4991]: I0929 09:55:39.597664 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54d766c9f9-57tlx" event={"ID":"9a454961-e67f-46ce-a5f0-f5bb3cff6b67","Type":"ContainerStarted","Data":"b2cc164f6645327cc0d832d25a66a2eb7111b2a4968e8e94caac9eef30817416"} Sep 29 09:55:39 crc kubenswrapper[4991]: I0929 09:55:39.615932 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-56cf9c6b99-vvlp2" event={"ID":"ffa64e52-6eef-46e8-a535-d9797274440b","Type":"ContainerStarted","Data":"9b905aa13b48a7615c135adb76e3e92ee973f4b5ffeb14b637b4d4625cf3ed22"} Sep 29 09:55:39 crc kubenswrapper[4991]: I0929 09:55:39.628605 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-98b5bb4f8-zkjlm"] Sep 29 09:55:39 crc kubenswrapper[4991]: I0929 09:55:39.635591 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-gsgkm"] Sep 29 09:55:39 crc kubenswrapper[4991]: I0929 09:55:39.637162 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-9fc8d5567-6kksz" event={"ID":"2e6f46c0-8a81-4203-946a-9cc5d0217b02","Type":"ContainerStarted","Data":"89017d9bbf6beb883137d2743413ddda4e7cfec61156b474cbf25358174741ea"} Sep 29 09:55:39 crc kubenswrapper[4991]: I0929 09:55:39.650086 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5f95c46c78-54kjl" event={"ID":"40de8749-3b71-4a0d-8483-ef2512644475","Type":"ContainerStarted","Data":"653f9a8055095ff078fd46a93f5da721e47e3264c6ff12c282438686e1f5172a"} Sep 29 09:55:39 crc kubenswrapper[4991]: I0929 09:55:39.672456 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bc7dc7bd9-k2q5h"] Sep 29 09:55:39 crc kubenswrapper[4991]: I0929 09:55:39.758360 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-2bddq"] Sep 29 09:55:39 crc kubenswrapper[4991]: I0929 09:55:39.778018 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-889c789f9-9lbgt"] Sep 29 09:55:39 crc kubenswrapper[4991]: W0929 09:55:39.791372 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36d942bf_7f08_474b_9386_41b1a4d32e01.slice/crio-c673ef2539ef9a10a84a7f05bde029024add797f666194fc030677543a26977b WatchSource:0}: Error finding container c673ef2539ef9a10a84a7f05bde029024add797f666194fc030677543a26977b: Status 404 returned error can't find the container with id c673ef2539ef9a10a84a7f05bde029024add797f666194fc030677543a26977b Sep 29 09:55:39 crc kubenswrapper[4991]: W0929 09:55:39.792309 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26d1e560_b3f8_43ed_af68_4834bf60e6e3.slice/crio-bcc4aa26c398cc7172ba7f70504e6a4f282c57e7c7a3ad0274832942142b8fb4 WatchSource:0}: Error finding container bcc4aa26c398cc7172ba7f70504e6a4f282c57e7c7a3ad0274832942142b8fb4: Status 404 returned error can't find the container with id bcc4aa26c398cc7172ba7f70504e6a4f282c57e7c7a3ad0274832942142b8fb4 Sep 29 09:55:39 crc kubenswrapper[4991]: I0929 09:55:39.804742 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-774b97b48-bxq6l"] Sep 29 09:55:39 crc kubenswrapper[4991]: W0929 09:55:39.810332 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod269a0f66_05b9_4bda_9125_e13b1a9264dc.slice/crio-40196596ab0fddae74855f09c12b2ed3e0419224ec5b472eff21f2458294d449 WatchSource:0}: Error finding container 40196596ab0fddae74855f09c12b2ed3e0419224ec5b472eff21f2458294d449: Status 404 returned error can't find the container with id 40196596ab0fddae74855f09c12b2ed3e0419224ec5b472eff21f2458294d449 Sep 29 09:55:39 crc kubenswrapper[4991]: I0929 09:55:39.843117 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-858cd69f49-lktsl"] Sep 29 09:55:39 crc kubenswrapper[4991]: E0929 09:55:39.864583 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:87a522d480797f54499bcd1c4a48837e1b17c33d4cc43e99ed7a53b8cedb17c7,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lkpjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-858cd69f49-lktsl_openstack-operators(248ebc15-872d-49f9-82e3-25814d7cc483): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Sep 29 09:55:39 crc kubenswrapper[4991]: I0929 09:55:39.866829 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-76669f99c-chqdw"] Sep 29 09:55:39 crc kubenswrapper[4991]: I0929 09:55:39.888983 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-f66b554c6-f94sw"] Sep 29 09:55:39 crc kubenswrapper[4991]: I0929 09:55:39.912009 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-687b9cf756-tldms"] Sep 29 09:55:39 crc kubenswrapper[4991]: I0929 09:55:39.923743 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4"] Sep 29 09:55:39 crc kubenswrapper[4991]: W0929 09:55:39.941360 4991 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5406253f_041d_4ec4_a710_809c3d267e52.slice/crio-0b16a5fcf264791d7ebca341f438779c7024d5aa3b2ca74d9191b8aae8ef2667 WatchSource:0}: Error finding container 0b16a5fcf264791d7ebca341f438779c7024d5aa3b2ca74d9191b8aae8ef2667: Status 404 returned error can't find the container with id 0b16a5fcf264791d7ebca341f438779c7024d5aa3b2ca74d9191b8aae8ef2667 Sep 29 09:55:39 crc kubenswrapper[4991]: E0929 09:55:39.959590 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e3f947e9034a951620a76eaf41ceec95eefcef0eacb251b10993d6820d5e1af6,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_U
RL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Nam
e:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-cento
s9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_LIGHTSPEED_IMAGE_URL_DEFAULT,Value:quay.io/openstack-lightspeed/rag-content:os-docs-2024.2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/pod
ified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lgjzn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-6d776955-s7cf4_openstack-operators(5406253f-041d-4ec4-a710-809c3d267e52): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Sep 29 09:55:40 crc kubenswrapper[4991]: E0929 09:55:40.220292 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/infra-operator-controller-manager-858cd69f49-lktsl" podUID="248ebc15-872d-49f9-82e3-25814d7cc483" Sep 29 09:55:40 crc kubenswrapper[4991]: E0929 09:55:40.255565 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4" podUID="5406253f-041d-4ec4-a710-809c3d267e52" Sep 29 09:55:40 crc kubenswrapper[4991]: I0929 09:55:40.686173 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
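Note: "pull QPS exceeded" is kubelet's own image-pull throttle, not a registry error. Image pulls are gated by a token-bucket rate limiter configured by registryPullQPS and registryBurst in the KubeletConfiguration (the defaults are 5 QPS with a burst of 10); a pull that finds the bucket empty fails immediately with this message and is retried later by the pod worker, which is consistent with many operator pods starting at once here. A minimal sketch of that pattern using the same client-go flowcontrol package kubelet builds on (the loop and counts are illustrative only, not kubelet code):

    package main

    import (
        "fmt"

        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        // Token bucket with the kubelet defaults: 5 QPS, burst 10.
        limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
        for i := 0; i < 15; i++ {
            if limiter.TryAccept() {
                fmt.Printf("pull %d: allowed\n", i)
            } else {
                // This is the condition kubelet reports as "pull QPS exceeded".
                fmt.Printf("pull %d: pull QPS exceeded\n", i)
            }
        }
    }
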
pod="openstack-operators/test-operator-controller-manager-f66b554c6-f94sw" event={"ID":"51c758a6-817d-41d2-958b-2945bda7e082","Type":"ContainerStarted","Data":"e1f9e5f421610aa44c45f845dfb379a227f20e0da5e3a9486b32ea906f5bd13c"} Sep 29 09:55:40 crc kubenswrapper[4991]: I0929 09:55:40.687595 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-chqdw" event={"ID":"26d1e560-b3f8-43ed-af68-4834bf60e6e3","Type":"ContainerStarted","Data":"bcc4aa26c398cc7172ba7f70504e6a4f282c57e7c7a3ad0274832942142b8fb4"} Sep 29 09:55:40 crc kubenswrapper[4991]: I0929 09:55:40.688891 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-gsgkm" event={"ID":"794146c2-be69-4a63-a485-7435c27e9f14","Type":"ContainerStarted","Data":"a738407940f1c3f4870d9bf84b081594c681b9c6ed5dfcafb6b51dc93695e641"} Sep 29 09:55:40 crc kubenswrapper[4991]: I0929 09:55:40.697207 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-858cd69f49-lktsl" event={"ID":"248ebc15-872d-49f9-82e3-25814d7cc483","Type":"ContainerStarted","Data":"0be110c4c3197ba379e01572a6c0f7f07c908534fc2115061d2625cb3fe02ce2"} Sep 29 09:55:40 crc kubenswrapper[4991]: I0929 09:55:40.697257 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-858cd69f49-lktsl" event={"ID":"248ebc15-872d-49f9-82e3-25814d7cc483","Type":"ContainerStarted","Data":"457cea08cc7e979dd179f97225c69a93b9338bb4fcfe0565223de34119bd242f"} Sep 29 09:55:40 crc kubenswrapper[4991]: E0929 09:55:40.704086 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:87a522d480797f54499bcd1c4a48837e1b17c33d4cc43e99ed7a53b8cedb17c7\\\"\"" pod="openstack-operators/infra-operator-controller-manager-858cd69f49-lktsl" podUID="248ebc15-872d-49f9-82e3-25814d7cc483" Sep 29 09:55:40 crc kubenswrapper[4991]: I0929 09:55:40.706124 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-687b9cf756-tldms" event={"ID":"269a0f66-05b9-4bda-9125-e13b1a9264dc","Type":"ContainerStarted","Data":"40196596ab0fddae74855f09c12b2ed3e0419224ec5b472eff21f2458294d449"} Sep 29 09:55:40 crc kubenswrapper[4991]: I0929 09:55:40.718970 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-98b5bb4f8-zkjlm" event={"ID":"1a937ee6-10b5-419c-8bab-ca067ab45efa","Type":"ContainerStarted","Data":"05c8ff6ec8848fd12ddab8bb004ace601a97264e0adb75dcd26d95d366026e3b"} Sep 29 09:55:40 crc kubenswrapper[4991]: I0929 09:55:40.719040 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-98b5bb4f8-zkjlm" event={"ID":"1a937ee6-10b5-419c-8bab-ca067ab45efa","Type":"ContainerStarted","Data":"6380432939a1a9ca5eae2123f34bbe50e3605925e9d3d8ace5da36a3a1d6e8fb"} Sep 29 09:55:40 crc kubenswrapper[4991]: I0929 09:55:40.719077 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-98b5bb4f8-zkjlm" event={"ID":"1a937ee6-10b5-419c-8bab-ca067ab45efa","Type":"ContainerStarted","Data":"6d0748fd47f63562c938411c7e27a6d5a915e93ca61d9239884ae445e2c4215e"} Sep 29 09:55:40 crc kubenswrapper[4991]: I0929 09:55:40.719637 4991 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-98b5bb4f8-zkjlm" Sep 29 09:55:40 crc kubenswrapper[4991]: I0929 09:55:40.724281 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-k2q5h" event={"ID":"36d942bf-7f08-474b-9386-41b1a4d32e01","Type":"ContainerStarted","Data":"c673ef2539ef9a10a84a7f05bde029024add797f666194fc030677543a26977b"} Sep 29 09:55:40 crc kubenswrapper[4991]: I0929 09:55:40.726518 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4" event={"ID":"5406253f-041d-4ec4-a710-809c3d267e52","Type":"ContainerStarted","Data":"330ffa2185dce952995e7090d4060f193232004c6583209590845167b255586a"} Sep 29 09:55:40 crc kubenswrapper[4991]: I0929 09:55:40.726572 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4" event={"ID":"5406253f-041d-4ec4-a710-809c3d267e52","Type":"ContainerStarted","Data":"0b16a5fcf264791d7ebca341f438779c7024d5aa3b2ca74d9191b8aae8ef2667"} Sep 29 09:55:40 crc kubenswrapper[4991]: E0929 09:55:40.733782 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e3f947e9034a951620a76eaf41ceec95eefcef0eacb251b10993d6820d5e1af6\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4" podUID="5406253f-041d-4ec4-a710-809c3d267e52" Sep 29 09:55:40 crc kubenswrapper[4991]: I0929 09:55:40.746461 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-774b97b48-bxq6l" event={"ID":"b33b3e90-d2bc-4c24-bc6e-8398c15f597d","Type":"ContainerStarted","Data":"b5443995e0aae6abac5a5432fe7f58163d7781bf44dae3ea670232c68f92eaba"} Sep 29 09:55:40 crc kubenswrapper[4991]: I0929 09:55:40.751732 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-2bddq" event={"ID":"43697432-9291-473b-add7-576d1db29307","Type":"ContainerStarted","Data":"2119683e4ecc94410912d27211c65a07fe5e85086bceddb98bfcb6bee1bbc4d9"} Sep 29 09:55:40 crc kubenswrapper[4991]: I0929 09:55:40.782022 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-98b5bb4f8-zkjlm" podStartSLOduration=4.782001966 podStartE2EDuration="4.782001966s" podCreationTimestamp="2025-09-29 09:55:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:55:40.771991753 +0000 UTC m=+1076.627919781" watchObservedRunningTime="2025-09-29 09:55:40.782001966 +0000 UTC m=+1076.637929994" Sep 29 09:55:40 crc kubenswrapper[4991]: I0929 09:55:40.784419 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-889c789f9-9lbgt" event={"ID":"927b3fac-f2e5-4009-9991-14615d5d0cc7","Type":"ContainerStarted","Data":"2a297090f6b99ddee87233d8b7ddebb3bafab9a6429634cca4d99183b29d7a20"} Sep 29 09:55:41 crc kubenswrapper[4991]: E0929 09:55:41.803160 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:87a522d480797f54499bcd1c4a48837e1b17c33d4cc43e99ed7a53b8cedb17c7\\\"\"" pod="openstack-operators/infra-operator-controller-manager-858cd69f49-lktsl" podUID="248ebc15-872d-49f9-82e3-25814d7cc483" Sep 29 09:55:41 crc kubenswrapper[4991]: E0929 09:55:41.809964 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e3f947e9034a951620a76eaf41ceec95eefcef0eacb251b10993d6820d5e1af6\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4" podUID="5406253f-041d-4ec4-a710-809c3d267e52" Sep 29 09:55:47 crc kubenswrapper[4991]: I0929 09:55:47.861614 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-98b5bb4f8-zkjlm" Sep 29 09:56:01 crc kubenswrapper[4991]: E0929 09:56:01.109177 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:057de94f9afa340adc34f9b25f8007d9cd2ba71bc8b5d77aac522add53b7caef" Sep 29 09:56:01 crc kubenswrapper[4991]: E0929 09:56:01.109937 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:057de94f9afa340adc34f9b25f8007d9cd2ba71bc8b5d77aac522add53b7caef,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m2h98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-c7c776c96-gxh2d_openstack-operators(9bac430d-9a3c-42e4-8aba-d175875a29ac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 29 09:56:01 crc kubenswrapper[4991]: E0929 09:56:01.532890 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:f3d8f19fdacecd967319b843a048e3be334bc69f486bd64b56238e90e5ce461a" Sep 29 09:56:01 crc kubenswrapper[4991]: E0929 09:56:01.533423 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:f3d8f19fdacecd967319b843a048e3be334bc69f486bd64b56238e90e5ce461a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n9b6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-7d74f4d695-c74zk_openstack-operators(861ab488-585c-407a-bfe6-97e8d01f20e6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 29 09:56:03 crc kubenswrapper[4991]: E0929 09:56:03.348201 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:26db59a990341558d29c00da7503b2c5b9a415db8cc04a0006f198f30ec016d4" Sep 29 09:56:03 crc kubenswrapper[4991]: E0929 09:56:03.348692 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:26db59a990341558d29c00da7503b2c5b9a415db8cc04a0006f198f30ec016d4,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qwz5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
Sep 29 09:56:03 crc kubenswrapper[4991]: E0929 09:56:03.348692 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:26db59a990341558d29c00da7503b2c5b9a415db8cc04a0006f198f30ec016d4,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qwz5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-5f95c46c78-54kjl_openstack-operators(40de8749-3b71-4a0d-8483-ef2512644475): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Sep 29 09:56:04 crc kubenswrapper[4991]: E0929 09:56:04.138540 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:4d08afd31dc5ded10c54a5541f514ac351e9b40a183285b3db27d0757a6354c8"
Sep 29 09:56:04 crc kubenswrapper[4991]: E0929 09:56:04.138703 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:4d08afd31dc5ded10c54a5541f514ac351e9b40a183285b3db27d0757a6354c8,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4wltf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-76fcc6dc7c-2bddq_openstack-operators(43697432-9291-473b-add7-576d1db29307): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Sep 29 09:56:04 crc kubenswrapper[4991]: E0929 09:56:04.535845 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:21cbcae033ffacad33656192f6d2cb7db8502177af45fc7af4d27a79f50982c9"
Sep 29 09:56:04 crc kubenswrapper[4991]: E0929 09:56:04.536340 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:21cbcae033ffacad33656192f6d2cb7db8502177af45fc7af4d27a79f50982c9,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5z7v6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-695847bc78-v4kx7_openstack-operators(9cdd1ad5-293a-456e-8313-e23a3140f8f5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Sep 29 09:56:05 crc kubenswrapper[4991]: E0929 09:56:05.222052 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:c1ea5a0c814923293bcc2f1c82c5aa7a25c3e65bfc63c9c3b81f88558e256d93"
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-687b9cf756-tldms_openstack-operators(269a0f66-05b9-4bda-9125-e13b1a9264dc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 29 09:56:07 crc kubenswrapper[4991]: E0929 09:56:07.379027 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:d2eba62b82728578c57f60de5baa3562bc0a355f65123a9e5fedff385988eb64" Sep 29 09:56:07 crc kubenswrapper[4991]: E0929 09:56:07.379194 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:d2eba62b82728578c57f60de5baa3562bc0a355f65123a9e5fedff385988eb64,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nxcwx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-56cf9c6b99-vvlp2_openstack-operators(ffa64e52-6eef-46e8-a535-d9797274440b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 29 09:56:07 crc kubenswrapper[4991]: E0929 09:56:07.495643 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.15:5001/openstack-k8s-operators/telemetry-operator:fb90e2d5d5450ef9908e16413d7df8331d5162ce" Sep 29 09:56:07 crc kubenswrapper[4991]: E0929 09:56:07.495901 4991 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.15:5001/openstack-k8s-operators/telemetry-operator:fb90e2d5d5450ef9908e16413d7df8331d5162ce" Sep 29 09:56:07 crc kubenswrapper[4991]: E0929 09:56:07.496069 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.129.56.15:5001/openstack-k8s-operators/telemetry-operator:fb90e2d5d5450ef9908e16413d7df8331d5162ce,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7d6lf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-889c789f9-9lbgt_openstack-operators(927b3fac-f2e5-4009-9991-14615d5d0cc7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 29 09:56:07 crc kubenswrapper[4991]: E0929 09:56:07.884662 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:225524223bf2a7f3a4ce95958fc9ca6fdab02745fb70374e8ff5bf1ddaceda4b" Sep 29 09:56:07 crc kubenswrapper[4991]: E0929 09:56:07.884806 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:225524223bf2a7f3a4ce95958fc9ca6fdab02745fb70374e8ff5bf1ddaceda4b,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vc45q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-79d8469568-gsgkm_openstack-operators(794146c2-be69-4a63-a485-7435c27e9f14): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 29 09:56:07 crc 
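The &Container{...} blobs above are the kubelet printing a corev1.Container through Go's struct stringer, which is why they read so poorly: Port:{0 8081 } is an intstr.IntOrString holding the integer 8081, and cpu: {{500 -3} {} 500m DecimalSI} is the internal form of a resource.Quantity (unscaled value 500 at scale -3, i.e. 500m). A minimal sketch that rebuilds the probe and resource fields visible in these dumps using the public k8s.io/api types; the package layout and variable names are mine, not from the log:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Liveness probe as dumped above: GET /healthz on :8081,
	// InitialDelaySeconds:15, PeriodSeconds:20, TimeoutSeconds:1,
	// FailureThreshold:3.
	liveness := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/healthz",
				Port:   intstr.FromInt(8081), // prints as {0 8081 } in the log
				Scheme: corev1.URISchemeHTTP,
			},
		},
		InitialDelaySeconds: 15,
		TimeoutSeconds:      1,
		PeriodSeconds:       20,
		SuccessThreshold:    1,
		FailureThreshold:    3,
	}

	// Resources as dumped: {{500 -3} {} 500m DecimalSI} is 500m CPU,
	// {{536870912 0} {} BinarySI} is exactly 512Mi of memory.
	resources := corev1.ResourceRequirements{
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("500m"),
			corev1.ResourceMemory: resource.MustParse("512Mi"), // 536870912 bytes
		},
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("10m"),
			corev1.ResourceMemory: resource.MustParse("256Mi"), // 268435456 bytes
		},
	}

	fmt.Println(liveness.HTTPGet.Path, resources.Limits.Cpu().MilliValue()) // /healthz 500
}
```

The readiness probe in the same dumps differs only in Path (/readyz), InitialDelaySeconds (5) and PeriodSeconds (10).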
Sep 29 09:56:07 crc kubenswrapper[4991]: E0929 09:56:07.886218 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-gsgkm" podUID="794146c2-be69-4a63-a485-7435c27e9f14"
Sep 29 09:56:08 crc kubenswrapper[4991]: E0929 09:56:08.028463 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:225524223bf2a7f3a4ce95958fc9ca6fdab02745fb70374e8ff5bf1ddaceda4b\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-gsgkm" podUID="794146c2-be69-4a63-a485-7435c27e9f14"
Sep 29 09:56:09 crc kubenswrapper[4991]: E0929 09:56:09.077578 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-gxh2d" podUID="9bac430d-9a3c-42e4-8aba-d175875a29ac"
Sep 29 09:56:09 crc kubenswrapper[4991]: E0929 09:56:09.553700 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-5f95c46c78-54kjl" podUID="40de8749-3b71-4a0d-8483-ef2512644475"
Sep 29 09:56:09 crc kubenswrapper[4991]: E0929 09:56:09.572893 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-687b9cf756-tldms" podUID="269a0f66-05b9-4bda-9125-e13b1a9264dc"
Sep 29 09:56:09 crc kubenswrapper[4991]: E0929 09:56:09.663432 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-695847bc78-v4kx7" podUID="9cdd1ad5-293a-456e-8313-e23a3140f8f5"
Sep 29 09:56:09 crc kubenswrapper[4991]: E0929 09:56:09.663990 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-2bddq" podUID="43697432-9291-473b-add7-576d1db29307"
Sep 29 09:56:09 crc kubenswrapper[4991]: E0929 09:56:09.665226 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-7d74f4d695-c74zk" podUID="861ab488-585c-407a-bfe6-97e8d01f20e6"
Sep 29 09:56:09 crc kubenswrapper[4991]: E0929 09:56:09.665634 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-889c789f9-9lbgt" podUID="927b3fac-f2e5-4009-9991-14615d5d0cc7"
Sep 29 09:56:09 crc kubenswrapper[4991]: E0929 09:56:09.691984 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-56cf9c6b99-vvlp2" podUID="ffa64e52-6eef-46e8-a535-d9797274440b"
Sep 29 09:56:10 crc kubenswrapper[4991]: I0929 09:56:10.072615 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-k2q5h" event={"ID":"36d942bf-7f08-474b-9386-41b1a4d32e01","Type":"ContainerStarted","Data":"4ea04946d2ab2b70c02edd4ca845c38a082e25fab68ad64444ba8a932b11c414"}
Sep 29 09:56:10 crc kubenswrapper[4991]: I0929 09:56:10.081273 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d74f4d695-c74zk" event={"ID":"861ab488-585c-407a-bfe6-97e8d01f20e6","Type":"ContainerStarted","Data":"14db6223ed64de103256b5526b3b9c95cca6c838febf54bba4ca4f9e8763894e"}
Sep 29 09:56:10 crc kubenswrapper[4991]: E0929 09:56:10.084090 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:f3d8f19fdacecd967319b843a048e3be334bc69f486bd64b56238e90e5ce461a\\\"\"" pod="openstack-operators/designate-operator-controller-manager-7d74f4d695-c74zk" podUID="861ab488-585c-407a-bfe6-97e8d01f20e6"
Sep 29 09:56:10 crc kubenswrapper[4991]: I0929 09:56:10.085269 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67b5d44b7f-8969b" event={"ID":"55a2358d-e838-4581-906b-ec7f1a3117bf","Type":"ContainerStarted","Data":"b07c377a61e421933016ce14c513769e5f7d0c6bae96abfdc417d7204ef5374f"}
Sep 29 09:56:10 crc kubenswrapper[4991]: I0929 09:56:10.086932 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-gxh2d" event={"ID":"9bac430d-9a3c-42e4-8aba-d175875a29ac","Type":"ContainerStarted","Data":"b66a738b53b990365882c44570924000d7b72b2f3e4afa0879fe4a710a0f153d"}
Sep 29 09:56:10 crc kubenswrapper[4991]: I0929 09:56:10.093666 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-f66b554c6-f94sw" event={"ID":"51c758a6-817d-41d2-958b-2945bda7e082","Type":"ContainerStarted","Data":"3f02970580dfe0c4913c092db23ef1f5e343a9f09154b7eed96f53c16b319202"}
Sep 29 09:56:10 crc kubenswrapper[4991]: I0929 09:56:10.113059 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-9fc8d5567-6kksz" event={"ID":"2e6f46c0-8a81-4203-946a-9cc5d0217b02","Type":"ContainerStarted","Data":"d8257490606d3c54ff23e736ef5653845c5ba9412b67337a0140dcf65ba3feb8"}
Sep 29 09:56:10 crc kubenswrapper[4991]: E0929 09:56:10.130960 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:057de94f9afa340adc34f9b25f8007d9cd2ba71bc8b5d77aac522add53b7caef\\\"\"" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-gxh2d" podUID="9bac430d-9a3c-42e4-8aba-d175875a29ac"
Sep 29 09:56:10 crc kubenswrapper[4991]: I0929 09:56:10.179966 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-695847bc78-v4kx7" event={"ID":"9cdd1ad5-293a-456e-8313-e23a3140f8f5","Type":"ContainerStarted","Data":"e2e2db3479fa8c3110db6239d5bb76e6313354fd8507e5ed117b018dc0061775"}
Sep 29 09:56:10 crc kubenswrapper[4991]: E0929 09:56:10.182178 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:21cbcae033ffacad33656192f6d2cb7db8502177af45fc7af4d27a79f50982c9\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-695847bc78-v4kx7" podUID="9cdd1ad5-293a-456e-8313-e23a3140f8f5"
Sep 29 09:56:10 crc kubenswrapper[4991]: I0929 09:56:10.182592 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-687b9cf756-tldms" event={"ID":"269a0f66-05b9-4bda-9125-e13b1a9264dc","Type":"ContainerStarted","Data":"6b5bf1d6f22cae1b4dab924cad3649f591d17b900cf08ae2d625f3f8b7aef3c3"}
Sep 29 09:56:10 crc kubenswrapper[4991]: E0929 09:56:10.186852 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:c1ea5a0c814923293bcc2f1c82c5aa7a25c3e65bfc63c9c3b81f88558e256d93\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-687b9cf756-tldms" podUID="269a0f66-05b9-4bda-9125-e13b1a9264dc"
Sep 29 09:56:10 crc kubenswrapper[4991]: I0929 09:56:10.201326 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7bf498966c-bjf5h" event={"ID":"76b136ef-8bd7-4695-807d-be77c22c87bd","Type":"ContainerStarted","Data":"7948131353b7e4e2789ed942e47488665a593bd6a6a759dc7da30ce6e3157a26"}
Sep 29 09:56:10 crc kubenswrapper[4991]: I0929 09:56:10.216311 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-889c789f9-9lbgt" event={"ID":"927b3fac-f2e5-4009-9991-14615d5d0cc7","Type":"ContainerStarted","Data":"bb910dc1d2f287380e040836f78ff8ac5ad629b872f88711cb3bb126f81cc1a9"}
Sep 29 09:56:10 crc kubenswrapper[4991]: E0929 09:56:10.218866 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.15:5001/openstack-k8s-operators/telemetry-operator:fb90e2d5d5450ef9908e16413d7df8331d5162ce\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-889c789f9-9lbgt" podUID="927b3fac-f2e5-4009-9991-14615d5d0cc7"
Sep 29 09:56:10 crc kubenswrapper[4991]: I0929 09:56:10.222705 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5f95c46c78-54kjl" event={"ID":"40de8749-3b71-4a0d-8483-ef2512644475","Type":"ContainerStarted","Data":"443eef031ad78dd093535366a30338117c587291844fe8a80182f21c4cfb2b3d"}
Sep 29 09:56:10 crc kubenswrapper[4991]: E0929 09:56:10.227815 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:26db59a990341558d29c00da7503b2c5b9a415db8cc04a0006f198f30ec016d4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5f95c46c78-54kjl" podUID="40de8749-3b71-4a0d-8483-ef2512644475"
Sep 29 09:56:10 crc kubenswrapper[4991]: I0929 09:56:10.253213 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-858cd69f49-lktsl" event={"ID":"248ebc15-872d-49f9-82e3-25814d7cc483","Type":"ContainerStarted","Data":"2a194d85aebc5d227c3b82939a2d1ea2d9ccce45c197084b532961c2761cd66b"}
Sep 29 09:56:10 crc kubenswrapper[4991]: I0929 09:56:10.253930 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-858cd69f49-lktsl"
Sep 29 09:56:10 crc kubenswrapper[4991]: I0929 09:56:10.270147 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-2bddq" event={"ID":"43697432-9291-473b-add7-576d1db29307","Type":"ContainerStarted","Data":"7d0981ab9607338b91a4830cb117597a180f2435a92ab885081d45e73e80b38d"}
Sep 29 09:56:10 crc kubenswrapper[4991]: E0929 09:56:10.275312 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:4d08afd31dc5ded10c54a5541f514ac351e9b40a183285b3db27d0757a6354c8\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-2bddq" podUID="43697432-9291-473b-add7-576d1db29307"
Sep 29 09:56:10 crc kubenswrapper[4991]: I0929 09:56:10.278993 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-8ff95898-2svcm" event={"ID":"663f9d51-4014-4b2d-b44d-96dca010a1f4","Type":"ContainerStarted","Data":"bf4fea858b6144795adafc9b565b3dc49891b2d28e2474a2b2acd6e8bbf975f0"}
Sep 29 09:56:10 crc kubenswrapper[4991]: I0929 09:56:10.294934 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-56cf9c6b99-vvlp2" event={"ID":"ffa64e52-6eef-46e8-a535-d9797274440b","Type":"ContainerStarted","Data":"2fec168d37ffc78af6948e3ae7a1e924396918ef60d3a2ead9f5618acee114cb"}
Sep 29 09:56:10 crc kubenswrapper[4991]: E0929 09:56:10.297466 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:d2eba62b82728578c57f60de5baa3562bc0a355f65123a9e5fedff385988eb64\\\"\"" pod="openstack-operators/manila-operator-controller-manager-56cf9c6b99-vvlp2" podUID="ffa64e52-6eef-46e8-a535-d9797274440b"
Sep 29 09:56:10 crc kubenswrapper[4991]: I0929 09:56:10.313265 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-748c574d75-bjlxb" event={"ID":"64c94e01-ecee-47b4-ab4b-182085a9dce5","Type":"ContainerStarted","Data":"4b1fd06512ae57c12ddaf1879f0b05cc1eac8e8ea8b8373f6adc08a82af7c929"}
Sep 29 09:56:10 crc kubenswrapper[4991]: I0929 09:56:10.338385 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6495d75b5-dpcb9" event={"ID":"f0b63357-310d-4b63-9ba9-212c0f3c6dd4","Type":"ContainerStarted","Data":"8827c083fddf3422b0fb3d67be8881f77b2bfcd10aae83d769bdc200577a352f"}
Sep 29 09:56:10 crc kubenswrapper[4991]: I0929 09:56:10.358920 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-858cd69f49-lktsl" podStartSLOduration=6.061940012 podStartE2EDuration="35.358892724s" podCreationTimestamp="2025-09-29 09:55:35 +0000 UTC" firstStartedPulling="2025-09-29 09:55:39.864377438 +0000 UTC m=+1075.720305466" lastFinishedPulling="2025-09-29 09:56:09.16133013 +0000 UTC m=+1105.017258178" observedRunningTime="2025-09-29 09:56:10.358229067 +0000 UTC m=+1106.214157095" watchObservedRunningTime="2025-09-29 09:56:10.358892724 +0000 UTC m=+1106.214820752"
Sep 29 09:56:11 crc kubenswrapper[4991]: I0929 09:56:11.385477 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-774b97b48-bxq6l" event={"ID":"b33b3e90-d2bc-4c24-bc6e-8398c15f597d","Type":"ContainerStarted","Data":"8819fd81fad4435a84cde2949878788d72c1e67b3f469ee74645123d001add3a"}
Sep 29 09:56:11 crc kubenswrapper[4991]: I0929 09:56:11.406120 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-chqdw" event={"ID":"26d1e560-b3f8-43ed-af68-4834bf60e6e3","Type":"ContainerStarted","Data":"a5d2b9d2e13c8c2aeb4bb0d1ed6d773058cf69d548dca031b2888220b4000743"}
Sep 29 09:56:11 crc kubenswrapper[4991]: E0929 09:56:11.411173 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:c1ea5a0c814923293bcc2f1c82c5aa7a25c3e65bfc63c9c3b81f88558e256d93\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-687b9cf756-tldms" podUID="269a0f66-05b9-4bda-9125-e13b1a9264dc"
Sep 29 09:56:11 crc kubenswrapper[4991]: E0929 09:56:11.411534 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:f3d8f19fdacecd967319b843a048e3be334bc69f486bd64b56238e90e5ce461a\\\"\"" pod="openstack-operators/designate-operator-controller-manager-7d74f4d695-c74zk" podUID="861ab488-585c-407a-bfe6-97e8d01f20e6"
Sep 29 09:56:11 crc kubenswrapper[4991]: E0929 09:56:11.411569 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:21cbcae033ffacad33656192f6d2cb7db8502177af45fc7af4d27a79f50982c9\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-695847bc78-v4kx7" podUID="9cdd1ad5-293a-456e-8313-e23a3140f8f5"
Sep 29 09:56:11 crc kubenswrapper[4991]: E0929 09:56:11.411607 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:d2eba62b82728578c57f60de5baa3562bc0a355f65123a9e5fedff385988eb64\\\"\"" pod="openstack-operators/manila-operator-controller-manager-56cf9c6b99-vvlp2" podUID="ffa64e52-6eef-46e8-a535-d9797274440b"
Sep 29 09:56:11 crc kubenswrapper[4991]: E0929 09:56:11.411676 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:4d08afd31dc5ded10c54a5541f514ac351e9b40a183285b3db27d0757a6354c8\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-2bddq" podUID="43697432-9291-473b-add7-576d1db29307"
Sep 29 09:56:11 crc kubenswrapper[4991]: E0929 09:56:11.411724 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:26db59a990341558d29c00da7503b2c5b9a415db8cc04a0006f198f30ec016d4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5f95c46c78-54kjl" podUID="40de8749-3b71-4a0d-8483-ef2512644475"
Sep 29 09:56:11 crc kubenswrapper[4991]: E0929 09:56:11.411786 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.15:5001/openstack-k8s-operators/telemetry-operator:fb90e2d5d5450ef9908e16413d7df8331d5162ce\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-889c789f9-9lbgt" podUID="927b3fac-f2e5-4009-9991-14615d5d0cc7"
Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.417799 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-f66b554c6-f94sw" event={"ID":"51c758a6-817d-41d2-958b-2945bda7e082","Type":"ContainerStarted","Data":"226fb68e498f234dcc69b78e303a20435bcbe3973dc59481a10527ced5aef99d"}
Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.419182 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-f66b554c6-f94sw"
Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.420719 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-k2q5h" event={"ID":"36d942bf-7f08-474b-9386-41b1a4d32e01","Type":"ContainerStarted","Data":"2c2f6925e2589d12986f7c5cb804d2aec438e5c594f7fc844d8449db94c89e6b"}
Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.421363 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-k2q5h"
Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.423590 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-chqdw" event={"ID":"26d1e560-b3f8-43ed-af68-4834bf60e6e3","Type":"ContainerStarted","Data":"8d62b7c85e736c19c736d526e208e6443ac02ed2681b7442944487dd20bd54b4"}
Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.424211 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-chqdw"
Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.425913 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4" event={"ID":"5406253f-041d-4ec4-a710-809c3d267e52","Type":"ContainerStarted","Data":"149c7ab85d81f38a2e88e6531c47031f74acf9f6f5a657866a591d72b4da198c"}
Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.426481 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4"
Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.429498 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-748c574d75-bjlxb" event={"ID":"64c94e01-ecee-47b4-ab4b-182085a9dce5","Type":"ContainerStarted","Data":"56135a66ff4bc166651efa00e3f7dbc64c8841a5011935262062dc042f83baa9"}
pod="openstack-operators/cinder-operator-controller-manager-748c574d75-bjlxb" Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.431919 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-gxh2d" event={"ID":"9bac430d-9a3c-42e4-8aba-d175875a29ac","Type":"ContainerStarted","Data":"5a94d035aea97589bb5e8b1f6098ae82d6f231bb8225c3aee95df4148f01702b"} Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.432510 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-gxh2d" Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.434149 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7bf498966c-bjf5h" event={"ID":"76b136ef-8bd7-4695-807d-be77c22c87bd","Type":"ContainerStarted","Data":"745d1760c8b200e31df59fdbef22e0c81f005c3baf0c0bb1ff8dfd6f4f9a9101"} Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.434700 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7bf498966c-bjf5h" Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.436884 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6495d75b5-dpcb9" event={"ID":"f0b63357-310d-4b63-9ba9-212c0f3c6dd4","Type":"ContainerStarted","Data":"35bbf940fd00c37aeee81b6c4445330b67160059e299426bda4aa98db79bd106"} Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.437020 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6495d75b5-dpcb9" Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.440842 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-f66b554c6-f94sw" podStartSLOduration=8.377709893 podStartE2EDuration="36.440490577s" podCreationTimestamp="2025-09-29 09:55:36 +0000 UTC" firstStartedPulling="2025-09-29 09:55:39.807245798 +0000 UTC m=+1075.663173826" lastFinishedPulling="2025-09-29 09:56:07.870026482 +0000 UTC m=+1103.725954510" observedRunningTime="2025-09-29 09:56:12.437055817 +0000 UTC m=+1108.292983885" watchObservedRunningTime="2025-09-29 09:56:12.440490577 +0000 UTC m=+1108.296418625" Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.443384 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54d766c9f9-57tlx" event={"ID":"9a454961-e67f-46ce-a5f0-f5bb3cff6b67","Type":"ContainerStarted","Data":"3577d07f50dfd8ee6998f3ec8ae5b2ac907a1e51807ba4f5e03dbfffbd838bcf"} Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.443428 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54d766c9f9-57tlx" event={"ID":"9a454961-e67f-46ce-a5f0-f5bb3cff6b67","Type":"ContainerStarted","Data":"9d761cf33a5415595a56e576b73f731c695fbb1913797777673a0074badff971"} Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.443507 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54d766c9f9-57tlx" Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.447152 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-9fc8d5567-6kksz" 
event={"ID":"2e6f46c0-8a81-4203-946a-9cc5d0217b02","Type":"ContainerStarted","Data":"8699eea890570f610013ffa89e3e02a9fe19ae1e01b3f951e3897b823e3d42fb"} Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.447488 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-9fc8d5567-6kksz" Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.449426 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-774b97b48-bxq6l" event={"ID":"b33b3e90-d2bc-4c24-bc6e-8398c15f597d","Type":"ContainerStarted","Data":"b50052aa818a70f99afcdc6150c1b744930fcd2071e362db5840d6e772453aba"} Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.450290 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-774b97b48-bxq6l" Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.452909 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67b5d44b7f-8969b" event={"ID":"55a2358d-e838-4581-906b-ec7f1a3117bf","Type":"ContainerStarted","Data":"b21e3a423b212cd176ec8f646ee5a8bf4d62d06fa9bf7a895c798bc517941d16"} Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.453080 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-67b5d44b7f-8969b" Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.455207 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-8ff95898-2svcm" event={"ID":"663f9d51-4014-4b2d-b44d-96dca010a1f4","Type":"ContainerStarted","Data":"3e21172e7679b19a03304d0686753f87265510cc41ca78af0a324b90fbb9fa2a"} Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.455681 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-8ff95898-2svcm" Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.465460 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7bf498966c-bjf5h" podStartSLOduration=7.848042683 podStartE2EDuration="37.465437622s" podCreationTimestamp="2025-09-29 09:55:35 +0000 UTC" firstStartedPulling="2025-09-29 09:55:38.808753115 +0000 UTC m=+1074.664681143" lastFinishedPulling="2025-09-29 09:56:08.426148054 +0000 UTC m=+1104.282076082" observedRunningTime="2025-09-29 09:56:12.460287187 +0000 UTC m=+1108.316215215" watchObservedRunningTime="2025-09-29 09:56:12.465437622 +0000 UTC m=+1108.321365650" Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.490971 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6495d75b5-dpcb9" podStartSLOduration=7.383486152 podStartE2EDuration="37.490932362s" podCreationTimestamp="2025-09-29 09:55:35 +0000 UTC" firstStartedPulling="2025-09-29 09:55:38.318687144 +0000 UTC m=+1074.174615172" lastFinishedPulling="2025-09-29 09:56:08.426133334 +0000 UTC m=+1104.282061382" observedRunningTime="2025-09-29 09:56:12.488999212 +0000 UTC m=+1108.344927240" watchObservedRunningTime="2025-09-29 09:56:12.490932362 +0000 UTC m=+1108.346860390" Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.513367 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/watcher-operator-controller-manager-76669f99c-chqdw" podStartSLOduration=8.450602717 podStartE2EDuration="36.513342781s" podCreationTimestamp="2025-09-29 09:55:36 +0000 UTC" firstStartedPulling="2025-09-29 09:55:39.807485794 +0000 UTC m=+1075.663413812" lastFinishedPulling="2025-09-29 09:56:07.870225848 +0000 UTC m=+1103.726153876" observedRunningTime="2025-09-29 09:56:12.508648438 +0000 UTC m=+1108.364576476" watchObservedRunningTime="2025-09-29 09:56:12.513342781 +0000 UTC m=+1108.369270819" Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.530262 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-gxh2d" podStartSLOduration=2.670326204 podStartE2EDuration="36.530242585s" podCreationTimestamp="2025-09-29 09:55:36 +0000 UTC" firstStartedPulling="2025-09-29 09:55:38.225847076 +0000 UTC m=+1074.081775104" lastFinishedPulling="2025-09-29 09:56:12.085763457 +0000 UTC m=+1107.941691485" observedRunningTime="2025-09-29 09:56:12.524920855 +0000 UTC m=+1108.380848893" watchObservedRunningTime="2025-09-29 09:56:12.530242585 +0000 UTC m=+1108.386170613" Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.545692 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-k2q5h" podStartSLOduration=7.929742192 podStartE2EDuration="36.54566729s" podCreationTimestamp="2025-09-29 09:55:36 +0000 UTC" firstStartedPulling="2025-09-29 09:55:39.806825997 +0000 UTC m=+1075.662754025" lastFinishedPulling="2025-09-29 09:56:08.422751095 +0000 UTC m=+1104.278679123" observedRunningTime="2025-09-29 09:56:12.541512631 +0000 UTC m=+1108.397440659" watchObservedRunningTime="2025-09-29 09:56:12.54566729 +0000 UTC m=+1108.401595318" Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.573453 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4" podStartSLOduration=7.384963577 podStartE2EDuration="36.57343748s" podCreationTimestamp="2025-09-29 09:55:36 +0000 UTC" firstStartedPulling="2025-09-29 09:55:39.959146227 +0000 UTC m=+1075.815074255" lastFinishedPulling="2025-09-29 09:56:09.14762013 +0000 UTC m=+1105.003548158" observedRunningTime="2025-09-29 09:56:12.570643087 +0000 UTC m=+1108.426571115" watchObservedRunningTime="2025-09-29 09:56:12.57343748 +0000 UTC m=+1108.429365508" Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.595158 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-748c574d75-bjlxb" podStartSLOduration=6.545966128 podStartE2EDuration="37.59514092s" podCreationTimestamp="2025-09-29 09:55:35 +0000 UTC" firstStartedPulling="2025-09-29 09:55:37.37688236 +0000 UTC m=+1073.232810388" lastFinishedPulling="2025-09-29 09:56:08.426057152 +0000 UTC m=+1104.281985180" observedRunningTime="2025-09-29 09:56:12.587713765 +0000 UTC m=+1108.443641793" watchObservedRunningTime="2025-09-29 09:56:12.59514092 +0000 UTC m=+1108.451068938" Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.620714 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-774b97b48-bxq6l" podStartSLOduration=7.499420927 podStartE2EDuration="36.620690782s" podCreationTimestamp="2025-09-29 09:55:36 +0000 UTC" firstStartedPulling="2025-09-29 09:55:39.790005165 
+0000 UTC m=+1075.645933183" lastFinishedPulling="2025-09-29 09:56:08.911275 +0000 UTC m=+1104.767203038" observedRunningTime="2025-09-29 09:56:12.613572355 +0000 UTC m=+1108.469500373" watchObservedRunningTime="2025-09-29 09:56:12.620690782 +0000 UTC m=+1108.476618810" Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.644695 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-9fc8d5567-6kksz" podStartSLOduration=8.012606807 podStartE2EDuration="37.644678942s" podCreationTimestamp="2025-09-29 09:55:35 +0000 UTC" firstStartedPulling="2025-09-29 09:55:38.795096986 +0000 UTC m=+1074.651025014" lastFinishedPulling="2025-09-29 09:56:08.427169111 +0000 UTC m=+1104.283097149" observedRunningTime="2025-09-29 09:56:12.641038316 +0000 UTC m=+1108.496966364" watchObservedRunningTime="2025-09-29 09:56:12.644678942 +0000 UTC m=+1108.500606960" Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.661577 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54d766c9f9-57tlx" podStartSLOduration=8.808763326 podStartE2EDuration="37.661559795s" podCreationTimestamp="2025-09-29 09:55:35 +0000 UTC" firstStartedPulling="2025-09-29 09:55:39.573410437 +0000 UTC m=+1075.429338465" lastFinishedPulling="2025-09-29 09:56:08.426206886 +0000 UTC m=+1104.282134934" observedRunningTime="2025-09-29 09:56:12.660294732 +0000 UTC m=+1108.516222750" watchObservedRunningTime="2025-09-29 09:56:12.661559795 +0000 UTC m=+1108.517487813" Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.676056 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-8ff95898-2svcm" podStartSLOduration=8.188430982 podStartE2EDuration="37.676038896s" podCreationTimestamp="2025-09-29 09:55:35 +0000 UTC" firstStartedPulling="2025-09-29 09:55:38.382652344 +0000 UTC m=+1074.238580372" lastFinishedPulling="2025-09-29 09:56:07.870260258 +0000 UTC m=+1103.726188286" observedRunningTime="2025-09-29 09:56:12.674452174 +0000 UTC m=+1108.530380202" watchObservedRunningTime="2025-09-29 09:56:12.676038896 +0000 UTC m=+1108.531966924" Sep 29 09:56:12 crc kubenswrapper[4991]: I0929 09:56:12.698336 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-67b5d44b7f-8969b" podStartSLOduration=8.172738539 podStartE2EDuration="37.698317261s" podCreationTimestamp="2025-09-29 09:55:35 +0000 UTC" firstStartedPulling="2025-09-29 09:55:38.344454451 +0000 UTC m=+1074.200382479" lastFinishedPulling="2025-09-29 09:56:07.870033173 +0000 UTC m=+1103.725961201" observedRunningTime="2025-09-29 09:56:12.690113536 +0000 UTC m=+1108.546041564" watchObservedRunningTime="2025-09-29 09:56:12.698317261 +0000 UTC m=+1108.554245289" Sep 29 09:56:14 crc kubenswrapper[4991]: I0929 09:56:14.481022 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7bf498966c-bjf5h" Sep 29 09:56:14 crc kubenswrapper[4991]: I0929 09:56:14.481825 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-k2q5h" Sep 29 09:56:14 crc kubenswrapper[4991]: I0929 09:56:14.482286 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-8ff95898-2svcm" Sep 29 
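The pod_startup_latency_tracker entries above carry enough fields to rederive their own numbers: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image pull window (lastFinishedPulling minus firstStartedPulling). For the keystone-operator entry: 37.465437622s minus 29.617394939s of pulling leaves exactly the logged podStartSLOduration of 7.848042683. A short check in Go using the keystone values copied from the log (the m=+... suffixes are Go's monotonic clock readings and are ignored here):

```go
package main

import (
	"fmt"
	"time"
)

// mustParse parses the wall-clock part of the timestamps as they
// appear in the log, e.g. "2025-09-29 09:55:35 +0000 UTC".
func mustParse(v string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-09-29 09:55:35 +0000 UTC")                 // podCreationTimestamp
	firstPull := mustParse("2025-09-29 09:55:38.808753115 +0000 UTC")     // firstStartedPulling
	lastPull := mustParse("2025-09-29 09:56:08.426148054 +0000 UTC")      // lastFinishedPulling
	watchObserved := mustParse("2025-09-29 09:56:12.465437622 +0000 UTC") // watchObservedRunningTime

	e2e := watchObserved.Sub(created) // podStartE2EDuration
	pull := lastPull.Sub(firstPull)   // time spent pulling the image
	slo := e2e - pull                 // podStartSLOduration excludes the pull window
	fmt.Println(e2e, pull, slo)       // 37.465437622s 29.617394939s 7.848042683s
}
```

The same arithmetic reproduces every other latency entry in this section, which is why pods that sat in ImagePullBackOff for most of their startup (nova, designate, rabbitmq) show SLO durations of only a few seconds against E2E durations close to a minute.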
Sep 29 09:56:14 crc kubenswrapper[4991]: I0929 09:56:14.482920 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-748c574d75-bjlxb"
Sep 29 09:56:14 crc kubenswrapper[4991]: I0929 09:56:14.484802 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-f66b554c6-f94sw"
Sep 29 09:56:16 crc kubenswrapper[4991]: I0929 09:56:16.353597 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-67b5d44b7f-8969b"
Sep 29 09:56:16 crc kubenswrapper[4991]: I0929 09:56:16.506611 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6495d75b5-dpcb9"
Sep 29 09:56:16 crc kubenswrapper[4991]: I0929 09:56:16.709443 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54d766c9f9-57tlx"
Sep 29 09:56:16 crc kubenswrapper[4991]: I0929 09:56:16.756302 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-9fc8d5567-6kksz"
Sep 29 09:56:16 crc kubenswrapper[4991]: I0929 09:56:16.885906 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-774b97b48-bxq6l"
Sep 29 09:56:17 crc kubenswrapper[4991]: I0929 09:56:17.286006 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-chqdw"
Sep 29 09:56:17 crc kubenswrapper[4991]: I0929 09:56:17.324830 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-858cd69f49-lktsl"
Sep 29 09:56:18 crc kubenswrapper[4991]: I0929 09:56:18.280547 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-s7cf4"
Sep 29 09:56:22 crc kubenswrapper[4991]: I0929 09:56:22.561625 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-gsgkm" event={"ID":"794146c2-be69-4a63-a485-7435c27e9f14","Type":"ContainerStarted","Data":"55f3f2a90d83167c827a34946b46f856df840bd3b3af89664212710cbed80a8f"}
Sep 29 09:56:22 crc kubenswrapper[4991]: I0929 09:56:22.589156 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-gsgkm" podStartSLOduration=4.5418705809999995 podStartE2EDuration="46.589136262s" podCreationTimestamp="2025-09-29 09:55:36 +0000 UTC" firstStartedPulling="2025-09-29 09:55:39.701315106 +0000 UTC m=+1075.557243134" lastFinishedPulling="2025-09-29 09:56:21.748580787 +0000 UTC m=+1117.604508815" observedRunningTime="2025-09-29 09:56:22.586198425 +0000 UTC m=+1118.442126473" watchObservedRunningTime="2025-09-29 09:56:22.589136262 +0000 UTC m=+1118.445064290"
Sep 29 09:56:24 crc kubenswrapper[4991]: I0929 09:56:24.609301 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-889c789f9-9lbgt" event={"ID":"927b3fac-f2e5-4009-9991-14615d5d0cc7","Type":"ContainerStarted","Data":"c32bab7e0ef4a4468faa7acafde93ec5d73afb0a73c1ef8d28b51961833d5eb8"}
Sep 29 09:56:24 crc kubenswrapper[4991]: I0929 09:56:24.609993 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-889c789f9-9lbgt"
Sep 29 09:56:24 crc kubenswrapper[4991]: I0929 09:56:24.634298 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-889c789f9-9lbgt" podStartSLOduration=5.118940292 podStartE2EDuration="48.634281816s" podCreationTimestamp="2025-09-29 09:55:36 +0000 UTC" firstStartedPulling="2025-09-29 09:55:39.764413333 +0000 UTC m=+1075.620341361" lastFinishedPulling="2025-09-29 09:56:23.279754837 +0000 UTC m=+1119.135682885" observedRunningTime="2025-09-29 09:56:24.625565167 +0000 UTC m=+1120.481493195" watchObservedRunningTime="2025-09-29 09:56:24.634281816 +0000 UTC m=+1120.490209834"
Sep 29 09:56:25 crc kubenswrapper[4991]: I0929 09:56:25.619375 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-2bddq" event={"ID":"43697432-9291-473b-add7-576d1db29307","Type":"ContainerStarted","Data":"9124f11c334861cb40a9f6a03745fc879ace09bae408417fcf462af10450d790"}
Sep 29 09:56:25 crc kubenswrapper[4991]: I0929 09:56:25.619878 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-2bddq"
Sep 29 09:56:25 crc kubenswrapper[4991]: I0929 09:56:25.621239 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-56cf9c6b99-vvlp2" event={"ID":"ffa64e52-6eef-46e8-a535-d9797274440b","Type":"ContainerStarted","Data":"aedc6e1111769bd82b1c69305e072fb498e377535fef1e53ed0c52ac095641a7"}
Sep 29 09:56:25 crc kubenswrapper[4991]: I0929 09:56:25.621443 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-56cf9c6b99-vvlp2"
Sep 29 09:56:25 crc kubenswrapper[4991]: I0929 09:56:25.622880 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-695847bc78-v4kx7" event={"ID":"9cdd1ad5-293a-456e-8313-e23a3140f8f5","Type":"ContainerStarted","Data":"8657537f240f04da52c06ac27d3dfc9e1259aa82566dfdf6be8f1bc27f264ef9"}
Sep 29 09:56:25 crc kubenswrapper[4991]: I0929 09:56:25.623076 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-695847bc78-v4kx7"
Sep 29 09:56:25 crc kubenswrapper[4991]: I0929 09:56:25.625162 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5f95c46c78-54kjl" event={"ID":"40de8749-3b71-4a0d-8483-ef2512644475","Type":"ContainerStarted","Data":"78256caa0e009cc3b13ed97488d7ba55b1fd0696c5ffa42d6b4191f4115563ee"}
Sep 29 09:56:25 crc kubenswrapper[4991]: I0929 09:56:25.625766 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5f95c46c78-54kjl"
Sep 29 09:56:25 crc kubenswrapper[4991]: I0929 09:56:25.627577 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-687b9cf756-tldms" event={"ID":"269a0f66-05b9-4bda-9125-e13b1a9264dc","Type":"ContainerStarted","Data":"5fad9d15cedb83a5fff6e9052504232d06407439e5a44d2dc4bc7370189d0a5f"}
Sep 29 09:56:25 crc kubenswrapper[4991]: I0929 09:56:25.627819 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-687b9cf756-tldms"
Sep 29 09:56:25 crc kubenswrapper[4991]: I0929 09:56:25.641450 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-2bddq" podStartSLOduration=4.929708339 podStartE2EDuration="49.641430748s" podCreationTimestamp="2025-09-29 09:55:36 +0000 UTC" firstStartedPulling="2025-09-29 09:55:39.812793853 +0000 UTC m=+1075.668721881" lastFinishedPulling="2025-09-29 09:56:24.524516262 +0000 UTC m=+1120.380444290" observedRunningTime="2025-09-29 09:56:25.635927633 +0000 UTC m=+1121.491855681" watchObservedRunningTime="2025-09-29 09:56:25.641430748 +0000 UTC m=+1121.497358786"
Sep 29 09:56:25 crc kubenswrapper[4991]: I0929 09:56:25.672816 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-695847bc78-v4kx7" podStartSLOduration=3.676638139 podStartE2EDuration="50.672792702s" podCreationTimestamp="2025-09-29 09:55:35 +0000 UTC" firstStartedPulling="2025-09-29 09:55:38.398132981 +0000 UTC m=+1074.254060999" lastFinishedPulling="2025-09-29 09:56:25.394287534 +0000 UTC m=+1121.250215562" observedRunningTime="2025-09-29 09:56:25.664620187 +0000 UTC m=+1121.520548225" watchObservedRunningTime="2025-09-29 09:56:25.672792702 +0000 UTC m=+1121.528720730"
Sep 29 09:56:25 crc kubenswrapper[4991]: I0929 09:56:25.690699 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5f95c46c78-54kjl" podStartSLOduration=3.851071551 podStartE2EDuration="49.690682482s" podCreationTimestamp="2025-09-29 09:55:36 +0000 UTC" firstStartedPulling="2025-09-29 09:55:38.809042402 +0000 UTC m=+1074.664970430" lastFinishedPulling="2025-09-29 09:56:24.648653333 +0000 UTC m=+1120.504581361" observedRunningTime="2025-09-29 09:56:25.683528404 +0000 UTC m=+1121.539456442" watchObservedRunningTime="2025-09-29 09:56:25.690682482 +0000 UTC m=+1121.546610510"
Sep 29 09:56:25 crc kubenswrapper[4991]: I0929 09:56:25.703122 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-687b9cf756-tldms" podStartSLOduration=6.035049567 podStartE2EDuration="50.703089318s" podCreationTimestamp="2025-09-29 09:55:35 +0000 UTC" firstStartedPulling="2025-09-29 09:55:39.857315433 +0000 UTC m=+1075.713243461" lastFinishedPulling="2025-09-29 09:56:24.525355184 +0000 UTC m=+1120.381283212" observedRunningTime="2025-09-29 09:56:25.69975826 +0000 UTC m=+1121.555686288" watchObservedRunningTime="2025-09-29 09:56:25.703089318 +0000 UTC m=+1121.559017346"
Sep 29 09:56:25 crc kubenswrapper[4991]: I0929 09:56:25.715609 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-56cf9c6b99-vvlp2" podStartSLOduration=3.951138987 podStartE2EDuration="50.715586146s" podCreationTimestamp="2025-09-29 09:55:35 +0000 UTC" firstStartedPulling="2025-09-29 09:55:38.668799059 +0000 UTC m=+1074.524727087" lastFinishedPulling="2025-09-29 09:56:25.433246218 +0000 UTC m=+1121.289174246" observedRunningTime="2025-09-29 09:56:25.713970354 +0000 UTC m=+1121.569898382" watchObservedRunningTime="2025-09-29 09:56:25.715586146 +0000 UTC m=+1121.571514174"
Sep 29 09:56:26 crc kubenswrapper[4991]: I0929 09:56:26.630137 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-gxh2d"
Sep 29 09:56:26 crc kubenswrapper[4991]: I0929 09:56:26.640737 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d74f4d695-c74zk" event={"ID":"861ab488-585c-407a-bfe6-97e8d01f20e6","Type":"ContainerStarted","Data":"69a8086b05bb5bc15d881be9f8c298527c5084dee0de9a399942214d63ce8c80"}
Sep 29 09:56:26 crc kubenswrapper[4991]: I0929 09:56:26.669422 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-7d74f4d695-c74zk" podStartSLOduration=2.845240763 podStartE2EDuration="51.669395097s" podCreationTimestamp="2025-09-29 09:55:35 +0000 UTC" firstStartedPulling="2025-09-29 09:55:37.629230737 +0000 UTC m=+1073.485158775" lastFinishedPulling="2025-09-29 09:56:26.453385081 +0000 UTC m=+1122.309313109" observedRunningTime="2025-09-29 09:56:26.662794423 +0000 UTC m=+1122.518722471" watchObservedRunningTime="2025-09-29 09:56:26.669395097 +0000 UTC m=+1122.525323135"
Sep 29 09:56:36 crc kubenswrapper[4991]: I0929 09:56:36.240769 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d74f4d695-c74zk"
Sep 29 09:56:36 crc kubenswrapper[4991]: I0929 09:56:36.242730 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d74f4d695-c74zk"
Sep 29 09:56:36 crc kubenswrapper[4991]: I0929 09:56:36.439789 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-695847bc78-v4kx7"
Sep 29 09:56:36 crc kubenswrapper[4991]: I0929 09:56:36.642940 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-56cf9c6b99-vvlp2"
Sep 29 09:56:36 crc kubenswrapper[4991]: I0929 09:56:36.658328 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-687b9cf756-tldms"
Sep 29 09:56:36 crc kubenswrapper[4991]: I0929 09:56:36.658452 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-2bddq"
Sep 29 09:56:36 crc kubenswrapper[4991]: I0929 09:56:36.851143 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5f95c46c78-54kjl"
Sep 29 09:56:36 crc kubenswrapper[4991]: I0929 09:56:36.896464 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-889c789f9-9lbgt"
Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.672144 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-x9j8q"]
Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.674186 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-x9j8q"
Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.677056 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.677220 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.678530 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.681720 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-srv8g"
Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.695662 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-x9j8q"]
Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.734807 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-47csv"]
Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.737205 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-47csv"
Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.745303 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.745627 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-47csv"]
Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.776435 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3b77903-4e96-4c69-9568-19028195e414-config\") pod \"dnsmasq-dns-675f4bcbfc-x9j8q\" (UID: \"e3b77903-4e96-4c69-9568-19028195e414\") " pod="openstack/dnsmasq-dns-675f4bcbfc-x9j8q"
Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.776523 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4jqs\" (UniqueName: \"kubernetes.io/projected/e3b77903-4e96-4c69-9568-19028195e414-kube-api-access-h4jqs\") pod \"dnsmasq-dns-675f4bcbfc-x9j8q\" (UID: \"e3b77903-4e96-4c69-9568-19028195e414\") " pod="openstack/dnsmasq-dns-675f4bcbfc-x9j8q"
Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.877992 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3b77903-4e96-4c69-9568-19028195e414-config\") pod \"dnsmasq-dns-675f4bcbfc-x9j8q\" (UID: \"e3b77903-4e96-4c69-9568-19028195e414\") " pod="openstack/dnsmasq-dns-675f4bcbfc-x9j8q"
Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.878053 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4jqs\" (UniqueName: \"kubernetes.io/projected/e3b77903-4e96-4c69-9568-19028195e414-kube-api-access-h4jqs\") pod \"dnsmasq-dns-675f4bcbfc-x9j8q\" (UID: \"e3b77903-4e96-4c69-9568-19028195e414\") " pod="openstack/dnsmasq-dns-675f4bcbfc-x9j8q"
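The reconciler entries above show the two volumes the kubelet attaches and mounts for dnsmasq-dns-675f4bcbfc-x9j8q: a ConfigMap-backed volume named config (UniqueName kubernetes.io/configmap/<podUID>-config) and a projected service-account token volume kube-api-access-h4jqs. A sketch of what those stanzas look like as corev1 types; the log only names the volumes, so tying the config volume to the "dns" ConfigMap cached by the reflector line above is an assumption:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// dnsmasqVolumes sketches the volume definitions implied by the
// reconciler entries; names are taken from the log, the ConfigMap
// reference is assumed.
func dnsmasqVolumes() []corev1.Volume {
	return []corev1.Volume{
		{
			// UniqueName in the log: kubernetes.io/configmap/<podUID>-config
			Name: "config",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "dns"}, // assumed
				},
			},
		},
		{
			// kube-api-access-* volumes are projected service-account
			// token volumes; the container dumps earlier in this log show
			// them mounted at /var/run/secrets/kubernetes.io/serviceaccount.
			Name: "kube-api-access-h4jqs",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{
						{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
					},
				},
			},
		},
	}
}

func main() {
	for _, v := range dnsmasqVolumes() {
		fmt.Println(v.Name)
	}
}
```
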
pod="openstack/dnsmasq-dns-78dd6ddcc-47csv" Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.878188 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf2fb779-ab2d-4402-a435-ecfd34031058-config\") pod \"dnsmasq-dns-78dd6ddcc-47csv\" (UID: \"cf2fb779-ab2d-4402-a435-ecfd34031058\") " pod="openstack/dnsmasq-dns-78dd6ddcc-47csv" Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.878220 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf2fb779-ab2d-4402-a435-ecfd34031058-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-47csv\" (UID: \"cf2fb779-ab2d-4402-a435-ecfd34031058\") " pod="openstack/dnsmasq-dns-78dd6ddcc-47csv" Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.878926 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3b77903-4e96-4c69-9568-19028195e414-config\") pod \"dnsmasq-dns-675f4bcbfc-x9j8q\" (UID: \"e3b77903-4e96-4c69-9568-19028195e414\") " pod="openstack/dnsmasq-dns-675f4bcbfc-x9j8q" Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.902995 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4jqs\" (UniqueName: \"kubernetes.io/projected/e3b77903-4e96-4c69-9568-19028195e414-kube-api-access-h4jqs\") pod \"dnsmasq-dns-675f4bcbfc-x9j8q\" (UID: \"e3b77903-4e96-4c69-9568-19028195e414\") " pod="openstack/dnsmasq-dns-675f4bcbfc-x9j8q" Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.980282 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf2fb779-ab2d-4402-a435-ecfd34031058-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-47csv\" (UID: \"cf2fb779-ab2d-4402-a435-ecfd34031058\") " pod="openstack/dnsmasq-dns-78dd6ddcc-47csv" Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.980572 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5tmr\" (UniqueName: \"kubernetes.io/projected/cf2fb779-ab2d-4402-a435-ecfd34031058-kube-api-access-v5tmr\") pod \"dnsmasq-dns-78dd6ddcc-47csv\" (UID: \"cf2fb779-ab2d-4402-a435-ecfd34031058\") " pod="openstack/dnsmasq-dns-78dd6ddcc-47csv" Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.980639 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf2fb779-ab2d-4402-a435-ecfd34031058-config\") pod \"dnsmasq-dns-78dd6ddcc-47csv\" (UID: \"cf2fb779-ab2d-4402-a435-ecfd34031058\") " pod="openstack/dnsmasq-dns-78dd6ddcc-47csv" Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.981923 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf2fb779-ab2d-4402-a435-ecfd34031058-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-47csv\" (UID: \"cf2fb779-ab2d-4402-a435-ecfd34031058\") " pod="openstack/dnsmasq-dns-78dd6ddcc-47csv" Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.982260 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf2fb779-ab2d-4402-a435-ecfd34031058-config\") pod \"dnsmasq-dns-78dd6ddcc-47csv\" (UID: \"cf2fb779-ab2d-4402-a435-ecfd34031058\") " pod="openstack/dnsmasq-dns-78dd6ddcc-47csv" Sep 29 09:56:52 crc kubenswrapper[4991]: I0929 09:56:52.992808 4991 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-x9j8q" Sep 29 09:56:53 crc kubenswrapper[4991]: I0929 09:56:53.002842 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5tmr\" (UniqueName: \"kubernetes.io/projected/cf2fb779-ab2d-4402-a435-ecfd34031058-kube-api-access-v5tmr\") pod \"dnsmasq-dns-78dd6ddcc-47csv\" (UID: \"cf2fb779-ab2d-4402-a435-ecfd34031058\") " pod="openstack/dnsmasq-dns-78dd6ddcc-47csv" Sep 29 09:56:53 crc kubenswrapper[4991]: I0929 09:56:53.061816 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-47csv" Sep 29 09:56:53 crc kubenswrapper[4991]: I0929 09:56:53.670217 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-x9j8q"] Sep 29 09:56:53 crc kubenswrapper[4991]: I0929 09:56:53.810788 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-47csv"] Sep 29 09:56:53 crc kubenswrapper[4991]: W0929 09:56:53.818365 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf2fb779_ab2d_4402_a435_ecfd34031058.slice/crio-03b8765614dd75d26b2691c62f14bad952ce8ad9c7fef2be9651dbaf56e5bcb4 WatchSource:0}: Error finding container 03b8765614dd75d26b2691c62f14bad952ce8ad9c7fef2be9651dbaf56e5bcb4: Status 404 returned error can't find the container with id 03b8765614dd75d26b2691c62f14bad952ce8ad9c7fef2be9651dbaf56e5bcb4 Sep 29 09:56:53 crc kubenswrapper[4991]: I0929 09:56:53.919644 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-47csv" event={"ID":"cf2fb779-ab2d-4402-a435-ecfd34031058","Type":"ContainerStarted","Data":"03b8765614dd75d26b2691c62f14bad952ce8ad9c7fef2be9651dbaf56e5bcb4"} Sep 29 09:56:53 crc kubenswrapper[4991]: I0929 09:56:53.922211 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-x9j8q" event={"ID":"e3b77903-4e96-4c69-9568-19028195e414","Type":"ContainerStarted","Data":"4260c1832437e5b01ede32adeab47ae0a969cdcfb3965cc45a2e481d648a510d"} Sep 29 09:56:55 crc kubenswrapper[4991]: I0929 09:56:55.877588 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-x9j8q"] Sep 29 09:56:55 crc kubenswrapper[4991]: I0929 09:56:55.915245 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dfx6g"] Sep 29 09:56:55 crc kubenswrapper[4991]: I0929 09:56:55.917332 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-dfx6g" Sep 29 09:56:55 crc kubenswrapper[4991]: I0929 09:56:55.926929 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dfx6g"] Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.046562 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29bf3b6e-0955-4b0d-a3be-ba817af85588-dns-svc\") pod \"dnsmasq-dns-666b6646f7-dfx6g\" (UID: \"29bf3b6e-0955-4b0d-a3be-ba817af85588\") " pod="openstack/dnsmasq-dns-666b6646f7-dfx6g" Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.046638 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdcxr\" (UniqueName: \"kubernetes.io/projected/29bf3b6e-0955-4b0d-a3be-ba817af85588-kube-api-access-hdcxr\") pod \"dnsmasq-dns-666b6646f7-dfx6g\" (UID: \"29bf3b6e-0955-4b0d-a3be-ba817af85588\") " pod="openstack/dnsmasq-dns-666b6646f7-dfx6g" Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.046792 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29bf3b6e-0955-4b0d-a3be-ba817af85588-config\") pod \"dnsmasq-dns-666b6646f7-dfx6g\" (UID: \"29bf3b6e-0955-4b0d-a3be-ba817af85588\") " pod="openstack/dnsmasq-dns-666b6646f7-dfx6g" Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.148142 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29bf3b6e-0955-4b0d-a3be-ba817af85588-config\") pod \"dnsmasq-dns-666b6646f7-dfx6g\" (UID: \"29bf3b6e-0955-4b0d-a3be-ba817af85588\") " pod="openstack/dnsmasq-dns-666b6646f7-dfx6g" Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.148287 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29bf3b6e-0955-4b0d-a3be-ba817af85588-dns-svc\") pod \"dnsmasq-dns-666b6646f7-dfx6g\" (UID: \"29bf3b6e-0955-4b0d-a3be-ba817af85588\") " pod="openstack/dnsmasq-dns-666b6646f7-dfx6g" Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.148317 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdcxr\" (UniqueName: \"kubernetes.io/projected/29bf3b6e-0955-4b0d-a3be-ba817af85588-kube-api-access-hdcxr\") pod \"dnsmasq-dns-666b6646f7-dfx6g\" (UID: \"29bf3b6e-0955-4b0d-a3be-ba817af85588\") " pod="openstack/dnsmasq-dns-666b6646f7-dfx6g" Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.149561 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29bf3b6e-0955-4b0d-a3be-ba817af85588-config\") pod \"dnsmasq-dns-666b6646f7-dfx6g\" (UID: \"29bf3b6e-0955-4b0d-a3be-ba817af85588\") " pod="openstack/dnsmasq-dns-666b6646f7-dfx6g" Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.149655 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29bf3b6e-0955-4b0d-a3be-ba817af85588-dns-svc\") pod \"dnsmasq-dns-666b6646f7-dfx6g\" (UID: \"29bf3b6e-0955-4b0d-a3be-ba817af85588\") " pod="openstack/dnsmasq-dns-666b6646f7-dfx6g" Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.182773 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdcxr\" (UniqueName: 
\"kubernetes.io/projected/29bf3b6e-0955-4b0d-a3be-ba817af85588-kube-api-access-hdcxr\") pod \"dnsmasq-dns-666b6646f7-dfx6g\" (UID: \"29bf3b6e-0955-4b0d-a3be-ba817af85588\") " pod="openstack/dnsmasq-dns-666b6646f7-dfx6g" Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.209606 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-47csv"] Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.252310 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9tt7r"] Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.253791 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r" Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.261761 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9tt7r"] Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.302387 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-dfx6g" Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.351199 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9811182d-18c6-4f7f-8cf9-426d5f5d3c0c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-9tt7r\" (UID: \"9811182d-18c6-4f7f-8cf9-426d5f5d3c0c\") " pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r" Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.351712 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9811182d-18c6-4f7f-8cf9-426d5f5d3c0c-config\") pod \"dnsmasq-dns-57d769cc4f-9tt7r\" (UID: \"9811182d-18c6-4f7f-8cf9-426d5f5d3c0c\") " pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r" Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.351859 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h59cp\" (UniqueName: \"kubernetes.io/projected/9811182d-18c6-4f7f-8cf9-426d5f5d3c0c-kube-api-access-h59cp\") pod \"dnsmasq-dns-57d769cc4f-9tt7r\" (UID: \"9811182d-18c6-4f7f-8cf9-426d5f5d3c0c\") " pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r" Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.454925 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h59cp\" (UniqueName: \"kubernetes.io/projected/9811182d-18c6-4f7f-8cf9-426d5f5d3c0c-kube-api-access-h59cp\") pod \"dnsmasq-dns-57d769cc4f-9tt7r\" (UID: \"9811182d-18c6-4f7f-8cf9-426d5f5d3c0c\") " pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r" Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.455010 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9811182d-18c6-4f7f-8cf9-426d5f5d3c0c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-9tt7r\" (UID: \"9811182d-18c6-4f7f-8cf9-426d5f5d3c0c\") " pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r" Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.455077 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9811182d-18c6-4f7f-8cf9-426d5f5d3c0c-config\") pod \"dnsmasq-dns-57d769cc4f-9tt7r\" (UID: \"9811182d-18c6-4f7f-8cf9-426d5f5d3c0c\") " pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r" Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.456065 4991 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9811182d-18c6-4f7f-8cf9-426d5f5d3c0c-config\") pod \"dnsmasq-dns-57d769cc4f-9tt7r\" (UID: \"9811182d-18c6-4f7f-8cf9-426d5f5d3c0c\") " pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r" Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.456479 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9811182d-18c6-4f7f-8cf9-426d5f5d3c0c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-9tt7r\" (UID: \"9811182d-18c6-4f7f-8cf9-426d5f5d3c0c\") " pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r" Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.490716 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h59cp\" (UniqueName: \"kubernetes.io/projected/9811182d-18c6-4f7f-8cf9-426d5f5d3c0c-kube-api-access-h59cp\") pod \"dnsmasq-dns-57d769cc4f-9tt7r\" (UID: \"9811182d-18c6-4f7f-8cf9-426d5f5d3c0c\") " pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r" Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.590650 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r" Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.876690 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dfx6g"] Sep 29 09:56:56 crc kubenswrapper[4991]: W0929 09:56:56.885216 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29bf3b6e_0955_4b0d_a3be_ba817af85588.slice/crio-017b8f533cfb9e185d79365d535510d548c491986e997fa4bbc443959ce23063 WatchSource:0}: Error finding container 017b8f533cfb9e185d79365d535510d548c491986e997fa4bbc443959ce23063: Status 404 returned error can't find the container with id 017b8f533cfb9e185d79365d535510d548c491986e997fa4bbc443959ce23063 Sep 29 09:56:56 crc kubenswrapper[4991]: I0929 09:56:56.966862 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-dfx6g" event={"ID":"29bf3b6e-0955-4b0d-a3be-ba817af85588","Type":"ContainerStarted","Data":"017b8f533cfb9e185d79365d535510d548c491986e997fa4bbc443959ce23063"} Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.061756 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.063431 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.066811 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.098594 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.098715 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.098782 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.098975 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.099090 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6lzpf" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.099179 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.098867 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.155758 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9tt7r"] Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.199340 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.199382 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-config-data\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.199403 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.199462 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skqld\" (UniqueName: \"kubernetes.io/projected/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-kube-api-access-skqld\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.199482 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.199559 4991 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.199576 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.199608 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.199633 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.199680 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.199725 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.301184 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.301259 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.301280 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-config-data\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.301299 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.301327 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skqld\" (UniqueName: \"kubernetes.io/projected/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-kube-api-access-skqld\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.301347 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.301376 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.301398 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.301428 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.301452 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.301479 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.302238 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.302249 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc 
kubenswrapper[4991]: I0929 09:56:57.302421 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-config-data\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.302624 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.303133 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.305309 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.307454 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.307462 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.308792 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.313681 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.332828 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skqld\" (UniqueName: \"kubernetes.io/projected/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-kube-api-access-skqld\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.380737 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " 
pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.401165 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.406384 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.409045 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.412528 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.412542 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.412648 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.412708 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.412966 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.413063 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-nph8c" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.416882 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.425645 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.509056 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/30e6fe0c-d910-462d-8181-b99f4b28091f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.509363 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.509403 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.509569 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/30e6fe0c-d910-462d-8181-b99f4b28091f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.509615 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrqn7\" (UniqueName: \"kubernetes.io/projected/30e6fe0c-d910-462d-8181-b99f4b28091f-kube-api-access-mrqn7\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.509745 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.509911 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.510142 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/30e6fe0c-d910-462d-8181-b99f4b28091f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.510230 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/30e6fe0c-d910-462d-8181-b99f4b28091f-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.510275 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/30e6fe0c-d910-462d-8181-b99f4b28091f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.510299 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.612159 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/30e6fe0c-d910-462d-8181-b99f4b28091f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.612223 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/30e6fe0c-d910-462d-8181-b99f4b28091f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.612244 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/30e6fe0c-d910-462d-8181-b99f4b28091f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.612275 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.612336 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/30e6fe0c-d910-462d-8181-b99f4b28091f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.612374 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.612424 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: 
I0929 09:56:57.612457 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/30e6fe0c-d910-462d-8181-b99f4b28091f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.612476 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrqn7\" (UniqueName: \"kubernetes.io/projected/30e6fe0c-d910-462d-8181-b99f4b28091f-kube-api-access-mrqn7\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.612513 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.612554 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.613243 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/30e6fe0c-d910-462d-8181-b99f4b28091f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.613584 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/30e6fe0c-d910-462d-8181-b99f4b28091f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.614269 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/30e6fe0c-d910-462d-8181-b99f4b28091f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.614926 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.615239 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.615505 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.618368 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/30e6fe0c-d910-462d-8181-b99f4b28091f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.620729 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.623554 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/30e6fe0c-d910-462d-8181-b99f4b28091f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.627016 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.639168 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrqn7\" (UniqueName: \"kubernetes.io/projected/30e6fe0c-d910-462d-8181-b99f4b28091f-kube-api-access-mrqn7\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.681885 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.762212 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.894837 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Sep 29 09:56:57 crc kubenswrapper[4991]: I0929 09:56:57.975685 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r" event={"ID":"9811182d-18c6-4f7f-8cf9-426d5f5d3c0c","Type":"ContainerStarted","Data":"27d7e25fabe36e2232719052fdca73c31285a049e6132799c1bc9bc666e8bdb7"} Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.469027 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.470988 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.475804 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.476102 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.476458 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-hsw7n" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.476837 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.477585 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.488017 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.488028 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.567143 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-config-data-default\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.567199 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.567239 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6vwj\" (UniqueName: \"kubernetes.io/projected/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-kube-api-access-h6vwj\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.567323 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-secrets\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.567549 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-kolla-config\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.567635 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" 
Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.567703 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.567765 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.567856 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.669619 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-secrets\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.669701 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-kolla-config\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.669729 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.669749 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.669772 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.669799 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.669834 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-config-data-default\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.669857 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.669887 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6vwj\" (UniqueName: \"kubernetes.io/projected/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-kube-api-access-h6vwj\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.670865 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.671600 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-kolla-config\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.671628 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-config-data-default\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.672294 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.673574 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.677599 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.680714 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc 
kubenswrapper[4991]: I0929 09:56:59.687735 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6vwj\" (UniqueName: \"kubernetes.io/projected/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-kube-api-access-h6vwj\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.688467 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/c4a8f2f9-4e24-4728-bf11-a0b6f50094d1-secrets\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.698069 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1\") " pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.834853 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.867332 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.872418 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.874661 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.876264 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.876441 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.880267 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-2zmq2" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.880479 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.989182 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.989253 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.989303 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8grnd\" (UniqueName: \"kubernetes.io/projected/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-kube-api-access-8grnd\") pod \"openstack-cell1-galera-0\" (UID: 
\"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.989452 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.989654 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.989716 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.989741 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.989787 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:56:59 crc kubenswrapper[4991]: I0929 09:56:59.990940 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.028703 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.030868 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.038011 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.038067 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.039299 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-zgzsg" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.061417 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.093309 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.093352 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.093383 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.093412 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/789431f4-5a0e-4baf-a88e-de5af2905c04-config-data\") pod \"memcached-0\" (UID: \"789431f4-5a0e-4baf-a88e-de5af2905c04\") " pod="openstack/memcached-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.093436 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.093514 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.093537 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.093565 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8grnd\" (UniqueName: 
\"kubernetes.io/projected/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-kube-api-access-8grnd\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.093608 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.093633 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q992b\" (UniqueName: \"kubernetes.io/projected/789431f4-5a0e-4baf-a88e-de5af2905c04-kube-api-access-q992b\") pod \"memcached-0\" (UID: \"789431f4-5a0e-4baf-a88e-de5af2905c04\") " pod="openstack/memcached-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.093655 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/789431f4-5a0e-4baf-a88e-de5af2905c04-kolla-config\") pod \"memcached-0\" (UID: \"789431f4-5a0e-4baf-a88e-de5af2905c04\") " pod="openstack/memcached-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.093672 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/789431f4-5a0e-4baf-a88e-de5af2905c04-memcached-tls-certs\") pod \"memcached-0\" (UID: \"789431f4-5a0e-4baf-a88e-de5af2905c04\") " pod="openstack/memcached-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.093697 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/789431f4-5a0e-4baf-a88e-de5af2905c04-combined-ca-bundle\") pod \"memcached-0\" (UID: \"789431f4-5a0e-4baf-a88e-de5af2905c04\") " pod="openstack/memcached-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.093730 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.094121 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.095548 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.095926 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.096497 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.096624 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.097459 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.098938 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.106440 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.131736 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8grnd\" (UniqueName: \"kubernetes.io/projected/6920a7a6-0725-4512-8b3c-dcf7ba2c8587-kube-api-access-8grnd\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.143522 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"6920a7a6-0725-4512-8b3c-dcf7ba2c8587\") " pod="openstack/openstack-cell1-galera-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.197094 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/789431f4-5a0e-4baf-a88e-de5af2905c04-config-data\") pod \"memcached-0\" (UID: \"789431f4-5a0e-4baf-a88e-de5af2905c04\") " pod="openstack/memcached-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.197232 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q992b\" (UniqueName: \"kubernetes.io/projected/789431f4-5a0e-4baf-a88e-de5af2905c04-kube-api-access-q992b\") pod \"memcached-0\" (UID: \"789431f4-5a0e-4baf-a88e-de5af2905c04\") " pod="openstack/memcached-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.197275 4991 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/789431f4-5a0e-4baf-a88e-de5af2905c04-kolla-config\") pod \"memcached-0\" (UID: \"789431f4-5a0e-4baf-a88e-de5af2905c04\") " pod="openstack/memcached-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.197297 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/789431f4-5a0e-4baf-a88e-de5af2905c04-memcached-tls-certs\") pod \"memcached-0\" (UID: \"789431f4-5a0e-4baf-a88e-de5af2905c04\") " pod="openstack/memcached-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.197327 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/789431f4-5a0e-4baf-a88e-de5af2905c04-combined-ca-bundle\") pod \"memcached-0\" (UID: \"789431f4-5a0e-4baf-a88e-de5af2905c04\") " pod="openstack/memcached-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.198273 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/789431f4-5a0e-4baf-a88e-de5af2905c04-config-data\") pod \"memcached-0\" (UID: \"789431f4-5a0e-4baf-a88e-de5af2905c04\") " pod="openstack/memcached-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.198466 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/789431f4-5a0e-4baf-a88e-de5af2905c04-kolla-config\") pod \"memcached-0\" (UID: \"789431f4-5a0e-4baf-a88e-de5af2905c04\") " pod="openstack/memcached-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.202880 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/789431f4-5a0e-4baf-a88e-de5af2905c04-memcached-tls-certs\") pod \"memcached-0\" (UID: \"789431f4-5a0e-4baf-a88e-de5af2905c04\") " pod="openstack/memcached-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.204045 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/789431f4-5a0e-4baf-a88e-de5af2905c04-combined-ca-bundle\") pod \"memcached-0\" (UID: \"789431f4-5a0e-4baf-a88e-de5af2905c04\") " pod="openstack/memcached-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.217960 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.218631 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q992b\" (UniqueName: \"kubernetes.io/projected/789431f4-5a0e-4baf-a88e-de5af2905c04-kube-api-access-q992b\") pod \"memcached-0\" (UID: \"789431f4-5a0e-4baf-a88e-de5af2905c04\") " pod="openstack/memcached-0" Sep 29 09:57:00 crc kubenswrapper[4991]: I0929 09:57:00.349268 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Sep 29 09:57:02 crc kubenswrapper[4991]: I0929 09:57:02.273778 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Sep 29 09:57:02 crc kubenswrapper[4991]: I0929 09:57:02.278787 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Sep 29 09:57:02 crc kubenswrapper[4991]: I0929 09:57:02.281162 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-wz2wx" Sep 29 09:57:02 crc kubenswrapper[4991]: I0929 09:57:02.291632 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Sep 29 09:57:02 crc kubenswrapper[4991]: I0929 09:57:02.342263 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6hd9\" (UniqueName: \"kubernetes.io/projected/3c45d26d-026d-4f19-88bb-6d1769f32363-kube-api-access-w6hd9\") pod \"kube-state-metrics-0\" (UID: \"3c45d26d-026d-4f19-88bb-6d1769f32363\") " pod="openstack/kube-state-metrics-0" Sep 29 09:57:02 crc kubenswrapper[4991]: I0929 09:57:02.447704 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6hd9\" (UniqueName: \"kubernetes.io/projected/3c45d26d-026d-4f19-88bb-6d1769f32363-kube-api-access-w6hd9\") pod \"kube-state-metrics-0\" (UID: \"3c45d26d-026d-4f19-88bb-6d1769f32363\") " pod="openstack/kube-state-metrics-0" Sep 29 09:57:02 crc kubenswrapper[4991]: I0929 09:57:02.523882 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6hd9\" (UniqueName: \"kubernetes.io/projected/3c45d26d-026d-4f19-88bb-6d1769f32363-kube-api-access-w6hd9\") pod \"kube-state-metrics-0\" (UID: \"3c45d26d-026d-4f19-88bb-6d1769f32363\") " pod="openstack/kube-state-metrics-0" Sep 29 09:57:02 crc kubenswrapper[4991]: I0929 09:57:02.610318 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.054543 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-6584dc9448-s8m2k"] Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.057095 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-6584dc9448-s8m2k" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.059457 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.059640 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-7wqcf" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.099533 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-6584dc9448-s8m2k"] Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.186588 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxdfv\" (UniqueName: \"kubernetes.io/projected/3f0bf97a-8ce8-4c0c-b980-34b76047840f-kube-api-access-kxdfv\") pod \"observability-ui-dashboards-6584dc9448-s8m2k\" (UID: \"3f0bf97a-8ce8-4c0c-b980-34b76047840f\") " pod="openshift-operators/observability-ui-dashboards-6584dc9448-s8m2k" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.186691 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f0bf97a-8ce8-4c0c-b980-34b76047840f-serving-cert\") pod \"observability-ui-dashboards-6584dc9448-s8m2k\" (UID: \"3f0bf97a-8ce8-4c0c-b980-34b76047840f\") " pod="openshift-operators/observability-ui-dashboards-6584dc9448-s8m2k" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.288172 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f0bf97a-8ce8-4c0c-b980-34b76047840f-serving-cert\") pod \"observability-ui-dashboards-6584dc9448-s8m2k\" (UID: \"3f0bf97a-8ce8-4c0c-b980-34b76047840f\") " pod="openshift-operators/observability-ui-dashboards-6584dc9448-s8m2k" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.288312 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxdfv\" (UniqueName: \"kubernetes.io/projected/3f0bf97a-8ce8-4c0c-b980-34b76047840f-kube-api-access-kxdfv\") pod \"observability-ui-dashboards-6584dc9448-s8m2k\" (UID: \"3f0bf97a-8ce8-4c0c-b980-34b76047840f\") " pod="openshift-operators/observability-ui-dashboards-6584dc9448-s8m2k" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.298057 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f0bf97a-8ce8-4c0c-b980-34b76047840f-serving-cert\") pod \"observability-ui-dashboards-6584dc9448-s8m2k\" (UID: \"3f0bf97a-8ce8-4c0c-b980-34b76047840f\") " pod="openshift-operators/observability-ui-dashboards-6584dc9448-s8m2k" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.331597 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxdfv\" (UniqueName: \"kubernetes.io/projected/3f0bf97a-8ce8-4c0c-b980-34b76047840f-kube-api-access-kxdfv\") pod \"observability-ui-dashboards-6584dc9448-s8m2k\" (UID: \"3f0bf97a-8ce8-4c0c-b980-34b76047840f\") " pod="openshift-operators/observability-ui-dashboards-6584dc9448-s8m2k" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.420369 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-6584dc9448-s8m2k" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.520542 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-ccb475956-lg8lb"] Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.522305 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.548231 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-ccb475956-lg8lb"] Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.613502 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.618762 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.622440 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.622665 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.623024 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.623086 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.623246 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-4m5ct" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.637350 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.639285 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.697841 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1a845a00-a346-4871-872d-d336534c049a-oauth-serving-cert\") pod \"console-ccb475956-lg8lb\" (UID: \"1a845a00-a346-4871-872d-d336534c049a\") " pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.697920 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1a845a00-a346-4871-872d-d336534c049a-service-ca\") pod \"console-ccb475956-lg8lb\" (UID: \"1a845a00-a346-4871-872d-d336534c049a\") " pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.698014 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4xfg\" (UniqueName: \"kubernetes.io/projected/1a845a00-a346-4871-872d-d336534c049a-kube-api-access-b4xfg\") pod \"console-ccb475956-lg8lb\" (UID: \"1a845a00-a346-4871-872d-d336534c049a\") " pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.698081 4991 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1a845a00-a346-4871-872d-d336534c049a-console-oauth-config\") pod \"console-ccb475956-lg8lb\" (UID: \"1a845a00-a346-4871-872d-d336534c049a\") " pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.698108 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1a845a00-a346-4871-872d-d336534c049a-console-config\") pod \"console-ccb475956-lg8lb\" (UID: \"1a845a00-a346-4871-872d-d336534c049a\") " pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.698142 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a845a00-a346-4871-872d-d336534c049a-console-serving-cert\") pod \"console-ccb475956-lg8lb\" (UID: \"1a845a00-a346-4871-872d-d336534c049a\") " pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.698158 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a845a00-a346-4871-872d-d336534c049a-trusted-ca-bundle\") pod \"console-ccb475956-lg8lb\" (UID: \"1a845a00-a346-4871-872d-d336534c049a\") " pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.800450 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.800528 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-75025658-19d5-4acf-8d8e-ff5ece66444e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75025658-19d5-4acf-8d8e-ff5ece66444e\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.800707 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1a845a00-a346-4871-872d-d336534c049a-console-oauth-config\") pod \"console-ccb475956-lg8lb\" (UID: \"1a845a00-a346-4871-872d-d336534c049a\") " pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.800991 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1a845a00-a346-4871-872d-d336534c049a-console-config\") pod \"console-ccb475956-lg8lb\" (UID: \"1a845a00-a346-4871-872d-d336534c049a\") " pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.801125 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.801182 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a845a00-a346-4871-872d-d336534c049a-console-serving-cert\") pod \"console-ccb475956-lg8lb\" (UID: \"1a845a00-a346-4871-872d-d336534c049a\") " pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.801209 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a845a00-a346-4871-872d-d336534c049a-trusted-ca-bundle\") pod \"console-ccb475956-lg8lb\" (UID: \"1a845a00-a346-4871-872d-d336534c049a\") " pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.801236 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.801266 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1a845a00-a346-4871-872d-d336534c049a-oauth-serving-cert\") pod \"console-ccb475956-lg8lb\" (UID: \"1a845a00-a346-4871-872d-d336534c049a\") " pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.801293 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.801320 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.801342 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmq6v\" (UniqueName: \"kubernetes.io/projected/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-kube-api-access-fmq6v\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.801426 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1a845a00-a346-4871-872d-d336534c049a-service-ca\") pod \"console-ccb475956-lg8lb\" (UID: \"1a845a00-a346-4871-872d-d336534c049a\") " pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.801556 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4xfg\" (UniqueName: 
\"kubernetes.io/projected/1a845a00-a346-4871-872d-d336534c049a-kube-api-access-b4xfg\") pod \"console-ccb475956-lg8lb\" (UID: \"1a845a00-a346-4871-872d-d336534c049a\") " pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.801612 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-config\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.804191 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1a845a00-a346-4871-872d-d336534c049a-console-config\") pod \"console-ccb475956-lg8lb\" (UID: \"1a845a00-a346-4871-872d-d336534c049a\") " pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.805503 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1a845a00-a346-4871-872d-d336534c049a-oauth-serving-cert\") pod \"console-ccb475956-lg8lb\" (UID: \"1a845a00-a346-4871-872d-d336534c049a\") " pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.806467 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1a845a00-a346-4871-872d-d336534c049a-console-oauth-config\") pod \"console-ccb475956-lg8lb\" (UID: \"1a845a00-a346-4871-872d-d336534c049a\") " pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.806721 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a845a00-a346-4871-872d-d336534c049a-trusted-ca-bundle\") pod \"console-ccb475956-lg8lb\" (UID: \"1a845a00-a346-4871-872d-d336534c049a\") " pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.806926 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a845a00-a346-4871-872d-d336534c049a-console-serving-cert\") pod \"console-ccb475956-lg8lb\" (UID: \"1a845a00-a346-4871-872d-d336534c049a\") " pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.808497 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1a845a00-a346-4871-872d-d336534c049a-service-ca\") pod \"console-ccb475956-lg8lb\" (UID: \"1a845a00-a346-4871-872d-d336534c049a\") " pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.822288 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4xfg\" (UniqueName: \"kubernetes.io/projected/1a845a00-a346-4871-872d-d336534c049a-kube-api-access-b4xfg\") pod \"console-ccb475956-lg8lb\" (UID: \"1a845a00-a346-4871-872d-d336534c049a\") " pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.860281 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.904233 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-config\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.904291 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.904314 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-75025658-19d5-4acf-8d8e-ff5ece66444e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75025658-19d5-4acf-8d8e-ff5ece66444e\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.905025 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.905124 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.905145 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.905164 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.905180 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmq6v\" (UniqueName: \"kubernetes.io/projected/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-kube-api-access-fmq6v\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.909481 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.917165 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-config\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.917329 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.917459 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.917694 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.917715 4991 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.917734 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-75025658-19d5-4acf-8d8e-ff5ece66444e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75025658-19d5-4acf-8d8e-ff5ece66444e\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e60eb26f743f9cd4188eb4a778b023ea193a20b4b8b4bc41849015e0033fa4b7/globalmount\"" pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.919397 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.927886 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmq6v\" (UniqueName: \"kubernetes.io/projected/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-kube-api-access-fmq6v\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:03 crc kubenswrapper[4991]: I0929 09:57:03.963342 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-75025658-19d5-4acf-8d8e-ff5ece66444e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75025658-19d5-4acf-8d8e-ff5ece66444e\") pod \"prometheus-metric-storage-0\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") " pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:04 crc kubenswrapper[4991]: I0929 09:57:04.113672 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"71fdae0b-4fc4-4673-9088-80dd01eb7ce8","Type":"ContainerStarted","Data":"1539f70c79ee412c45b7cb56662426715244ad6fcbddfdd4b62452fbe9d25f88"} Sep 29 09:57:04 crc kubenswrapper[4991]: I0929 09:57:04.243273 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.432170 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zmljr"] Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.433843 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.438394 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.438679 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-xn65p" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.438759 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.442861 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-rr8td"] Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.445764 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.455612 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zmljr"] Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.491564 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rr8td"] Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.573301 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44-var-run\") pod \"ovn-controller-ovs-rr8td\" (UID: \"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44\") " pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.573489 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d70600eb-2ef3-4db2-a1e6-050e76f25e79-scripts\") pod \"ovn-controller-zmljr\" (UID: \"d70600eb-2ef3-4db2-a1e6-050e76f25e79\") " pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.574208 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44-etc-ovs\") pod \"ovn-controller-ovs-rr8td\" (UID: \"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44\") " pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.574268 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44-scripts\") pod \"ovn-controller-ovs-rr8td\" (UID: \"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44\") " pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.574348 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70600eb-2ef3-4db2-a1e6-050e76f25e79-ovn-controller-tls-certs\") pod \"ovn-controller-zmljr\" (UID: \"d70600eb-2ef3-4db2-a1e6-050e76f25e79\") " pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.574419 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70600eb-2ef3-4db2-a1e6-050e76f25e79-combined-ca-bundle\") pod \"ovn-controller-zmljr\" (UID: \"d70600eb-2ef3-4db2-a1e6-050e76f25e79\") " pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.574454 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lkxq\" (UniqueName: \"kubernetes.io/projected/d70600eb-2ef3-4db2-a1e6-050e76f25e79-kube-api-access-7lkxq\") pod \"ovn-controller-zmljr\" (UID: \"d70600eb-2ef3-4db2-a1e6-050e76f25e79\") " pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.574471 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sthlk\" (UniqueName: \"kubernetes.io/projected/11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44-kube-api-access-sthlk\") pod \"ovn-controller-ovs-rr8td\" (UID: \"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44\") " pod="openstack/ovn-controller-ovs-rr8td" Sep 29 
09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.574600 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d70600eb-2ef3-4db2-a1e6-050e76f25e79-var-log-ovn\") pod \"ovn-controller-zmljr\" (UID: \"d70600eb-2ef3-4db2-a1e6-050e76f25e79\") " pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.574638 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d70600eb-2ef3-4db2-a1e6-050e76f25e79-var-run-ovn\") pod \"ovn-controller-zmljr\" (UID: \"d70600eb-2ef3-4db2-a1e6-050e76f25e79\") " pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.574696 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44-var-lib\") pod \"ovn-controller-ovs-rr8td\" (UID: \"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44\") " pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.574764 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d70600eb-2ef3-4db2-a1e6-050e76f25e79-var-run\") pod \"ovn-controller-zmljr\" (UID: \"d70600eb-2ef3-4db2-a1e6-050e76f25e79\") " pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.574841 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44-var-log\") pod \"ovn-controller-ovs-rr8td\" (UID: \"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44\") " pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.676093 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d70600eb-2ef3-4db2-a1e6-050e76f25e79-var-log-ovn\") pod \"ovn-controller-zmljr\" (UID: \"d70600eb-2ef3-4db2-a1e6-050e76f25e79\") " pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.676143 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d70600eb-2ef3-4db2-a1e6-050e76f25e79-var-run-ovn\") pod \"ovn-controller-zmljr\" (UID: \"d70600eb-2ef3-4db2-a1e6-050e76f25e79\") " pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.676179 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44-var-lib\") pod \"ovn-controller-ovs-rr8td\" (UID: \"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44\") " pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.676237 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d70600eb-2ef3-4db2-a1e6-050e76f25e79-var-run\") pod \"ovn-controller-zmljr\" (UID: \"d70600eb-2ef3-4db2-a1e6-050e76f25e79\") " pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.676263 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44-var-log\") pod \"ovn-controller-ovs-rr8td\" (UID: \"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44\") " pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.676297 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44-var-run\") pod \"ovn-controller-ovs-rr8td\" (UID: \"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44\") " pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.676320 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d70600eb-2ef3-4db2-a1e6-050e76f25e79-scripts\") pod \"ovn-controller-zmljr\" (UID: \"d70600eb-2ef3-4db2-a1e6-050e76f25e79\") " pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.676352 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44-etc-ovs\") pod \"ovn-controller-ovs-rr8td\" (UID: \"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44\") " pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.676374 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44-scripts\") pod \"ovn-controller-ovs-rr8td\" (UID: \"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44\") " pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.676410 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70600eb-2ef3-4db2-a1e6-050e76f25e79-ovn-controller-tls-certs\") pod \"ovn-controller-zmljr\" (UID: \"d70600eb-2ef3-4db2-a1e6-050e76f25e79\") " pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.676435 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70600eb-2ef3-4db2-a1e6-050e76f25e79-combined-ca-bundle\") pod \"ovn-controller-zmljr\" (UID: \"d70600eb-2ef3-4db2-a1e6-050e76f25e79\") " pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.676456 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lkxq\" (UniqueName: \"kubernetes.io/projected/d70600eb-2ef3-4db2-a1e6-050e76f25e79-kube-api-access-7lkxq\") pod \"ovn-controller-zmljr\" (UID: \"d70600eb-2ef3-4db2-a1e6-050e76f25e79\") " pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.676471 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sthlk\" (UniqueName: \"kubernetes.io/projected/11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44-kube-api-access-sthlk\") pod \"ovn-controller-ovs-rr8td\" (UID: \"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44\") " pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.676684 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d70600eb-2ef3-4db2-a1e6-050e76f25e79-var-log-ovn\") pod \"ovn-controller-zmljr\" (UID: \"d70600eb-2ef3-4db2-a1e6-050e76f25e79\") " 
pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.676775 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d70600eb-2ef3-4db2-a1e6-050e76f25e79-var-run-ovn\") pod \"ovn-controller-zmljr\" (UID: \"d70600eb-2ef3-4db2-a1e6-050e76f25e79\") " pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.677065 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44-etc-ovs\") pod \"ovn-controller-ovs-rr8td\" (UID: \"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44\") " pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.682371 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70600eb-2ef3-4db2-a1e6-050e76f25e79-ovn-controller-tls-certs\") pod \"ovn-controller-zmljr\" (UID: \"d70600eb-2ef3-4db2-a1e6-050e76f25e79\") " pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.683918 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d70600eb-2ef3-4db2-a1e6-050e76f25e79-var-run\") pod \"ovn-controller-zmljr\" (UID: \"d70600eb-2ef3-4db2-a1e6-050e76f25e79\") " pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.684148 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44-var-run\") pod \"ovn-controller-ovs-rr8td\" (UID: \"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44\") " pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.684283 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44-var-lib\") pod \"ovn-controller-ovs-rr8td\" (UID: \"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44\") " pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.684407 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44-var-log\") pod \"ovn-controller-ovs-rr8td\" (UID: \"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44\") " pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.686128 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44-scripts\") pod \"ovn-controller-ovs-rr8td\" (UID: \"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44\") " pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.687190 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d70600eb-2ef3-4db2-a1e6-050e76f25e79-scripts\") pod \"ovn-controller-zmljr\" (UID: \"d70600eb-2ef3-4db2-a1e6-050e76f25e79\") " pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.699350 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sthlk\" (UniqueName: \"kubernetes.io/projected/11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44-kube-api-access-sthlk\") pod 
\"ovn-controller-ovs-rr8td\" (UID: \"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44\") " pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.713451 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70600eb-2ef3-4db2-a1e6-050e76f25e79-combined-ca-bundle\") pod \"ovn-controller-zmljr\" (UID: \"d70600eb-2ef3-4db2-a1e6-050e76f25e79\") " pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.725233 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lkxq\" (UniqueName: \"kubernetes.io/projected/d70600eb-2ef3-4db2-a1e6-050e76f25e79-kube-api-access-7lkxq\") pod \"ovn-controller-zmljr\" (UID: \"d70600eb-2ef3-4db2-a1e6-050e76f25e79\") " pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.765859 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zmljr" Sep 29 09:57:05 crc kubenswrapper[4991]: I0929 09:57:05.781158 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:06 crc kubenswrapper[4991]: I0929 09:57:06.891750 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Sep 29 09:57:06 crc kubenswrapper[4991]: I0929 09:57:06.897361 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:06 crc kubenswrapper[4991]: I0929 09:57:06.901672 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Sep 29 09:57:06 crc kubenswrapper[4991]: I0929 09:57:06.902171 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Sep 29 09:57:06 crc kubenswrapper[4991]: I0929 09:57:06.902239 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Sep 29 09:57:06 crc kubenswrapper[4991]: I0929 09:57:06.902375 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-nsnrp" Sep 29 09:57:06 crc kubenswrapper[4991]: I0929 09:57:06.902964 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Sep 29 09:57:06 crc kubenswrapper[4991]: I0929 09:57:06.924689 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.001358 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.001418 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b4834ad3-a9f2-47e2-b3ec-6cac39b88fef-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.001457 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c62kx\" (UniqueName: 
\"kubernetes.io/projected/b4834ad3-a9f2-47e2-b3ec-6cac39b88fef-kube-api-access-c62kx\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.001499 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4834ad3-a9f2-47e2-b3ec-6cac39b88fef-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.001553 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4834ad3-a9f2-47e2-b3ec-6cac39b88fef-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.001593 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4834ad3-a9f2-47e2-b3ec-6cac39b88fef-config\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.001637 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4834ad3-a9f2-47e2-b3ec-6cac39b88fef-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.001671 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4834ad3-a9f2-47e2-b3ec-6cac39b88fef-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.103890 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.103981 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b4834ad3-a9f2-47e2-b3ec-6cac39b88fef-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.104067 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c62kx\" (UniqueName: \"kubernetes.io/projected/b4834ad3-a9f2-47e2-b3ec-6cac39b88fef-kube-api-access-c62kx\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.104135 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4834ad3-a9f2-47e2-b3ec-6cac39b88fef-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " 
pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.104257 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4834ad3-a9f2-47e2-b3ec-6cac39b88fef-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.104334 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4834ad3-a9f2-47e2-b3ec-6cac39b88fef-config\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.104404 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4834ad3-a9f2-47e2-b3ec-6cac39b88fef-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.104461 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4834ad3-a9f2-47e2-b3ec-6cac39b88fef-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.105401 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.105810 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b4834ad3-a9f2-47e2-b3ec-6cac39b88fef-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.106615 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4834ad3-a9f2-47e2-b3ec-6cac39b88fef-config\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.106925 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4834ad3-a9f2-47e2-b3ec-6cac39b88fef-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.109707 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4834ad3-a9f2-47e2-b3ec-6cac39b88fef-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.109926 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4834ad3-a9f2-47e2-b3ec-6cac39b88fef-combined-ca-bundle\") pod 
\"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.122609 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4834ad3-a9f2-47e2-b3ec-6cac39b88fef-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.130223 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c62kx\" (UniqueName: \"kubernetes.io/projected/b4834ad3-a9f2-47e2-b3ec-6cac39b88fef-kube-api-access-c62kx\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.150612 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef\") " pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:07 crc kubenswrapper[4991]: I0929 09:57:07.226574 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:08 crc kubenswrapper[4991]: I0929 09:57:08.025447 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.663883 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.667703 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.683580 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.683810 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.686616 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.687084 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-hbvkn" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.695930 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.757062 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/463a171c-5755-4898-a973-f72db6a319f0-config\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.757099 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/463a171c-5755-4898-a973-f72db6a319f0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.757262 4991 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkhq7\" (UniqueName: \"kubernetes.io/projected/463a171c-5755-4898-a973-f72db6a319f0-kube-api-access-kkhq7\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.757346 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/463a171c-5755-4898-a973-f72db6a319f0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.757401 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/463a171c-5755-4898-a973-f72db6a319f0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.757506 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.757525 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/463a171c-5755-4898-a973-f72db6a319f0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.757551 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/463a171c-5755-4898-a973-f72db6a319f0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.859404 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkhq7\" (UniqueName: \"kubernetes.io/projected/463a171c-5755-4898-a973-f72db6a319f0-kube-api-access-kkhq7\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.859486 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/463a171c-5755-4898-a973-f72db6a319f0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.859587 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/463a171c-5755-4898-a973-f72db6a319f0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.859712 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod 
\"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.859733 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/463a171c-5755-4898-a973-f72db6a319f0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.859767 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/463a171c-5755-4898-a973-f72db6a319f0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.859814 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/463a171c-5755-4898-a973-f72db6a319f0-config\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.859835 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/463a171c-5755-4898-a973-f72db6a319f0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.860512 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.860679 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/463a171c-5755-4898-a973-f72db6a319f0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.861325 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/463a171c-5755-4898-a973-f72db6a319f0-config\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.861334 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/463a171c-5755-4898-a973-f72db6a319f0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.869086 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/463a171c-5755-4898-a973-f72db6a319f0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.869595 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/463a171c-5755-4898-a973-f72db6a319f0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.869721 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/463a171c-5755-4898-a973-f72db6a319f0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.876291 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkhq7\" (UniqueName: \"kubernetes.io/projected/463a171c-5755-4898-a973-f72db6a319f0-kube-api-access-kkhq7\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:09 crc kubenswrapper[4991]: I0929 09:57:09.890750 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"463a171c-5755-4898-a973-f72db6a319f0\") " pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:10 crc kubenswrapper[4991]: I0929 09:57:10.026939 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Sep 29 09:57:13 crc kubenswrapper[4991]: E0929 09:57:13.620679 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Sep 29 09:57:13 crc kubenswrapper[4991]: E0929 09:57:13.621415 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h4jqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-x9j8q_openstack(e3b77903-4e96-4c69-9568-19028195e414): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 29 09:57:13 crc kubenswrapper[4991]: E0929 09:57:13.622733 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-x9j8q" podUID="e3b77903-4e96-4c69-9568-19028195e414" Sep 29 09:57:13 crc kubenswrapper[4991]: E0929 09:57:13.626292 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Sep 29 09:57:13 crc kubenswrapper[4991]: E0929 09:57:13.626556 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hdcxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-dfx6g_openstack(29bf3b6e-0955-4b0d-a3be-ba817af85588): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 29 09:57:13 crc kubenswrapper[4991]: E0929 09:57:13.627816 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-dfx6g" podUID="29bf3b6e-0955-4b0d-a3be-ba817af85588" Sep 29 09:57:13 crc kubenswrapper[4991]: E0929 09:57:13.630198 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Sep 29 09:57:13 crc kubenswrapper[4991]: E0929 09:57:13.630336 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v5tmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-47csv_openstack(cf2fb779-ab2d-4402-a435-ecfd34031058): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 29 09:57:13 crc kubenswrapper[4991]: E0929 09:57:13.631518 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-47csv" podUID="cf2fb779-ab2d-4402-a435-ecfd34031058" Sep 29 09:57:14 crc kubenswrapper[4991]: I0929 09:57:14.241574 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6920a7a6-0725-4512-8b3c-dcf7ba2c8587","Type":"ContainerStarted","Data":"88e936a21df7adae6f80df1ba46f30dfb5ff4d6a23cc586e69c79732c71924a0"} Sep 29 09:57:14 crc kubenswrapper[4991]: E0929 09:57:14.245380 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-dfx6g" podUID="29bf3b6e-0955-4b0d-a3be-ba817af85588" Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.007317 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-x9j8q" Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.011257 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-47csv" Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.073146 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf2fb779-ab2d-4402-a435-ecfd34031058-config\") pod \"cf2fb779-ab2d-4402-a435-ecfd34031058\" (UID: \"cf2fb779-ab2d-4402-a435-ecfd34031058\") " Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.073564 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3b77903-4e96-4c69-9568-19028195e414-config\") pod \"e3b77903-4e96-4c69-9568-19028195e414\" (UID: \"e3b77903-4e96-4c69-9568-19028195e414\") " Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.073615 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf2fb779-ab2d-4402-a435-ecfd34031058-dns-svc\") pod \"cf2fb779-ab2d-4402-a435-ecfd34031058\" (UID: \"cf2fb779-ab2d-4402-a435-ecfd34031058\") " Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.073734 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5tmr\" (UniqueName: \"kubernetes.io/projected/cf2fb779-ab2d-4402-a435-ecfd34031058-kube-api-access-v5tmr\") pod \"cf2fb779-ab2d-4402-a435-ecfd34031058\" (UID: \"cf2fb779-ab2d-4402-a435-ecfd34031058\") " Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.074190 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4jqs\" (UniqueName: \"kubernetes.io/projected/e3b77903-4e96-4c69-9568-19028195e414-kube-api-access-h4jqs\") pod \"e3b77903-4e96-4c69-9568-19028195e414\" (UID: \"e3b77903-4e96-4c69-9568-19028195e414\") " Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.074408 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf2fb779-ab2d-4402-a435-ecfd34031058-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf2fb779-ab2d-4402-a435-ecfd34031058" (UID: "cf2fb779-ab2d-4402-a435-ecfd34031058"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.074825 4991 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf2fb779-ab2d-4402-a435-ecfd34031058-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.075183 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3b77903-4e96-4c69-9568-19028195e414-config" (OuterVolumeSpecName: "config") pod "e3b77903-4e96-4c69-9568-19028195e414" (UID: "e3b77903-4e96-4c69-9568-19028195e414"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.075520 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf2fb779-ab2d-4402-a435-ecfd34031058-config" (OuterVolumeSpecName: "config") pod "cf2fb779-ab2d-4402-a435-ecfd34031058" (UID: "cf2fb779-ab2d-4402-a435-ecfd34031058"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.084991 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf2fb779-ab2d-4402-a435-ecfd34031058-kube-api-access-v5tmr" (OuterVolumeSpecName: "kube-api-access-v5tmr") pod "cf2fb779-ab2d-4402-a435-ecfd34031058" (UID: "cf2fb779-ab2d-4402-a435-ecfd34031058"). InnerVolumeSpecName "kube-api-access-v5tmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.086600 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b77903-4e96-4c69-9568-19028195e414-kube-api-access-h4jqs" (OuterVolumeSpecName: "kube-api-access-h4jqs") pod "e3b77903-4e96-4c69-9568-19028195e414" (UID: "e3b77903-4e96-4c69-9568-19028195e414"). InnerVolumeSpecName "kube-api-access-h4jqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.177509 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4jqs\" (UniqueName: \"kubernetes.io/projected/e3b77903-4e96-4c69-9568-19028195e414-kube-api-access-h4jqs\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.177565 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf2fb779-ab2d-4402-a435-ecfd34031058-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.177579 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3b77903-4e96-4c69-9568-19028195e414-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.177610 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5tmr\" (UniqueName: \"kubernetes.io/projected/cf2fb779-ab2d-4402-a435-ecfd34031058-kube-api-access-v5tmr\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.258295 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-47csv" event={"ID":"cf2fb779-ab2d-4402-a435-ecfd34031058","Type":"ContainerDied","Data":"03b8765614dd75d26b2691c62f14bad952ce8ad9c7fef2be9651dbaf56e5bcb4"} Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.258400 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-47csv" Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.268168 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-x9j8q" event={"ID":"e3b77903-4e96-4c69-9568-19028195e414","Type":"ContainerDied","Data":"4260c1832437e5b01ede32adeab47ae0a969cdcfb3965cc45a2e481d648a510d"} Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.268268 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-x9j8q" Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.327413 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-x9j8q"] Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.335625 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-x9j8q"] Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.367253 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-47csv"] Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.374897 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-47csv"] Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.876505 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Sep 29 09:57:15 crc kubenswrapper[4991]: I0929 09:57:15.886776 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 29 09:57:15 crc kubenswrapper[4991]: W0929 09:57:15.898254 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c45d26d_026d_4f19_88bb_6d1769f32363.slice/crio-c2d21888ac18b9d8af9741c0dcdce2658d3f79ccf1ac0568a51eebed3a361bcb WatchSource:0}: Error finding container c2d21888ac18b9d8af9741c0dcdce2658d3f79ccf1ac0568a51eebed3a361bcb: Status 404 returned error can't find the container with id c2d21888ac18b9d8af9741c0dcdce2658d3f79ccf1ac0568a51eebed3a361bcb Sep 29 09:57:16 crc kubenswrapper[4991]: I0929 09:57:16.060900 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Sep 29 09:57:16 crc kubenswrapper[4991]: I0929 09:57:16.073240 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Sep 29 09:57:16 crc kubenswrapper[4991]: W0929 09:57:16.079328 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod789431f4_5a0e_4baf_a88e_de5af2905c04.slice/crio-c4537e1675262c144cc1986c6eaf69576ce57c323ba7f1e8134ca92e8ef906fc WatchSource:0}: Error finding container c4537e1675262c144cc1986c6eaf69576ce57c323ba7f1e8134ca92e8ef906fc: Status 404 returned error can't find the container with id c4537e1675262c144cc1986c6eaf69576ce57c323ba7f1e8134ca92e8ef906fc Sep 29 09:57:16 crc kubenswrapper[4991]: W0929 09:57:16.081789 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4a8f2f9_4e24_4728_bf11_a0b6f50094d1.slice/crio-70cee955acd6cb0938aa36feca3d26234e428ee8a8048c51609277911b488b3a WatchSource:0}: Error finding container 70cee955acd6cb0938aa36feca3d26234e428ee8a8048c51609277911b488b3a: Status 404 returned error can't find the container with id 70cee955acd6cb0938aa36feca3d26234e428ee8a8048c51609277911b488b3a Sep 29 09:57:16 crc kubenswrapper[4991]: I0929 09:57:16.280983 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"71fdae0b-4fc4-4673-9088-80dd01eb7ce8","Type":"ContainerStarted","Data":"7910fd79a1c65359a5d7994d2bfbc4276d57b7a7ff1a657124cf305e57695eac"} Sep 29 09:57:16 crc kubenswrapper[4991]: I0929 09:57:16.284449 4991 generic.go:334] "Generic (PLEG): container finished" podID="9811182d-18c6-4f7f-8cf9-426d5f5d3c0c" containerID="298cb352ec916c74ded5d9a2041c177a7c818c26e7fd91a1b7a389e468c6d334" exitCode=0 Sep 29 09:57:16 crc 
kubenswrapper[4991]: I0929 09:57:16.284525 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r" event={"ID":"9811182d-18c6-4f7f-8cf9-426d5f5d3c0c","Type":"ContainerDied","Data":"298cb352ec916c74ded5d9a2041c177a7c818c26e7fd91a1b7a389e468c6d334"} Sep 29 09:57:16 crc kubenswrapper[4991]: I0929 09:57:16.286236 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3c45d26d-026d-4f19-88bb-6d1769f32363","Type":"ContainerStarted","Data":"c2d21888ac18b9d8af9741c0dcdce2658d3f79ccf1ac0568a51eebed3a361bcb"} Sep 29 09:57:16 crc kubenswrapper[4991]: I0929 09:57:16.289130 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"30e6fe0c-d910-462d-8181-b99f4b28091f","Type":"ContainerStarted","Data":"ed978ee5eb9dc72c373392ddcedd6ab53699ab604b87eb2c882a1b37204a079d"} Sep 29 09:57:16 crc kubenswrapper[4991]: I0929 09:57:16.289183 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"30e6fe0c-d910-462d-8181-b99f4b28091f","Type":"ContainerStarted","Data":"332cbbf011fc1182d497b8480f43fc5d156983bda64a52f16bfd8f7fff0d7ef1"} Sep 29 09:57:16 crc kubenswrapper[4991]: I0929 09:57:16.298260 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1","Type":"ContainerStarted","Data":"70cee955acd6cb0938aa36feca3d26234e428ee8a8048c51609277911b488b3a"} Sep 29 09:57:16 crc kubenswrapper[4991]: I0929 09:57:16.302435 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Sep 29 09:57:16 crc kubenswrapper[4991]: I0929 09:57:16.305052 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"789431f4-5a0e-4baf-a88e-de5af2905c04","Type":"ContainerStarted","Data":"c4537e1675262c144cc1986c6eaf69576ce57c323ba7f1e8134ca92e8ef906fc"} Sep 29 09:57:16 crc kubenswrapper[4991]: I0929 09:57:16.391718 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-ccb475956-lg8lb"] Sep 29 09:57:16 crc kubenswrapper[4991]: I0929 09:57:16.437811 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Sep 29 09:57:16 crc kubenswrapper[4991]: W0929 09:57:16.450069 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4834ad3_a9f2_47e2_b3ec_6cac39b88fef.slice/crio-9b13080427184ec73504d306d0a8a419889aca857a37f5573988a2b5bac71d76 WatchSource:0}: Error finding container 9b13080427184ec73504d306d0a8a419889aca857a37f5573988a2b5bac71d76: Status 404 returned error can't find the container with id 9b13080427184ec73504d306d0a8a419889aca857a37f5573988a2b5bac71d76 Sep 29 09:57:16 crc kubenswrapper[4991]: I0929 09:57:16.509050 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rr8td"] Sep 29 09:57:16 crc kubenswrapper[4991]: I0929 09:57:16.623290 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zmljr"] Sep 29 09:57:16 crc kubenswrapper[4991]: I0929 09:57:16.631319 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-6584dc9448-s8m2k"] Sep 29 09:57:16 crc kubenswrapper[4991]: I0929 09:57:16.641390 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Sep 29 09:57:16 crc kubenswrapper[4991]: W0929 
09:57:16.865118 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd70600eb_2ef3_4db2_a1e6_050e76f25e79.slice/crio-90031fa06787bd7cfe2cd799a5435b34a9ff13b2ed42a5f3729af2aefdd9f8dd WatchSource:0}: Error finding container 90031fa06787bd7cfe2cd799a5435b34a9ff13b2ed42a5f3729af2aefdd9f8dd: Status 404 returned error can't find the container with id 90031fa06787bd7cfe2cd799a5435b34a9ff13b2ed42a5f3729af2aefdd9f8dd Sep 29 09:57:16 crc kubenswrapper[4991]: W0929 09:57:16.878985 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc34c9dcd_aa7a_4cf8_9295_fb6d85190123.slice/crio-ae085f0820edb853ce053b88c170a1d847abd48a44efc4feed1011b3a0a4de49 WatchSource:0}: Error finding container ae085f0820edb853ce053b88c170a1d847abd48a44efc4feed1011b3a0a4de49: Status 404 returned error can't find the container with id ae085f0820edb853ce053b88c170a1d847abd48a44efc4feed1011b3a0a4de49 Sep 29 09:57:16 crc kubenswrapper[4991]: I0929 09:57:16.939690 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf2fb779-ab2d-4402-a435-ecfd34031058" path="/var/lib/kubelet/pods/cf2fb779-ab2d-4402-a435-ecfd34031058/volumes" Sep 29 09:57:16 crc kubenswrapper[4991]: I0929 09:57:16.940201 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3b77903-4e96-4c69-9568-19028195e414" path="/var/lib/kubelet/pods/e3b77903-4e96-4c69-9568-19028195e414/volumes" Sep 29 09:57:17 crc kubenswrapper[4991]: I0929 09:57:17.366127 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rr8td" event={"ID":"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44","Type":"ContainerStarted","Data":"272f836817306caaa03aa05efadc09eb4b427e55b261a34819b6710b34b797e5"} Sep 29 09:57:17 crc kubenswrapper[4991]: I0929 09:57:17.379911 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-ccb475956-lg8lb" event={"ID":"1a845a00-a346-4871-872d-d336534c049a","Type":"ContainerStarted","Data":"02d3485cb6acc6bc4ca8ddbe5d8e2ecf6ab9bc5a96f691e844fe12b9505c741e"} Sep 29 09:57:17 crc kubenswrapper[4991]: I0929 09:57:17.380085 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-ccb475956-lg8lb" event={"ID":"1a845a00-a346-4871-872d-d336534c049a","Type":"ContainerStarted","Data":"684d322419160b886b97742c0a7c0025c2aad42456cf7f233d367db4dfe44d26"} Sep 29 09:57:17 crc kubenswrapper[4991]: I0929 09:57:17.405041 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"463a171c-5755-4898-a973-f72db6a319f0","Type":"ContainerStarted","Data":"e3befdb25e1760ebf262ac45c94130b1966cb8cc7de37ffa8f605b2a9491b0f2"} Sep 29 09:57:17 crc kubenswrapper[4991]: I0929 09:57:17.413989 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-ccb475956-lg8lb" podStartSLOduration=14.413971641 podStartE2EDuration="14.413971641s" podCreationTimestamp="2025-09-29 09:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:57:17.409461932 +0000 UTC m=+1173.265389970" watchObservedRunningTime="2025-09-29 09:57:17.413971641 +0000 UTC m=+1173.269899669" Sep 29 09:57:17 crc kubenswrapper[4991]: I0929 09:57:17.416840 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r" 
event={"ID":"9811182d-18c6-4f7f-8cf9-426d5f5d3c0c","Type":"ContainerStarted","Data":"1e3cf3f3154016fade1fd3ed4636d9c039b484b378cef310f70e9738d2c558ca"} Sep 29 09:57:17 crc kubenswrapper[4991]: I0929 09:57:17.417145 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r" Sep 29 09:57:17 crc kubenswrapper[4991]: I0929 09:57:17.426178 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c34c9dcd-aa7a-4cf8-9295-fb6d85190123","Type":"ContainerStarted","Data":"ae085f0820edb853ce053b88c170a1d847abd48a44efc4feed1011b3a0a4de49"} Sep 29 09:57:17 crc kubenswrapper[4991]: I0929 09:57:17.435025 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zmljr" event={"ID":"d70600eb-2ef3-4db2-a1e6-050e76f25e79","Type":"ContainerStarted","Data":"90031fa06787bd7cfe2cd799a5435b34a9ff13b2ed42a5f3729af2aefdd9f8dd"} Sep 29 09:57:17 crc kubenswrapper[4991]: I0929 09:57:17.442322 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r" podStartSLOduration=3.619558249 podStartE2EDuration="21.442300795s" podCreationTimestamp="2025-09-29 09:56:56 +0000 UTC" firstStartedPulling="2025-09-29 09:56:57.163270263 +0000 UTC m=+1153.019198291" lastFinishedPulling="2025-09-29 09:57:14.986012809 +0000 UTC m=+1170.841940837" observedRunningTime="2025-09-29 09:57:17.437586061 +0000 UTC m=+1173.293514089" watchObservedRunningTime="2025-09-29 09:57:17.442300795 +0000 UTC m=+1173.298228823" Sep 29 09:57:17 crc kubenswrapper[4991]: I0929 09:57:17.451208 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-6584dc9448-s8m2k" event={"ID":"3f0bf97a-8ce8-4c0c-b980-34b76047840f","Type":"ContainerStarted","Data":"dd8a2d81418ab3672cf21be7f2fb5c66e67e091f6fffd4573056a091712d5e51"} Sep 29 09:57:17 crc kubenswrapper[4991]: I0929 09:57:17.464849 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef","Type":"ContainerStarted","Data":"9b13080427184ec73504d306d0a8a419889aca857a37f5573988a2b5bac71d76"} Sep 29 09:57:21 crc kubenswrapper[4991]: I0929 09:57:21.593424 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r" Sep 29 09:57:21 crc kubenswrapper[4991]: I0929 09:57:21.648626 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dfx6g"] Sep 29 09:57:23 crc kubenswrapper[4991]: I0929 09:57:23.136593 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-dfx6g" Sep 29 09:57:23 crc kubenswrapper[4991]: I0929 09:57:23.284702 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29bf3b6e-0955-4b0d-a3be-ba817af85588-config\") pod \"29bf3b6e-0955-4b0d-a3be-ba817af85588\" (UID: \"29bf3b6e-0955-4b0d-a3be-ba817af85588\") " Sep 29 09:57:23 crc kubenswrapper[4991]: I0929 09:57:23.284898 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29bf3b6e-0955-4b0d-a3be-ba817af85588-dns-svc\") pod \"29bf3b6e-0955-4b0d-a3be-ba817af85588\" (UID: \"29bf3b6e-0955-4b0d-a3be-ba817af85588\") " Sep 29 09:57:23 crc kubenswrapper[4991]: I0929 09:57:23.284919 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdcxr\" (UniqueName: \"kubernetes.io/projected/29bf3b6e-0955-4b0d-a3be-ba817af85588-kube-api-access-hdcxr\") pod \"29bf3b6e-0955-4b0d-a3be-ba817af85588\" (UID: \"29bf3b6e-0955-4b0d-a3be-ba817af85588\") " Sep 29 09:57:23 crc kubenswrapper[4991]: I0929 09:57:23.285368 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29bf3b6e-0955-4b0d-a3be-ba817af85588-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "29bf3b6e-0955-4b0d-a3be-ba817af85588" (UID: "29bf3b6e-0955-4b0d-a3be-ba817af85588"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:57:23 crc kubenswrapper[4991]: I0929 09:57:23.285364 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29bf3b6e-0955-4b0d-a3be-ba817af85588-config" (OuterVolumeSpecName: "config") pod "29bf3b6e-0955-4b0d-a3be-ba817af85588" (UID: "29bf3b6e-0955-4b0d-a3be-ba817af85588"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:57:23 crc kubenswrapper[4991]: I0929 09:57:23.289201 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29bf3b6e-0955-4b0d-a3be-ba817af85588-kube-api-access-hdcxr" (OuterVolumeSpecName: "kube-api-access-hdcxr") pod "29bf3b6e-0955-4b0d-a3be-ba817af85588" (UID: "29bf3b6e-0955-4b0d-a3be-ba817af85588"). InnerVolumeSpecName "kube-api-access-hdcxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:57:23 crc kubenswrapper[4991]: I0929 09:57:23.388128 4991 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29bf3b6e-0955-4b0d-a3be-ba817af85588-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:23 crc kubenswrapper[4991]: I0929 09:57:23.388168 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdcxr\" (UniqueName: \"kubernetes.io/projected/29bf3b6e-0955-4b0d-a3be-ba817af85588-kube-api-access-hdcxr\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:23 crc kubenswrapper[4991]: I0929 09:57:23.388183 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29bf3b6e-0955-4b0d-a3be-ba817af85588-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:23 crc kubenswrapper[4991]: I0929 09:57:23.527676 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-dfx6g" event={"ID":"29bf3b6e-0955-4b0d-a3be-ba817af85588","Type":"ContainerDied","Data":"017b8f533cfb9e185d79365d535510d548c491986e997fa4bbc443959ce23063"} Sep 29 09:57:23 crc kubenswrapper[4991]: I0929 09:57:23.528042 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-dfx6g" Sep 29 09:57:23 crc kubenswrapper[4991]: I0929 09:57:23.609995 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dfx6g"] Sep 29 09:57:23 crc kubenswrapper[4991]: I0929 09:57:23.618668 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dfx6g"] Sep 29 09:57:23 crc kubenswrapper[4991]: I0929 09:57:23.861056 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:23 crc kubenswrapper[4991]: I0929 09:57:23.861111 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:23 crc kubenswrapper[4991]: I0929 09:57:23.866018 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:24 crc kubenswrapper[4991]: I0929 09:57:24.540205 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"463a171c-5755-4898-a973-f72db6a319f0","Type":"ContainerStarted","Data":"a225def92625e05f0bb4e29e11575f676b9cee0dbbf51cf268091eedb14f8675"} Sep 29 09:57:24 crc kubenswrapper[4991]: I0929 09:57:24.542368 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6920a7a6-0725-4512-8b3c-dcf7ba2c8587","Type":"ContainerStarted","Data":"1e593838bc1dca6335719b3c77d1da6d91e1d80047fff396b08c8aa63267bcb6"} Sep 29 09:57:24 crc kubenswrapper[4991]: I0929 09:57:24.543927 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef","Type":"ContainerStarted","Data":"34081dc4684c30e897780729c8b6c571f680dbe59ddb69f070a11dc5d6f3e81d"} Sep 29 09:57:24 crc kubenswrapper[4991]: I0929 09:57:24.545263 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rr8td" event={"ID":"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44","Type":"ContainerStarted","Data":"e0d1e802fe2e27f75170e813c6b4352d83308aab467aa48d496f66b0a34d9b50"} Sep 29 09:57:24 crc kubenswrapper[4991]: I0929 09:57:24.546681 4991 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"789431f4-5a0e-4baf-a88e-de5af2905c04","Type":"ContainerStarted","Data":"fef785aae905ab2ad18912b02f246193bd38bb41508dcc0aad52d8ce8ff089d4"} Sep 29 09:57:24 crc kubenswrapper[4991]: I0929 09:57:24.546989 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Sep 29 09:57:24 crc kubenswrapper[4991]: I0929 09:57:24.548829 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3c45d26d-026d-4f19-88bb-6d1769f32363","Type":"ContainerStarted","Data":"87394e143dd7a24f7dc7cd6be5e439d335ea31070a16464add70e47e5effa13a"} Sep 29 09:57:24 crc kubenswrapper[4991]: I0929 09:57:24.548883 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Sep 29 09:57:24 crc kubenswrapper[4991]: I0929 09:57:24.551040 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zmljr" event={"ID":"d70600eb-2ef3-4db2-a1e6-050e76f25e79","Type":"ContainerStarted","Data":"585b471e9883420ce73d67554eced322211be0029b7abd4e205788ae399f9063"} Sep 29 09:57:24 crc kubenswrapper[4991]: I0929 09:57:24.551234 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-zmljr" Sep 29 09:57:24 crc kubenswrapper[4991]: I0929 09:57:24.552922 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-6584dc9448-s8m2k" event={"ID":"3f0bf97a-8ce8-4c0c-b980-34b76047840f","Type":"ContainerStarted","Data":"b939f9db59ea550ea8660a8440081124c0dff069eb8bcf6a24da3258c9efbd64"} Sep 29 09:57:24 crc kubenswrapper[4991]: I0929 09:57:24.556551 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1","Type":"ContainerStarted","Data":"98a6a5bc9d22666f58d5d22db421a90f4ccfdeb38bb64296c28872fc9c45d7fd"} Sep 29 09:57:24 crc kubenswrapper[4991]: I0929 09:57:24.560393 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-ccb475956-lg8lb" Sep 29 09:57:24 crc kubenswrapper[4991]: I0929 09:57:24.634135 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7fbf9bb75f-prqdm"] Sep 29 09:57:24 crc kubenswrapper[4991]: I0929 09:57:24.756527 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=17.836266956 podStartE2EDuration="24.756505148s" podCreationTimestamp="2025-09-29 09:57:00 +0000 UTC" firstStartedPulling="2025-09-29 09:57:16.08142391 +0000 UTC m=+1171.937351938" lastFinishedPulling="2025-09-29 09:57:23.001662112 +0000 UTC m=+1178.857590130" observedRunningTime="2025-09-29 09:57:24.726206562 +0000 UTC m=+1180.582134590" watchObservedRunningTime="2025-09-29 09:57:24.756505148 +0000 UTC m=+1180.612433176" Sep 29 09:57:24 crc kubenswrapper[4991]: I0929 09:57:24.766810 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-6584dc9448-s8m2k" podStartSLOduration=16.021529535 podStartE2EDuration="22.766788079s" podCreationTimestamp="2025-09-29 09:57:02 +0000 UTC" firstStartedPulling="2025-09-29 09:57:16.847209121 +0000 UTC m=+1172.703137149" lastFinishedPulling="2025-09-29 09:57:23.592467665 +0000 UTC m=+1179.448395693" observedRunningTime="2025-09-29 09:57:24.740443156 +0000 UTC m=+1180.596371184" watchObservedRunningTime="2025-09-29 09:57:24.766788079 +0000 
UTC m=+1180.622716117" Sep 29 09:57:24 crc kubenswrapper[4991]: I0929 09:57:24.780278 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=14.650145812 podStartE2EDuration="22.780259173s" podCreationTimestamp="2025-09-29 09:57:02 +0000 UTC" firstStartedPulling="2025-09-29 09:57:15.901484762 +0000 UTC m=+1171.757412790" lastFinishedPulling="2025-09-29 09:57:24.031598123 +0000 UTC m=+1179.887526151" observedRunningTime="2025-09-29 09:57:24.771461181 +0000 UTC m=+1180.627389209" watchObservedRunningTime="2025-09-29 09:57:24.780259173 +0000 UTC m=+1180.636187191" Sep 29 09:57:24 crc kubenswrapper[4991]: I0929 09:57:24.792855 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zmljr" podStartSLOduration=13.074558677 podStartE2EDuration="19.792839173s" podCreationTimestamp="2025-09-29 09:57:05 +0000 UTC" firstStartedPulling="2025-09-29 09:57:16.870050811 +0000 UTC m=+1172.725978829" lastFinishedPulling="2025-09-29 09:57:23.588331297 +0000 UTC m=+1179.444259325" observedRunningTime="2025-09-29 09:57:24.789934517 +0000 UTC m=+1180.645862565" watchObservedRunningTime="2025-09-29 09:57:24.792839173 +0000 UTC m=+1180.648767201" Sep 29 09:57:24 crc kubenswrapper[4991]: I0929 09:57:24.943998 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29bf3b6e-0955-4b0d-a3be-ba817af85588" path="/var/lib/kubelet/pods/29bf3b6e-0955-4b0d-a3be-ba817af85588/volumes" Sep 29 09:57:25 crc kubenswrapper[4991]: I0929 09:57:25.567599 4991 generic.go:334] "Generic (PLEG): container finished" podID="11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44" containerID="e0d1e802fe2e27f75170e813c6b4352d83308aab467aa48d496f66b0a34d9b50" exitCode=0 Sep 29 09:57:25 crc kubenswrapper[4991]: I0929 09:57:25.569292 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rr8td" event={"ID":"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44","Type":"ContainerDied","Data":"e0d1e802fe2e27f75170e813c6b4352d83308aab467aa48d496f66b0a34d9b50"} Sep 29 09:57:26 crc kubenswrapper[4991]: I0929 09:57:26.580234 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rr8td" event={"ID":"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44","Type":"ContainerStarted","Data":"c1ac1efaabfed1ba5631c47b15b9a727e27943bd04457662912004732674eaf4"} Sep 29 09:57:27 crc kubenswrapper[4991]: I0929 09:57:27.590348 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c34c9dcd-aa7a-4cf8-9295-fb6d85190123","Type":"ContainerStarted","Data":"418ee5bd99b2ddebf9b9385dedbcb7251673173c4e607d6764df201111c856e2"} Sep 29 09:57:29 crc kubenswrapper[4991]: I0929 09:57:29.622635 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rr8td" event={"ID":"11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44","Type":"ContainerStarted","Data":"c7890acd12b8a6039937823aa9f8e38bede5ddf159c81856b46b16b53b297f7b"} Sep 29 09:57:30 crc kubenswrapper[4991]: I0929 09:57:30.350279 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Sep 29 09:57:30 crc kubenswrapper[4991]: I0929 09:57:30.635125 4991 generic.go:334] "Generic (PLEG): container finished" podID="c4a8f2f9-4e24-4728-bf11-a0b6f50094d1" containerID="98a6a5bc9d22666f58d5d22db421a90f4ccfdeb38bb64296c28872fc9c45d7fd" exitCode=0 Sep 29 09:57:30 crc kubenswrapper[4991]: I0929 09:57:30.635177 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-galera-0" event={"ID":"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1","Type":"ContainerDied","Data":"98a6a5bc9d22666f58d5d22db421a90f4ccfdeb38bb64296c28872fc9c45d7fd"} Sep 29 09:57:30 crc kubenswrapper[4991]: I0929 09:57:30.637539 4991 generic.go:334] "Generic (PLEG): container finished" podID="6920a7a6-0725-4512-8b3c-dcf7ba2c8587" containerID="1e593838bc1dca6335719b3c77d1da6d91e1d80047fff396b08c8aa63267bcb6" exitCode=0 Sep 29 09:57:30 crc kubenswrapper[4991]: I0929 09:57:30.637617 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6920a7a6-0725-4512-8b3c-dcf7ba2c8587","Type":"ContainerDied","Data":"1e593838bc1dca6335719b3c77d1da6d91e1d80047fff396b08c8aa63267bcb6"} Sep 29 09:57:30 crc kubenswrapper[4991]: I0929 09:57:30.637861 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:30 crc kubenswrapper[4991]: I0929 09:57:30.637882 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:30 crc kubenswrapper[4991]: I0929 09:57:30.702277 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-rr8td" podStartSLOduration=18.911266331 podStartE2EDuration="25.702225477s" podCreationTimestamp="2025-09-29 09:57:05 +0000 UTC" firstStartedPulling="2025-09-29 09:57:16.522352455 +0000 UTC m=+1172.378280483" lastFinishedPulling="2025-09-29 09:57:23.313311601 +0000 UTC m=+1179.169239629" observedRunningTime="2025-09-29 09:57:30.693218681 +0000 UTC m=+1186.549146709" watchObservedRunningTime="2025-09-29 09:57:30.702225477 +0000 UTC m=+1186.558153505" Sep 29 09:57:31 crc kubenswrapper[4991]: I0929 09:57:31.646920 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c4a8f2f9-4e24-4728-bf11-a0b6f50094d1","Type":"ContainerStarted","Data":"65d820e2d1e90625ac19bf99f5265dedd3818d7ee491c1f3f7ad4992b116d512"} Sep 29 09:57:31 crc kubenswrapper[4991]: I0929 09:57:31.649248 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b4834ad3-a9f2-47e2-b3ec-6cac39b88fef","Type":"ContainerStarted","Data":"c34ea376eb31f96ce0cb5d7dcfc1fa9d39bbc836364c778a8638a6287dbfeda6"} Sep 29 09:57:31 crc kubenswrapper[4991]: I0929 09:57:31.653466 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"463a171c-5755-4898-a973-f72db6a319f0","Type":"ContainerStarted","Data":"5746197fdf9910ef6e5dadd95a4d18d972ab24edff665266486f8fc2d4a0b7b3"} Sep 29 09:57:31 crc kubenswrapper[4991]: I0929 09:57:31.656842 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6920a7a6-0725-4512-8b3c-dcf7ba2c8587","Type":"ContainerStarted","Data":"5834be2aa9f8a79aa733f52bc74ea14c4508c275c6be6fdf4ae66d2526d760ad"} Sep 29 09:57:31 crc kubenswrapper[4991]: I0929 09:57:31.672756 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.316737364 podStartE2EDuration="33.672740236s" podCreationTimestamp="2025-09-29 09:56:58 +0000 UTC" firstStartedPulling="2025-09-29 09:57:16.085006954 +0000 UTC m=+1171.940934982" lastFinishedPulling="2025-09-29 09:57:23.441009836 +0000 UTC m=+1179.296937854" observedRunningTime="2025-09-29 09:57:31.671147994 +0000 UTC m=+1187.527076042" watchObservedRunningTime="2025-09-29 09:57:31.672740236 +0000 UTC 
m=+1187.528668264" Sep 29 09:57:31 crc kubenswrapper[4991]: I0929 09:57:31.702966 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=12.036127723 podStartE2EDuration="26.702925169s" podCreationTimestamp="2025-09-29 09:57:05 +0000 UTC" firstStartedPulling="2025-09-29 09:57:16.452432618 +0000 UTC m=+1172.308360646" lastFinishedPulling="2025-09-29 09:57:31.119230064 +0000 UTC m=+1186.975158092" observedRunningTime="2025-09-29 09:57:31.690866822 +0000 UTC m=+1187.546794870" watchObservedRunningTime="2025-09-29 09:57:31.702925169 +0000 UTC m=+1187.558853207" Sep 29 09:57:31 crc kubenswrapper[4991]: I0929 09:57:31.717921 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=8.891032432 podStartE2EDuration="23.717903783s" podCreationTimestamp="2025-09-29 09:57:08 +0000 UTC" firstStartedPulling="2025-09-29 09:57:16.359854086 +0000 UTC m=+1172.215782124" lastFinishedPulling="2025-09-29 09:57:31.186725447 +0000 UTC m=+1187.042653475" observedRunningTime="2025-09-29 09:57:31.711528625 +0000 UTC m=+1187.567456663" watchObservedRunningTime="2025-09-29 09:57:31.717903783 +0000 UTC m=+1187.573831811" Sep 29 09:57:31 crc kubenswrapper[4991]: I0929 09:57:31.740087 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=25.110197464 podStartE2EDuration="33.740070455s" podCreationTimestamp="2025-09-29 09:56:58 +0000 UTC" firstStartedPulling="2025-09-29 09:57:13.568053454 +0000 UTC m=+1169.423981482" lastFinishedPulling="2025-09-29 09:57:22.197926445 +0000 UTC m=+1178.053854473" observedRunningTime="2025-09-29 09:57:31.733092062 +0000 UTC m=+1187.589020090" watchObservedRunningTime="2025-09-29 09:57:31.740070455 +0000 UTC m=+1187.595998483" Sep 29 09:57:32 crc kubenswrapper[4991]: I0929 09:57:32.227152 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Sep 29 09:57:32 crc kubenswrapper[4991]: I0929 09:57:32.627304 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Sep 29 09:57:32 crc kubenswrapper[4991]: I0929 09:57:32.729218 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-vkxw7"] Sep 29 09:57:32 crc kubenswrapper[4991]: I0929 09:57:32.730798 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-vkxw7" Sep 29 09:57:32 crc kubenswrapper[4991]: I0929 09:57:32.754089 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-vkxw7"] Sep 29 09:57:32 crc kubenswrapper[4991]: I0929 09:57:32.793267 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53634a95-3483-4012-b946-2c46a10a3eae-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-vkxw7\" (UID: \"53634a95-3483-4012-b946-2c46a10a3eae\") " pod="openstack/dnsmasq-dns-7cb5889db5-vkxw7" Sep 29 09:57:32 crc kubenswrapper[4991]: I0929 09:57:32.793365 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mk9d\" (UniqueName: \"kubernetes.io/projected/53634a95-3483-4012-b946-2c46a10a3eae-kube-api-access-6mk9d\") pod \"dnsmasq-dns-7cb5889db5-vkxw7\" (UID: \"53634a95-3483-4012-b946-2c46a10a3eae\") " pod="openstack/dnsmasq-dns-7cb5889db5-vkxw7" Sep 29 09:57:32 crc kubenswrapper[4991]: I0929 09:57:32.793405 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53634a95-3483-4012-b946-2c46a10a3eae-config\") pod \"dnsmasq-dns-7cb5889db5-vkxw7\" (UID: \"53634a95-3483-4012-b946-2c46a10a3eae\") " pod="openstack/dnsmasq-dns-7cb5889db5-vkxw7" Sep 29 09:57:32 crc kubenswrapper[4991]: I0929 09:57:32.894772 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mk9d\" (UniqueName: \"kubernetes.io/projected/53634a95-3483-4012-b946-2c46a10a3eae-kube-api-access-6mk9d\") pod \"dnsmasq-dns-7cb5889db5-vkxw7\" (UID: \"53634a95-3483-4012-b946-2c46a10a3eae\") " pod="openstack/dnsmasq-dns-7cb5889db5-vkxw7" Sep 29 09:57:32 crc kubenswrapper[4991]: I0929 09:57:32.894843 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53634a95-3483-4012-b946-2c46a10a3eae-config\") pod \"dnsmasq-dns-7cb5889db5-vkxw7\" (UID: \"53634a95-3483-4012-b946-2c46a10a3eae\") " pod="openstack/dnsmasq-dns-7cb5889db5-vkxw7" Sep 29 09:57:32 crc kubenswrapper[4991]: I0929 09:57:32.895063 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53634a95-3483-4012-b946-2c46a10a3eae-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-vkxw7\" (UID: \"53634a95-3483-4012-b946-2c46a10a3eae\") " pod="openstack/dnsmasq-dns-7cb5889db5-vkxw7" Sep 29 09:57:32 crc kubenswrapper[4991]: I0929 09:57:32.896011 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53634a95-3483-4012-b946-2c46a10a3eae-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-vkxw7\" (UID: \"53634a95-3483-4012-b946-2c46a10a3eae\") " pod="openstack/dnsmasq-dns-7cb5889db5-vkxw7" Sep 29 09:57:32 crc kubenswrapper[4991]: I0929 09:57:32.897000 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53634a95-3483-4012-b946-2c46a10a3eae-config\") pod \"dnsmasq-dns-7cb5889db5-vkxw7\" (UID: \"53634a95-3483-4012-b946-2c46a10a3eae\") " pod="openstack/dnsmasq-dns-7cb5889db5-vkxw7" Sep 29 09:57:32 crc kubenswrapper[4991]: I0929 09:57:32.914053 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mk9d\" (UniqueName: 
\"kubernetes.io/projected/53634a95-3483-4012-b946-2c46a10a3eae-kube-api-access-6mk9d\") pod \"dnsmasq-dns-7cb5889db5-vkxw7\" (UID: \"53634a95-3483-4012-b946-2c46a10a3eae\") " pod="openstack/dnsmasq-dns-7cb5889db5-vkxw7" Sep 29 09:57:33 crc kubenswrapper[4991]: I0929 09:57:33.052651 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-vkxw7" Sep 29 09:57:33 crc kubenswrapper[4991]: I0929 09:57:33.539974 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-vkxw7"] Sep 29 09:57:33 crc kubenswrapper[4991]: I0929 09:57:33.675317 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-vkxw7" event={"ID":"53634a95-3483-4012-b946-2c46a10a3eae","Type":"ContainerStarted","Data":"32ec352a04cbcaba4350af1d0593d22c69a310529a57751d2a949889a5afe92f"} Sep 29 09:57:33 crc kubenswrapper[4991]: I0929 09:57:33.839996 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Sep 29 09:57:33 crc kubenswrapper[4991]: I0929 09:57:33.848434 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Sep 29 09:57:33 crc kubenswrapper[4991]: I0929 09:57:33.851881 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Sep 29 09:57:33 crc kubenswrapper[4991]: I0929 09:57:33.854180 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Sep 29 09:57:33 crc kubenswrapper[4991]: I0929 09:57:33.856251 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-h4vsd" Sep 29 09:57:33 crc kubenswrapper[4991]: I0929 09:57:33.856259 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Sep 29 09:57:33 crc kubenswrapper[4991]: I0929 09:57:33.862327 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.018152 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"81d702cb-530c-441d-b686-f205337a2aea\") " pod="openstack/swift-storage-0" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.018196 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw2sx\" (UniqueName: \"kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-kube-api-access-lw2sx\") pod \"swift-storage-0\" (UID: \"81d702cb-530c-441d-b686-f205337a2aea\") " pod="openstack/swift-storage-0" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.018226 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-etc-swift\") pod \"swift-storage-0\" (UID: \"81d702cb-530c-441d-b686-f205337a2aea\") " pod="openstack/swift-storage-0" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.018591 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81d702cb-530c-441d-b686-f205337a2aea-cache\") pod \"swift-storage-0\" (UID: \"81d702cb-530c-441d-b686-f205337a2aea\") " pod="openstack/swift-storage-0" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.027824 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.119995 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"81d702cb-530c-441d-b686-f205337a2aea\") " pod="openstack/swift-storage-0"
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.120062 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw2sx\" (UniqueName: \"kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-kube-api-access-lw2sx\") pod \"swift-storage-0\" (UID: \"81d702cb-530c-441d-b686-f205337a2aea\") " pod="openstack/swift-storage-0"
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.120092 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-etc-swift\") pod \"swift-storage-0\" (UID: \"81d702cb-530c-441d-b686-f205337a2aea\") " pod="openstack/swift-storage-0"
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.120144 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81d702cb-530c-441d-b686-f205337a2aea-cache\") pod \"swift-storage-0\" (UID: \"81d702cb-530c-441d-b686-f205337a2aea\") " pod="openstack/swift-storage-0"
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.120161 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/81d702cb-530c-441d-b686-f205337a2aea-lock\") pod \"swift-storage-0\" (UID: \"81d702cb-530c-441d-b686-f205337a2aea\") " pod="openstack/swift-storage-0"
Sep 29 09:57:34 crc kubenswrapper[4991]: E0929 09:57:34.120313 4991 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Sep 29 09:57:34 crc kubenswrapper[4991]: E0929 09:57:34.120341 4991 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Sep 29 09:57:34 crc kubenswrapper[4991]: E0929 09:57:34.120393 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-etc-swift podName:81d702cb-530c-441d-b686-f205337a2aea nodeName:}" failed. No retries permitted until 2025-09-29 09:57:34.620375937 +0000 UTC m=+1190.476303965 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-etc-swift") pod "swift-storage-0" (UID: "81d702cb-530c-441d-b686-f205337a2aea") : configmap "swift-ring-files" not found
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.120470 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"81d702cb-530c-441d-b686-f205337a2aea\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/swift-storage-0"
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.120611 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/81d702cb-530c-441d-b686-f205337a2aea-lock\") pod \"swift-storage-0\" (UID: \"81d702cb-530c-441d-b686-f205337a2aea\") " pod="openstack/swift-storage-0"
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.120797 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81d702cb-530c-441d-b686-f205337a2aea-cache\") pod \"swift-storage-0\" (UID: \"81d702cb-530c-441d-b686-f205337a2aea\") " pod="openstack/swift-storage-0"
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.155222 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"81d702cb-530c-441d-b686-f205337a2aea\") " pod="openstack/swift-storage-0"
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.163758 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw2sx\" (UniqueName: \"kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-kube-api-access-lw2sx\") pod \"swift-storage-0\" (UID: \"81d702cb-530c-441d-b686-f205337a2aea\") " pod="openstack/swift-storage-0"
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.223356 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.227064 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.272719 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.364398 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-zftlr"]
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.378268 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-zftlr"]
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.378628 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zftlr"
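The failed etc-swift mount is not retried immediately: the nestedpendingoperations entry above requeues it with durationBeforeRetry 500ms, and when the retry fails again further down (at 09:57:34.634522) the next delay is 1s, so the delay doubles per attempt. A minimal sketch of that doubling backoff; the initial delay matches the log, while the cap is illustrative rather than the kubelet's actual limit:

package main

import (
	"fmt"
	"time"
)

// backoffDelays lists the doubling retry delays seen in this log
// (500ms, then 1s, ...), capped at maxDelay.
func backoffDelays(initial, maxDelay time.Duration, attempts int) []time.Duration {
	delays := make([]time.Duration, 0, attempts)
	d := initial
	for i := 0; i < attempts; i++ {
		delays = append(delays, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	return delays
}

func main() {
	fmt.Println(backoffDelays(500*time.Millisecond, 2*time.Minute, 6))
	// Output: [500ms 1s 2s 4s 8s 16s]
}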
Need to start a new one" pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.383356 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.383459 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.383692 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.530097 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cf1a0436-f3cf-4582-9634-58f354d9badf-swiftconf\") pod \"swift-ring-rebalance-zftlr\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.530238 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0436-f3cf-4582-9634-58f354d9badf-etc-swift\") pod \"swift-ring-rebalance-zftlr\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.530306 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cf1a0436-f3cf-4582-9634-58f354d9badf-dispersionconf\") pod \"swift-ring-rebalance-zftlr\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.530348 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf1a0436-f3cf-4582-9634-58f354d9badf-combined-ca-bundle\") pod \"swift-ring-rebalance-zftlr\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.530388 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cf1a0436-f3cf-4582-9634-58f354d9badf-ring-data-devices\") pod \"swift-ring-rebalance-zftlr\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.530418 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf1a0436-f3cf-4582-9634-58f354d9badf-scripts\") pod \"swift-ring-rebalance-zftlr\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.530445 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5bn4\" (UniqueName: \"kubernetes.io/projected/cf1a0436-f3cf-4582-9634-58f354d9badf-kube-api-access-z5bn4\") pod \"swift-ring-rebalance-zftlr\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.632685 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/cf1a0436-f3cf-4582-9634-58f354d9badf-combined-ca-bundle\") pod \"swift-ring-rebalance-zftlr\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.633062 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cf1a0436-f3cf-4582-9634-58f354d9badf-ring-data-devices\") pod \"swift-ring-rebalance-zftlr\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.633203 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf1a0436-f3cf-4582-9634-58f354d9badf-scripts\") pod \"swift-ring-rebalance-zftlr\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.633357 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5bn4\" (UniqueName: \"kubernetes.io/projected/cf1a0436-f3cf-4582-9634-58f354d9badf-kube-api-access-z5bn4\") pod \"swift-ring-rebalance-zftlr\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.633493 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cf1a0436-f3cf-4582-9634-58f354d9badf-swiftconf\") pod \"swift-ring-rebalance-zftlr\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.633707 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0436-f3cf-4582-9634-58f354d9badf-etc-swift\") pod \"swift-ring-rebalance-zftlr\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.633849 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-etc-swift\") pod \"swift-storage-0\" (UID: \"81d702cb-530c-441d-b686-f205337a2aea\") " pod="openstack/swift-storage-0" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.633976 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cf1a0436-f3cf-4582-9634-58f354d9badf-dispersionconf\") pod \"swift-ring-rebalance-zftlr\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: E0929 09:57:34.633927 4991 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 29 09:57:34 crc kubenswrapper[4991]: E0929 09:57:34.634238 4991 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.634337 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cf1a0436-f3cf-4582-9634-58f354d9badf-ring-data-devices\") pod \"swift-ring-rebalance-zftlr\" (UID: 
\"cf1a0436-f3cf-4582-9634-58f354d9badf\") " pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.634330 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0436-f3cf-4582-9634-58f354d9badf-etc-swift\") pod \"swift-ring-rebalance-zftlr\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.633850 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf1a0436-f3cf-4582-9634-58f354d9badf-scripts\") pod \"swift-ring-rebalance-zftlr\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: E0929 09:57:34.634522 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-etc-swift podName:81d702cb-530c-441d-b686-f205337a2aea nodeName:}" failed. No retries permitted until 2025-09-29 09:57:35.634338682 +0000 UTC m=+1191.490266710 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-etc-swift") pod "swift-storage-0" (UID: "81d702cb-530c-441d-b686-f205337a2aea") : configmap "swift-ring-files" not found Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.637862 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cf1a0436-f3cf-4582-9634-58f354d9badf-dispersionconf\") pod \"swift-ring-rebalance-zftlr\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.638703 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf1a0436-f3cf-4582-9634-58f354d9badf-combined-ca-bundle\") pod \"swift-ring-rebalance-zftlr\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.643822 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cf1a0436-f3cf-4582-9634-58f354d9badf-swiftconf\") pod \"swift-ring-rebalance-zftlr\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.654130 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5bn4\" (UniqueName: \"kubernetes.io/projected/cf1a0436-f3cf-4582-9634-58f354d9badf-kube-api-access-z5bn4\") pod \"swift-ring-rebalance-zftlr\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.685330 4991 generic.go:334] "Generic (PLEG): container finished" podID="53634a95-3483-4012-b946-2c46a10a3eae" containerID="85c40be27a96fb9bc8665722558fcd238467a6d4988e41ce172a846df9a98df2" exitCode=0 Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.687253 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-vkxw7" event={"ID":"53634a95-3483-4012-b946-2c46a10a3eae","Type":"ContainerDied","Data":"85c40be27a96fb9bc8665722558fcd238467a6d4988e41ce172a846df9a98df2"} Sep 29 09:57:34 crc 
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.701101 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zftlr"
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.760264 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.771924 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.919642 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-vkxw7"]
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.958256 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-r9nrm"]
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.960903 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-r9nrm"
Sep 29 09:57:34 crc kubenswrapper[4991]: I0929 09:57:34.963453 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.004537 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-r9nrm"]
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.044287 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcff0db7-75fa-4871-bdc8-751157a05300-dns-svc\") pod \"dnsmasq-dns-57d65f699f-r9nrm\" (UID: \"bcff0db7-75fa-4871-bdc8-751157a05300\") " pod="openstack/dnsmasq-dns-57d65f699f-r9nrm"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.044682 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcff0db7-75fa-4871-bdc8-751157a05300-config\") pod \"dnsmasq-dns-57d65f699f-r9nrm\" (UID: \"bcff0db7-75fa-4871-bdc8-751157a05300\") " pod="openstack/dnsmasq-dns-57d65f699f-r9nrm"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.044719 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcff0db7-75fa-4871-bdc8-751157a05300-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-r9nrm\" (UID: \"bcff0db7-75fa-4871-bdc8-751157a05300\") " pod="openstack/dnsmasq-dns-57d65f699f-r9nrm"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.044788 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfbtk\" (UniqueName: \"kubernetes.io/projected/bcff0db7-75fa-4871-bdc8-751157a05300-kube-api-access-rfbtk\") pod \"dnsmasq-dns-57d65f699f-r9nrm\" (UID: \"bcff0db7-75fa-4871-bdc8-751157a05300\") " pod="openstack/dnsmasq-dns-57d65f699f-r9nrm"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.126994 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-r9nrm"]
Sep 29 09:57:35 crc kubenswrapper[4991]: E0929 09:57:35.127990 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-rfbtk ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-57d65f699f-r9nrm" podUID="bcff0db7-75fa-4871-bdc8-751157a05300"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.140476 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-798np"]
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.141914 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-798np"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.144386 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.146125 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcff0db7-75fa-4871-bdc8-751157a05300-config\") pod \"dnsmasq-dns-57d65f699f-r9nrm\" (UID: \"bcff0db7-75fa-4871-bdc8-751157a05300\") " pod="openstack/dnsmasq-dns-57d65f699f-r9nrm"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.146175 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcff0db7-75fa-4871-bdc8-751157a05300-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-r9nrm\" (UID: \"bcff0db7-75fa-4871-bdc8-751157a05300\") " pod="openstack/dnsmasq-dns-57d65f699f-r9nrm"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.146226 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfbtk\" (UniqueName: \"kubernetes.io/projected/bcff0db7-75fa-4871-bdc8-751157a05300-kube-api-access-rfbtk\") pod \"dnsmasq-dns-57d65f699f-r9nrm\" (UID: \"bcff0db7-75fa-4871-bdc8-751157a05300\") " pod="openstack/dnsmasq-dns-57d65f699f-r9nrm"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.146299 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcff0db7-75fa-4871-bdc8-751157a05300-dns-svc\") pod \"dnsmasq-dns-57d65f699f-r9nrm\" (UID: \"bcff0db7-75fa-4871-bdc8-751157a05300\") " pod="openstack/dnsmasq-dns-57d65f699f-r9nrm"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.147412 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcff0db7-75fa-4871-bdc8-751157a05300-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-r9nrm\" (UID: \"bcff0db7-75fa-4871-bdc8-751157a05300\") " pod="openstack/dnsmasq-dns-57d65f699f-r9nrm"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.148217 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcff0db7-75fa-4871-bdc8-751157a05300-config\") pod \"dnsmasq-dns-57d65f699f-r9nrm\" (UID: \"bcff0db7-75fa-4871-bdc8-751157a05300\") " pod="openstack/dnsmasq-dns-57d65f699f-r9nrm"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.152839 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-798np"]
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.154805 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcff0db7-75fa-4871-bdc8-751157a05300-dns-svc\") pod \"dnsmasq-dns-57d65f699f-r9nrm\" (UID: \"bcff0db7-75fa-4871-bdc8-751157a05300\") " pod="openstack/dnsmasq-dns-57d65f699f-r9nrm"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.175744 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfbtk\" (UniqueName: \"kubernetes.io/projected/bcff0db7-75fa-4871-bdc8-751157a05300-kube-api-access-rfbtk\") pod \"dnsmasq-dns-57d65f699f-r9nrm\" (UID: \"bcff0db7-75fa-4871-bdc8-751157a05300\") " pod="openstack/dnsmasq-dns-57d65f699f-r9nrm"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.238098 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-8qbbc"]
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.240678 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.248001 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1abeed70-62ab-4d1b-811b-330c2554c1d9-ovn-rundir\") pod \"ovn-controller-metrics-798np\" (UID: \"1abeed70-62ab-4d1b-811b-330c2554c1d9\") " pod="openstack/ovn-controller-metrics-798np"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.248113 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1abeed70-62ab-4d1b-811b-330c2554c1d9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-798np\" (UID: \"1abeed70-62ab-4d1b-811b-330c2554c1d9\") " pod="openstack/ovn-controller-metrics-798np"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.248192 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1abeed70-62ab-4d1b-811b-330c2554c1d9-ovs-rundir\") pod \"ovn-controller-metrics-798np\" (UID: \"1abeed70-62ab-4d1b-811b-330c2554c1d9\") " pod="openstack/ovn-controller-metrics-798np"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.248220 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abeed70-62ab-4d1b-811b-330c2554c1d9-combined-ca-bundle\") pod \"ovn-controller-metrics-798np\" (UID: \"1abeed70-62ab-4d1b-811b-330c2554c1d9\") " pod="openstack/ovn-controller-metrics-798np"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.248254 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6j9b\" (UniqueName: \"kubernetes.io/projected/1abeed70-62ab-4d1b-811b-330c2554c1d9-kube-api-access-z6j9b\") pod \"ovn-controller-metrics-798np\" (UID: \"1abeed70-62ab-4d1b-811b-330c2554c1d9\") " pod="openstack/ovn-controller-metrics-798np"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.248276 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1abeed70-62ab-4d1b-811b-330c2554c1d9-config\") pod \"ovn-controller-metrics-798np\" (UID: \"1abeed70-62ab-4d1b-811b-330c2554c1d9\") " pod="openstack/ovn-controller-metrics-798np"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.249314 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.259338 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-8qbbc"]
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.277063 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.278823 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.281209 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.281419 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.281634 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.281802 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-8272t"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.297134 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.309624 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-zftlr"]
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.349885 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9906e62d-6d99-4043-9fce-d198974e73bc-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9906e62d-6d99-4043-9fce-d198974e73bc\") " pod="openstack/ovn-northd-0"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.349929 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6j9b\" (UniqueName: \"kubernetes.io/projected/1abeed70-62ab-4d1b-811b-330c2554c1d9-kube-api-access-z6j9b\") pod \"ovn-controller-metrics-798np\" (UID: \"1abeed70-62ab-4d1b-811b-330c2554c1d9\") " pod="openstack/ovn-controller-metrics-798np"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.349977 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9906e62d-6d99-4043-9fce-d198974e73bc-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9906e62d-6d99-4043-9fce-d198974e73bc\") " pod="openstack/ovn-northd-0"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.349995 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1abeed70-62ab-4d1b-811b-330c2554c1d9-config\") pod \"ovn-controller-metrics-798np\" (UID: \"1abeed70-62ab-4d1b-811b-330c2554c1d9\") " pod="openstack/ovn-controller-metrics-798np"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.350029 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1abeed70-62ab-4d1b-811b-330c2554c1d9-ovn-rundir\") pod \"ovn-controller-metrics-798np\" (UID: \"1abeed70-62ab-4d1b-811b-330c2554c1d9\") " pod="openstack/ovn-controller-metrics-798np"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.350053 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-config\") pod \"dnsmasq-dns-b8fbc5445-8qbbc\" (UID: \"3537cb6d-a346-4779-820c-262f5ae33b35\") " pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.350099 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9906e62d-6d99-4043-9fce-d198974e73bc-scripts\") pod \"ovn-northd-0\" (UID: \"9906e62d-6d99-4043-9fce-d198974e73bc\") " pod="openstack/ovn-northd-0"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.350126 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-8qbbc\" (UID: \"3537cb6d-a346-4779-820c-262f5ae33b35\") " pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.350154 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9906e62d-6d99-4043-9fce-d198974e73bc-config\") pod \"ovn-northd-0\" (UID: \"9906e62d-6d99-4043-9fce-d198974e73bc\") " pod="openstack/ovn-northd-0"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.354356 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1abeed70-62ab-4d1b-811b-330c2554c1d9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-798np\" (UID: \"1abeed70-62ab-4d1b-811b-330c2554c1d9\") " pod="openstack/ovn-controller-metrics-798np"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.354445 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-8qbbc\" (UID: \"3537cb6d-a346-4779-820c-262f5ae33b35\") " pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.354490 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9906e62d-6d99-4043-9fce-d198974e73bc-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9906e62d-6d99-4043-9fce-d198974e73bc\") " pod="openstack/ovn-northd-0"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.354510 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-8qbbc\" (UID: \"3537cb6d-a346-4779-820c-262f5ae33b35\") " pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.354554 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9906e62d-6d99-4043-9fce-d198974e73bc-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9906e62d-6d99-4043-9fce-d198974e73bc\") " pod="openstack/ovn-northd-0"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.354597 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glznl\" (UniqueName: \"kubernetes.io/projected/9906e62d-6d99-4043-9fce-d198974e73bc-kube-api-access-glznl\") pod \"ovn-northd-0\" (UID: \"9906e62d-6d99-4043-9fce-d198974e73bc\") " pod="openstack/ovn-northd-0"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.354730 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1abeed70-62ab-4d1b-811b-330c2554c1d9-ovs-rundir\") pod \"ovn-controller-metrics-798np\" (UID: \"1abeed70-62ab-4d1b-811b-330c2554c1d9\") " pod="openstack/ovn-controller-metrics-798np"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.354790 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abeed70-62ab-4d1b-811b-330c2554c1d9-combined-ca-bundle\") pod \"ovn-controller-metrics-798np\" (UID: \"1abeed70-62ab-4d1b-811b-330c2554c1d9\") " pod="openstack/ovn-controller-metrics-798np"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.354837 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps8v5\" (UniqueName: \"kubernetes.io/projected/3537cb6d-a346-4779-820c-262f5ae33b35-kube-api-access-ps8v5\") pod \"dnsmasq-dns-b8fbc5445-8qbbc\" (UID: \"3537cb6d-a346-4779-820c-262f5ae33b35\") " pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.355928 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1abeed70-62ab-4d1b-811b-330c2554c1d9-ovs-rundir\") pod \"ovn-controller-metrics-798np\" (UID: \"1abeed70-62ab-4d1b-811b-330c2554c1d9\") " pod="openstack/ovn-controller-metrics-798np"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.356159 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1abeed70-62ab-4d1b-811b-330c2554c1d9-ovn-rundir\") pod \"ovn-controller-metrics-798np\" (UID: \"1abeed70-62ab-4d1b-811b-330c2554c1d9\") " pod="openstack/ovn-controller-metrics-798np"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.356280 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1abeed70-62ab-4d1b-811b-330c2554c1d9-config\") pod \"ovn-controller-metrics-798np\" (UID: \"1abeed70-62ab-4d1b-811b-330c2554c1d9\") " pod="openstack/ovn-controller-metrics-798np"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.359724 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1abeed70-62ab-4d1b-811b-330c2554c1d9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-798np\" (UID: \"1abeed70-62ab-4d1b-811b-330c2554c1d9\") " pod="openstack/ovn-controller-metrics-798np"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.361274 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abeed70-62ab-4d1b-811b-330c2554c1d9-combined-ca-bundle\") pod \"ovn-controller-metrics-798np\" (UID: \"1abeed70-62ab-4d1b-811b-330c2554c1d9\") " pod="openstack/ovn-controller-metrics-798np"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.370991 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6j9b\" (UniqueName: \"kubernetes.io/projected/1abeed70-62ab-4d1b-811b-330c2554c1d9-kube-api-access-z6j9b\") pod \"ovn-controller-metrics-798np\" (UID: \"1abeed70-62ab-4d1b-811b-330c2554c1d9\") " pod="openstack/ovn-controller-metrics-798np"
Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.456593 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps8v5\" (UniqueName: \"kubernetes.io/projected/3537cb6d-a346-4779-820c-262f5ae33b35-kube-api-access-ps8v5\") pod \"dnsmasq-dns-b8fbc5445-8qbbc\" (UID: \"3537cb6d-a346-4779-820c-262f5ae33b35\") " pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc"
\"kubernetes.io/projected/3537cb6d-a346-4779-820c-262f5ae33b35-kube-api-access-ps8v5\") pod \"dnsmasq-dns-b8fbc5445-8qbbc\" (UID: \"3537cb6d-a346-4779-820c-262f5ae33b35\") " pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.456656 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9906e62d-6d99-4043-9fce-d198974e73bc-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9906e62d-6d99-4043-9fce-d198974e73bc\") " pod="openstack/ovn-northd-0" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.456682 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9906e62d-6d99-4043-9fce-d198974e73bc-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9906e62d-6d99-4043-9fce-d198974e73bc\") " pod="openstack/ovn-northd-0" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.456718 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-config\") pod \"dnsmasq-dns-b8fbc5445-8qbbc\" (UID: \"3537cb6d-a346-4779-820c-262f5ae33b35\") " pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.456757 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9906e62d-6d99-4043-9fce-d198974e73bc-scripts\") pod \"ovn-northd-0\" (UID: \"9906e62d-6d99-4043-9fce-d198974e73bc\") " pod="openstack/ovn-northd-0" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.456782 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-8qbbc\" (UID: \"3537cb6d-a346-4779-820c-262f5ae33b35\") " pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.456801 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9906e62d-6d99-4043-9fce-d198974e73bc-config\") pod \"ovn-northd-0\" (UID: \"9906e62d-6d99-4043-9fce-d198974e73bc\") " pod="openstack/ovn-northd-0" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.456860 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-8qbbc\" (UID: \"3537cb6d-a346-4779-820c-262f5ae33b35\") " pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.456881 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9906e62d-6d99-4043-9fce-d198974e73bc-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9906e62d-6d99-4043-9fce-d198974e73bc\") " pod="openstack/ovn-northd-0" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.456896 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-8qbbc\" (UID: \"3537cb6d-a346-4779-820c-262f5ae33b35\") " pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" Sep 29 09:57:35 crc 
kubenswrapper[4991]: I0929 09:57:35.456924 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9906e62d-6d99-4043-9fce-d198974e73bc-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9906e62d-6d99-4043-9fce-d198974e73bc\") " pod="openstack/ovn-northd-0" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.456963 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glznl\" (UniqueName: \"kubernetes.io/projected/9906e62d-6d99-4043-9fce-d198974e73bc-kube-api-access-glznl\") pod \"ovn-northd-0\" (UID: \"9906e62d-6d99-4043-9fce-d198974e73bc\") " pod="openstack/ovn-northd-0" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.458451 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9906e62d-6d99-4043-9fce-d198974e73bc-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9906e62d-6d99-4043-9fce-d198974e73bc\") " pod="openstack/ovn-northd-0" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.458623 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-config\") pod \"dnsmasq-dns-b8fbc5445-8qbbc\" (UID: \"3537cb6d-a346-4779-820c-262f5ae33b35\") " pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.458692 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9906e62d-6d99-4043-9fce-d198974e73bc-config\") pod \"ovn-northd-0\" (UID: \"9906e62d-6d99-4043-9fce-d198974e73bc\") " pod="openstack/ovn-northd-0" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.458817 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9906e62d-6d99-4043-9fce-d198974e73bc-scripts\") pod \"ovn-northd-0\" (UID: \"9906e62d-6d99-4043-9fce-d198974e73bc\") " pod="openstack/ovn-northd-0" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.459013 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-8qbbc\" (UID: \"3537cb6d-a346-4779-820c-262f5ae33b35\") " pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.459156 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-8qbbc\" (UID: \"3537cb6d-a346-4779-820c-262f5ae33b35\") " pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.459794 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-8qbbc\" (UID: \"3537cb6d-a346-4779-820c-262f5ae33b35\") " pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.461370 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9906e62d-6d99-4043-9fce-d198974e73bc-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9906e62d-6d99-4043-9fce-d198974e73bc\") " 
pod="openstack/ovn-northd-0" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.461658 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9906e62d-6d99-4043-9fce-d198974e73bc-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9906e62d-6d99-4043-9fce-d198974e73bc\") " pod="openstack/ovn-northd-0" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.462167 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9906e62d-6d99-4043-9fce-d198974e73bc-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9906e62d-6d99-4043-9fce-d198974e73bc\") " pod="openstack/ovn-northd-0" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.474516 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps8v5\" (UniqueName: \"kubernetes.io/projected/3537cb6d-a346-4779-820c-262f5ae33b35-kube-api-access-ps8v5\") pod \"dnsmasq-dns-b8fbc5445-8qbbc\" (UID: \"3537cb6d-a346-4779-820c-262f5ae33b35\") " pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.474669 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glznl\" (UniqueName: \"kubernetes.io/projected/9906e62d-6d99-4043-9fce-d198974e73bc-kube-api-access-glznl\") pod \"ovn-northd-0\" (UID: \"9906e62d-6d99-4043-9fce-d198974e73bc\") " pod="openstack/ovn-northd-0" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.493381 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-798np" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.571165 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.641740 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.662377 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-etc-swift\") pod \"swift-storage-0\" (UID: \"81d702cb-530c-441d-b686-f205337a2aea\") " pod="openstack/swift-storage-0" Sep 29 09:57:35 crc kubenswrapper[4991]: E0929 09:57:35.662664 4991 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 29 09:57:35 crc kubenswrapper[4991]: E0929 09:57:35.662698 4991 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Sep 29 09:57:35 crc kubenswrapper[4991]: E0929 09:57:35.662746 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-etc-swift podName:81d702cb-530c-441d-b686-f205337a2aea nodeName:}" failed. No retries permitted until 2025-09-29 09:57:37.662731631 +0000 UTC m=+1193.518659659 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-etc-swift") pod "swift-storage-0" (UID: "81d702cb-530c-441d-b686-f205337a2aea") : configmap "swift-ring-files" not found Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.719935 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zftlr" event={"ID":"cf1a0436-f3cf-4582-9634-58f354d9badf","Type":"ContainerStarted","Data":"c924fbca35a918d757823d66e6050877fcac49198d6e3c073a59545ebbc0971f"} Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.728501 4991 generic.go:334] "Generic (PLEG): container finished" podID="c34c9dcd-aa7a-4cf8-9295-fb6d85190123" containerID="418ee5bd99b2ddebf9b9385dedbcb7251673173c4e607d6764df201111c856e2" exitCode=0 Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.728579 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c34c9dcd-aa7a-4cf8-9295-fb6d85190123","Type":"ContainerDied","Data":"418ee5bd99b2ddebf9b9385dedbcb7251673173c4e607d6764df201111c856e2"} Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.741891 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-vkxw7" event={"ID":"53634a95-3483-4012-b946-2c46a10a3eae","Type":"ContainerStarted","Data":"3c91d0c9bef3d1e348cfbd0e5e125d014624ae0c390aa877ce43e21fecdf3b5a"} Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.743070 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-r9nrm" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.743329 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-vkxw7" podUID="53634a95-3483-4012-b946-2c46a10a3eae" containerName="dnsmasq-dns" containerID="cri-o://3c91d0c9bef3d1e348cfbd0e5e125d014624ae0c390aa877ce43e21fecdf3b5a" gracePeriod=10 Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.769511 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-r9nrm" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.832328 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-vkxw7" podStartSLOduration=3.832305487 podStartE2EDuration="3.832305487s" podCreationTimestamp="2025-09-29 09:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:57:35.795459918 +0000 UTC m=+1191.651387946" watchObservedRunningTime="2025-09-29 09:57:35.832305487 +0000 UTC m=+1191.688233505" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.967151 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcff0db7-75fa-4871-bdc8-751157a05300-dns-svc\") pod \"bcff0db7-75fa-4871-bdc8-751157a05300\" (UID: \"bcff0db7-75fa-4871-bdc8-751157a05300\") " Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.967298 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcff0db7-75fa-4871-bdc8-751157a05300-config\") pod \"bcff0db7-75fa-4871-bdc8-751157a05300\" (UID: \"bcff0db7-75fa-4871-bdc8-751157a05300\") " Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.967358 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfbtk\" (UniqueName: \"kubernetes.io/projected/bcff0db7-75fa-4871-bdc8-751157a05300-kube-api-access-rfbtk\") pod \"bcff0db7-75fa-4871-bdc8-751157a05300\" (UID: \"bcff0db7-75fa-4871-bdc8-751157a05300\") " Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.967425 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcff0db7-75fa-4871-bdc8-751157a05300-ovsdbserver-nb\") pod \"bcff0db7-75fa-4871-bdc8-751157a05300\" (UID: \"bcff0db7-75fa-4871-bdc8-751157a05300\") " Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.968250 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcff0db7-75fa-4871-bdc8-751157a05300-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bcff0db7-75fa-4871-bdc8-751157a05300" (UID: "bcff0db7-75fa-4871-bdc8-751157a05300"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.968257 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcff0db7-75fa-4871-bdc8-751157a05300-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bcff0db7-75fa-4871-bdc8-751157a05300" (UID: "bcff0db7-75fa-4871-bdc8-751157a05300"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.968658 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcff0db7-75fa-4871-bdc8-751157a05300-config" (OuterVolumeSpecName: "config") pod "bcff0db7-75fa-4871-bdc8-751157a05300" (UID: "bcff0db7-75fa-4871-bdc8-751157a05300"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:57:35 crc kubenswrapper[4991]: I0929 09:57:35.974396 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcff0db7-75fa-4871-bdc8-751157a05300-kube-api-access-rfbtk" (OuterVolumeSpecName: "kube-api-access-rfbtk") pod "bcff0db7-75fa-4871-bdc8-751157a05300" (UID: "bcff0db7-75fa-4871-bdc8-751157a05300"). InnerVolumeSpecName "kube-api-access-rfbtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.039694 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-798np"] Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.069705 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bcff0db7-75fa-4871-bdc8-751157a05300-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.069854 4991 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcff0db7-75fa-4871-bdc8-751157a05300-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.070364 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcff0db7-75fa-4871-bdc8-751157a05300-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.071732 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfbtk\" (UniqueName: \"kubernetes.io/projected/bcff0db7-75fa-4871-bdc8-751157a05300-kube-api-access-rfbtk\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.254623 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-8qbbc"] Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.523610 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Sep 29 09:57:36 crc kubenswrapper[4991]: W0929 09:57:36.547272 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9906e62d_6d99_4043_9fce_d198974e73bc.slice/crio-667ed2ec708934e35999fe7a161ad3c01c2e5c6e194e23c4d1a47b13618c3cdd WatchSource:0}: Error finding container 667ed2ec708934e35999fe7a161ad3c01c2e5c6e194e23c4d1a47b13618c3cdd: Status 404 returned error can't find the container with id 667ed2ec708934e35999fe7a161ad3c01c2e5c6e194e23c4d1a47b13618c3cdd Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.691992 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-vkxw7" Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.770697 4991 generic.go:334] "Generic (PLEG): container finished" podID="53634a95-3483-4012-b946-2c46a10a3eae" containerID="3c91d0c9bef3d1e348cfbd0e5e125d014624ae0c390aa877ce43e21fecdf3b5a" exitCode=0 Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.770801 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-vkxw7" Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.770973 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-vkxw7" event={"ID":"53634a95-3483-4012-b946-2c46a10a3eae","Type":"ContainerDied","Data":"3c91d0c9bef3d1e348cfbd0e5e125d014624ae0c390aa877ce43e21fecdf3b5a"} Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.771014 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-vkxw7" event={"ID":"53634a95-3483-4012-b946-2c46a10a3eae","Type":"ContainerDied","Data":"32ec352a04cbcaba4350af1d0593d22c69a310529a57751d2a949889a5afe92f"} Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.771033 4991 scope.go:117] "RemoveContainer" containerID="3c91d0c9bef3d1e348cfbd0e5e125d014624ae0c390aa877ce43e21fecdf3b5a" Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.778200 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-798np" event={"ID":"1abeed70-62ab-4d1b-811b-330c2554c1d9","Type":"ContainerStarted","Data":"b43566a739b3f1085a06f3b73cd22e4db0682a02abba23496554bc4a7bed2232"} Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.778280 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-798np" event={"ID":"1abeed70-62ab-4d1b-811b-330c2554c1d9","Type":"ContainerStarted","Data":"0d9ede79eb84d5435dad37b3c241b476357afa16cf1e24a0796659dc0c520ae7"} Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.798133 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mk9d\" (UniqueName: \"kubernetes.io/projected/53634a95-3483-4012-b946-2c46a10a3eae-kube-api-access-6mk9d\") pod \"53634a95-3483-4012-b946-2c46a10a3eae\" (UID: \"53634a95-3483-4012-b946-2c46a10a3eae\") " Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.798238 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53634a95-3483-4012-b946-2c46a10a3eae-dns-svc\") pod \"53634a95-3483-4012-b946-2c46a10a3eae\" (UID: \"53634a95-3483-4012-b946-2c46a10a3eae\") " Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.798298 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53634a95-3483-4012-b946-2c46a10a3eae-config\") pod \"53634a95-3483-4012-b946-2c46a10a3eae\" (UID: \"53634a95-3483-4012-b946-2c46a10a3eae\") " Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.799784 4991 generic.go:334] "Generic (PLEG): container finished" podID="3537cb6d-a346-4779-820c-262f5ae33b35" containerID="74534548f596ca46489bf61abe8d939e48497411ced989caa776d7ac74954d33" exitCode=0 Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.799845 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" event={"ID":"3537cb6d-a346-4779-820c-262f5ae33b35","Type":"ContainerDied","Data":"74534548f596ca46489bf61abe8d939e48497411ced989caa776d7ac74954d33"} Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.799870 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" event={"ID":"3537cb6d-a346-4779-820c-262f5ae33b35","Type":"ContainerStarted","Data":"61e9c9d8a189d39500e795fe2cde8595bed33c4efaff37008831c31131c37d64"} Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.806478 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-r9nrm" Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.807207 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53634a95-3483-4012-b946-2c46a10a3eae-kube-api-access-6mk9d" (OuterVolumeSpecName: "kube-api-access-6mk9d") pod "53634a95-3483-4012-b946-2c46a10a3eae" (UID: "53634a95-3483-4012-b946-2c46a10a3eae"). InnerVolumeSpecName "kube-api-access-6mk9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.807393 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9906e62d-6d99-4043-9fce-d198974e73bc","Type":"ContainerStarted","Data":"667ed2ec708934e35999fe7a161ad3c01c2e5c6e194e23c4d1a47b13618c3cdd"} Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.809718 4991 scope.go:117] "RemoveContainer" containerID="85c40be27a96fb9bc8665722558fcd238467a6d4988e41ce172a846df9a98df2" Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.828152 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-798np" podStartSLOduration=1.828130631 podStartE2EDuration="1.828130631s" podCreationTimestamp="2025-09-29 09:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:57:36.811170116 +0000 UTC m=+1192.667098144" watchObservedRunningTime="2025-09-29 09:57:36.828130631 +0000 UTC m=+1192.684058659" Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.876770 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53634a95-3483-4012-b946-2c46a10a3eae-config" (OuterVolumeSpecName: "config") pod "53634a95-3483-4012-b946-2c46a10a3eae" (UID: "53634a95-3483-4012-b946-2c46a10a3eae"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.888743 4991 scope.go:117] "RemoveContainer" containerID="3c91d0c9bef3d1e348cfbd0e5e125d014624ae0c390aa877ce43e21fecdf3b5a" Sep 29 09:57:36 crc kubenswrapper[4991]: E0929 09:57:36.890343 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c91d0c9bef3d1e348cfbd0e5e125d014624ae0c390aa877ce43e21fecdf3b5a\": container with ID starting with 3c91d0c9bef3d1e348cfbd0e5e125d014624ae0c390aa877ce43e21fecdf3b5a not found: ID does not exist" containerID="3c91d0c9bef3d1e348cfbd0e5e125d014624ae0c390aa877ce43e21fecdf3b5a" Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.890397 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c91d0c9bef3d1e348cfbd0e5e125d014624ae0c390aa877ce43e21fecdf3b5a"} err="failed to get container status \"3c91d0c9bef3d1e348cfbd0e5e125d014624ae0c390aa877ce43e21fecdf3b5a\": rpc error: code = NotFound desc = could not find container \"3c91d0c9bef3d1e348cfbd0e5e125d014624ae0c390aa877ce43e21fecdf3b5a\": container with ID starting with 3c91d0c9bef3d1e348cfbd0e5e125d014624ae0c390aa877ce43e21fecdf3b5a not found: ID does not exist" Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.890453 4991 scope.go:117] "RemoveContainer" containerID="85c40be27a96fb9bc8665722558fcd238467a6d4988e41ce172a846df9a98df2" Sep 29 09:57:36 crc kubenswrapper[4991]: E0929 09:57:36.897288 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85c40be27a96fb9bc8665722558fcd238467a6d4988e41ce172a846df9a98df2\": container with ID starting with 85c40be27a96fb9bc8665722558fcd238467a6d4988e41ce172a846df9a98df2 not found: ID does not exist" containerID="85c40be27a96fb9bc8665722558fcd238467a6d4988e41ce172a846df9a98df2" Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.897342 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85c40be27a96fb9bc8665722558fcd238467a6d4988e41ce172a846df9a98df2"} err="failed to get container status \"85c40be27a96fb9bc8665722558fcd238467a6d4988e41ce172a846df9a98df2\": rpc error: code = NotFound desc = could not find container \"85c40be27a96fb9bc8665722558fcd238467a6d4988e41ce172a846df9a98df2\": container with ID starting with 85c40be27a96fb9bc8665722558fcd238467a6d4988e41ce172a846df9a98df2 not found: ID does not exist" Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.901627 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53634a95-3483-4012-b946-2c46a10a3eae-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.901672 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mk9d\" (UniqueName: \"kubernetes.io/projected/53634a95-3483-4012-b946-2c46a10a3eae-kube-api-access-6mk9d\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.920434 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-r9nrm"] Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.921545 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53634a95-3483-4012-b946-2c46a10a3eae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "53634a95-3483-4012-b946-2c46a10a3eae" (UID: 
"53634a95-3483-4012-b946-2c46a10a3eae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:57:36 crc kubenswrapper[4991]: I0929 09:57:36.943497 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-r9nrm"] Sep 29 09:57:37 crc kubenswrapper[4991]: I0929 09:57:37.004623 4991 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53634a95-3483-4012-b946-2c46a10a3eae-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:37 crc kubenswrapper[4991]: I0929 09:57:37.094235 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-vkxw7"] Sep 29 09:57:37 crc kubenswrapper[4991]: I0929 09:57:37.102280 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-vkxw7"] Sep 29 09:57:37 crc kubenswrapper[4991]: I0929 09:57:37.727488 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-etc-swift\") pod \"swift-storage-0\" (UID: \"81d702cb-530c-441d-b686-f205337a2aea\") " pod="openstack/swift-storage-0" Sep 29 09:57:37 crc kubenswrapper[4991]: E0929 09:57:37.727652 4991 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 29 09:57:37 crc kubenswrapper[4991]: E0929 09:57:37.727983 4991 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Sep 29 09:57:37 crc kubenswrapper[4991]: E0929 09:57:37.728074 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-etc-swift podName:81d702cb-530c-441d-b686-f205337a2aea nodeName:}" failed. No retries permitted until 2025-09-29 09:57:41.728045696 +0000 UTC m=+1197.583973714 (durationBeforeRetry 4s). 
Sep 29 09:57:37 crc kubenswrapper[4991]: I0929 09:57:37.822625 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" event={"ID":"3537cb6d-a346-4779-820c-262f5ae33b35","Type":"ContainerStarted","Data":"4032990bc691747e5a559241e27500181846fc8842f1ef005515844990e7c6b0"}
Sep 29 09:57:37 crc kubenswrapper[4991]: I0929 09:57:37.823011 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc"
Sep 29 09:57:37 crc kubenswrapper[4991]: I0929 09:57:37.842376 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" podStartSLOduration=2.842350489 podStartE2EDuration="2.842350489s" podCreationTimestamp="2025-09-29 09:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:57:37.838020045 +0000 UTC m=+1193.693948083" watchObservedRunningTime="2025-09-29 09:57:37.842350489 +0000 UTC m=+1193.698278517"
Sep 29 09:57:37 crc kubenswrapper[4991]: I0929 09:57:37.946300 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 29 09:57:37 crc kubenswrapper[4991]: I0929 09:57:37.946347 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 29 09:57:38 crc kubenswrapper[4991]: I0929 09:57:38.941516 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53634a95-3483-4012-b946-2c46a10a3eae" path="/var/lib/kubelet/pods/53634a95-3483-4012-b946-2c46a10a3eae/volumes"
Sep 29 09:57:38 crc kubenswrapper[4991]: I0929 09:57:38.942271 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcff0db7-75fa-4871-bdc8-751157a05300" path="/var/lib/kubelet/pods/bcff0db7-75fa-4871-bdc8-751157a05300/volumes"
Sep 29 09:57:39 crc kubenswrapper[4991]: I0929 09:57:39.836568 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Sep 29 09:57:39 crc kubenswrapper[4991]: I0929 09:57:39.836981 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Sep 29 09:57:39 crc kubenswrapper[4991]: I0929 09:57:39.917704 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.003358 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.218173 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.218226 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.277307 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.398329 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-hbj7w"]
Sep 29 09:57:40 crc kubenswrapper[4991]: E0929 09:57:40.398827 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53634a95-3483-4012-b946-2c46a10a3eae" containerName="init"
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.398845 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="53634a95-3483-4012-b946-2c46a10a3eae" containerName="init"
Sep 29 09:57:40 crc kubenswrapper[4991]: E0929 09:57:40.398862 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53634a95-3483-4012-b946-2c46a10a3eae" containerName="dnsmasq-dns"
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.398870 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="53634a95-3483-4012-b946-2c46a10a3eae" containerName="dnsmasq-dns"
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.399082 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="53634a95-3483-4012-b946-2c46a10a3eae" containerName="dnsmasq-dns"
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.399829 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hbj7w"
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.410283 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hbj7w"]
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.507680 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9np6j\" (UniqueName: \"kubernetes.io/projected/03fc3061-8749-42bc-827d-f8bdb437fe58-kube-api-access-9np6j\") pod \"placement-db-create-hbj7w\" (UID: \"03fc3061-8749-42bc-827d-f8bdb437fe58\") " pod="openstack/placement-db-create-hbj7w"
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.574390 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-bvdvb"]
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.575630 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bvdvb"
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.584337 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-bvdvb"]
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.610138 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9np6j\" (UniqueName: \"kubernetes.io/projected/03fc3061-8749-42bc-827d-f8bdb437fe58-kube-api-access-9np6j\") pod \"placement-db-create-hbj7w\" (UID: \"03fc3061-8749-42bc-827d-f8bdb437fe58\") " pod="openstack/placement-db-create-hbj7w"
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.629044 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9np6j\" (UniqueName: \"kubernetes.io/projected/03fc3061-8749-42bc-827d-f8bdb437fe58-kube-api-access-9np6j\") pod \"placement-db-create-hbj7w\" (UID: \"03fc3061-8749-42bc-827d-f8bdb437fe58\") " pod="openstack/placement-db-create-hbj7w"
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.712308 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9xlp\" (UniqueName: \"kubernetes.io/projected/71eeb128-047f-4efb-b2ef-6b2e69c1f23a-kube-api-access-x9xlp\") pod \"glance-db-create-bvdvb\" (UID: \"71eeb128-047f-4efb-b2ef-6b2e69c1f23a\") " pod="openstack/glance-db-create-bvdvb"
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.721539 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hbj7w"
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.814163 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9xlp\" (UniqueName: \"kubernetes.io/projected/71eeb128-047f-4efb-b2ef-6b2e69c1f23a-kube-api-access-x9xlp\") pod \"glance-db-create-bvdvb\" (UID: \"71eeb128-047f-4efb-b2ef-6b2e69c1f23a\") " pod="openstack/glance-db-create-bvdvb"
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.850125 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9xlp\" (UniqueName: \"kubernetes.io/projected/71eeb128-047f-4efb-b2ef-6b2e69c1f23a-kube-api-access-x9xlp\") pod \"glance-db-create-bvdvb\" (UID: \"71eeb128-047f-4efb-b2ef-6b2e69c1f23a\") " pod="openstack/glance-db-create-bvdvb"
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.893756 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bvdvb"
Sep 29 09:57:40 crc kubenswrapper[4991]: I0929 09:57:40.942626 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Sep 29 09:57:41 crc kubenswrapper[4991]: I0929 09:57:41.733681 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-etc-swift\") pod \"swift-storage-0\" (UID: \"81d702cb-530c-441d-b686-f205337a2aea\") " pod="openstack/swift-storage-0"
Sep 29 09:57:41 crc kubenswrapper[4991]: E0929 09:57:41.734141 4991 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Sep 29 09:57:41 crc kubenswrapper[4991]: E0929 09:57:41.734156 4991 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Sep 29 09:57:41 crc kubenswrapper[4991]: E0929 09:57:41.734217 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-etc-swift podName:81d702cb-530c-441d-b686-f205337a2aea nodeName:}" failed. No retries permitted until 2025-09-29 09:57:49.734203484 +0000 UTC m=+1205.590131512 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-etc-swift") pod "swift-storage-0" (UID: "81d702cb-530c-441d-b686-f205337a2aea") : configmap "swift-ring-files" not found
Sep 29 09:57:42 crc kubenswrapper[4991]: I0929 09:57:42.376636 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-d9p8b"]
Sep 29 09:57:42 crc kubenswrapper[4991]: I0929 09:57:42.406535 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-d9p8b"]
Sep 29 09:57:42 crc kubenswrapper[4991]: I0929 09:57:42.406644 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-d9p8b"
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-d9p8b" Sep 29 09:57:42 crc kubenswrapper[4991]: I0929 09:57:42.556065 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmxgh\" (UniqueName: \"kubernetes.io/projected/40c49f55-8da0-4a5b-8a85-78f2665dddf5-kube-api-access-qmxgh\") pod \"mysqld-exporter-openstack-db-create-d9p8b\" (UID: \"40c49f55-8da0-4a5b-8a85-78f2665dddf5\") " pod="openstack/mysqld-exporter-openstack-db-create-d9p8b" Sep 29 09:57:42 crc kubenswrapper[4991]: I0929 09:57:42.658434 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmxgh\" (UniqueName: \"kubernetes.io/projected/40c49f55-8da0-4a5b-8a85-78f2665dddf5-kube-api-access-qmxgh\") pod \"mysqld-exporter-openstack-db-create-d9p8b\" (UID: \"40c49f55-8da0-4a5b-8a85-78f2665dddf5\") " pod="openstack/mysqld-exporter-openstack-db-create-d9p8b" Sep 29 09:57:42 crc kubenswrapper[4991]: I0929 09:57:42.681531 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmxgh\" (UniqueName: \"kubernetes.io/projected/40c49f55-8da0-4a5b-8a85-78f2665dddf5-kube-api-access-qmxgh\") pod \"mysqld-exporter-openstack-db-create-d9p8b\" (UID: \"40c49f55-8da0-4a5b-8a85-78f2665dddf5\") " pod="openstack/mysqld-exporter-openstack-db-create-d9p8b" Sep 29 09:57:42 crc kubenswrapper[4991]: I0929 09:57:42.739190 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-d9p8b" Sep 29 09:57:44 crc kubenswrapper[4991]: I0929 09:57:44.527991 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-bvdvb"] Sep 29 09:57:44 crc kubenswrapper[4991]: I0929 09:57:44.539240 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-d9p8b"] Sep 29 09:57:44 crc kubenswrapper[4991]: I0929 09:57:44.736773 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hbj7w"] Sep 29 09:57:44 crc kubenswrapper[4991]: W0929 09:57:44.739273 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03fc3061_8749_42bc_827d_f8bdb437fe58.slice/crio-01d6c88091983540d3268f527641de4c2d28cf91cce7285b8990404c7ead58fc WatchSource:0}: Error finding container 01d6c88091983540d3268f527641de4c2d28cf91cce7285b8990404c7ead58fc: Status 404 returned error can't find the container with id 01d6c88091983540d3268f527641de4c2d28cf91cce7285b8990404c7ead58fc Sep 29 09:57:44 crc kubenswrapper[4991]: I0929 09:57:44.960374 4991 generic.go:334] "Generic (PLEG): container finished" podID="71eeb128-047f-4efb-b2ef-6b2e69c1f23a" containerID="62b21760df5346b1dced2b3ef66007b19050b635ba119143e09f2b6113537ea1" exitCode=0 Sep 29 09:57:44 crc kubenswrapper[4991]: I0929 09:57:44.961924 4991 generic.go:334] "Generic (PLEG): container finished" podID="40c49f55-8da0-4a5b-8a85-78f2665dddf5" containerID="f93d654b987e4c68807e3a3f0c2640cf6f05ffc32a2feb36dfee6adace67ca95" exitCode=0 Sep 29 09:57:44 crc kubenswrapper[4991]: I0929 09:57:44.966308 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bvdvb" event={"ID":"71eeb128-047f-4efb-b2ef-6b2e69c1f23a","Type":"ContainerDied","Data":"62b21760df5346b1dced2b3ef66007b19050b635ba119143e09f2b6113537ea1"} Sep 29 09:57:44 crc kubenswrapper[4991]: I0929 09:57:44.966361 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-create-bvdvb" event={"ID":"71eeb128-047f-4efb-b2ef-6b2e69c1f23a","Type":"ContainerStarted","Data":"eb6d83792b61232e282ec1e87c91b8c1ed287fff086892030a80fe96f85a2cd7"} Sep 29 09:57:44 crc kubenswrapper[4991]: I0929 09:57:44.966422 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-d9p8b" event={"ID":"40c49f55-8da0-4a5b-8a85-78f2665dddf5","Type":"ContainerDied","Data":"f93d654b987e4c68807e3a3f0c2640cf6f05ffc32a2feb36dfee6adace67ca95"} Sep 29 09:57:44 crc kubenswrapper[4991]: I0929 09:57:44.966470 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-d9p8b" event={"ID":"40c49f55-8da0-4a5b-8a85-78f2665dddf5","Type":"ContainerStarted","Data":"34f826d05aad98ba4c967efdf51224cc24412f6ebca654f78ebc020ecf3ac921"} Sep 29 09:57:44 crc kubenswrapper[4991]: I0929 09:57:44.966481 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zftlr" event={"ID":"cf1a0436-f3cf-4582-9634-58f354d9badf","Type":"ContainerStarted","Data":"351339d64f082c28d332007d610267ab830d2b1e783488f79edf51b554a4508c"} Sep 29 09:57:44 crc kubenswrapper[4991]: I0929 09:57:44.966494 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c34c9dcd-aa7a-4cf8-9295-fb6d85190123","Type":"ContainerStarted","Data":"2d248fc94532dddc4c4a2b0944496453e7c81158f64bd213dd5d049621855747"} Sep 29 09:57:44 crc kubenswrapper[4991]: I0929 09:57:44.966993 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hbj7w" event={"ID":"03fc3061-8749-42bc-827d-f8bdb437fe58","Type":"ContainerStarted","Data":"01d6c88091983540d3268f527641de4c2d28cf91cce7285b8990404c7ead58fc"} Sep 29 09:57:44 crc kubenswrapper[4991]: I0929 09:57:44.970246 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9906e62d-6d99-4043-9fce-d198974e73bc","Type":"ContainerStarted","Data":"e6a2f48df4a86efb12d0a028c6c2b52c0657478ff694e6ecc5e0d7072cccff88"} Sep 29 09:57:44 crc kubenswrapper[4991]: I0929 09:57:44.970307 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9906e62d-6d99-4043-9fce-d198974e73bc","Type":"ContainerStarted","Data":"330b27ed1fbaa5d634a55e928eb98cc14e1db2b73b1b27f574924bd16e475aab"} Sep 29 09:57:44 crc kubenswrapper[4991]: I0929 09:57:44.970400 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Sep 29 09:57:45 crc kubenswrapper[4991]: I0929 09:57:45.096666 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-zftlr" podStartSLOduration=2.303775504 podStartE2EDuration="11.096646428s" podCreationTimestamp="2025-09-29 09:57:34 +0000 UTC" firstStartedPulling="2025-09-29 09:57:35.289435493 +0000 UTC m=+1191.145363511" lastFinishedPulling="2025-09-29 09:57:44.082306407 +0000 UTC m=+1199.938234435" observedRunningTime="2025-09-29 09:57:45.089003268 +0000 UTC m=+1200.944931306" watchObservedRunningTime="2025-09-29 09:57:45.096646428 +0000 UTC m=+1200.952574456" Sep 29 09:57:45 crc kubenswrapper[4991]: I0929 09:57:45.127609 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.712317882 podStartE2EDuration="10.127589661s" podCreationTimestamp="2025-09-29 09:57:35 +0000 UTC" firstStartedPulling="2025-09-29 09:57:36.560387456 +0000 UTC m=+1192.416315484" 
lastFinishedPulling="2025-09-29 09:57:43.975659235 +0000 UTC m=+1199.831587263" observedRunningTime="2025-09-29 09:57:45.114431756 +0000 UTC m=+1200.970359804" watchObservedRunningTime="2025-09-29 09:57:45.127589661 +0000 UTC m=+1200.983517689" Sep 29 09:57:45 crc kubenswrapper[4991]: I0929 09:57:45.573136 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" Sep 29 09:57:45 crc kubenswrapper[4991]: I0929 09:57:45.645552 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9tt7r"] Sep 29 09:57:45 crc kubenswrapper[4991]: I0929 09:57:45.649186 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r" podUID="9811182d-18c6-4f7f-8cf9-426d5f5d3c0c" containerName="dnsmasq-dns" containerID="cri-o://1e3cf3f3154016fade1fd3ed4636d9c039b484b378cef310f70e9738d2c558ca" gracePeriod=10 Sep 29 09:57:45 crc kubenswrapper[4991]: I0929 09:57:45.988615 4991 generic.go:334] "Generic (PLEG): container finished" podID="71fdae0b-4fc4-4673-9088-80dd01eb7ce8" containerID="7910fd79a1c65359a5d7994d2bfbc4276d57b7a7ff1a657124cf305e57695eac" exitCode=0 Sep 29 09:57:45 crc kubenswrapper[4991]: I0929 09:57:45.988902 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"71fdae0b-4fc4-4673-9088-80dd01eb7ce8","Type":"ContainerDied","Data":"7910fd79a1c65359a5d7994d2bfbc4276d57b7a7ff1a657124cf305e57695eac"} Sep 29 09:57:45 crc kubenswrapper[4991]: I0929 09:57:45.994680 4991 generic.go:334] "Generic (PLEG): container finished" podID="9811182d-18c6-4f7f-8cf9-426d5f5d3c0c" containerID="1e3cf3f3154016fade1fd3ed4636d9c039b484b378cef310f70e9738d2c558ca" exitCode=0 Sep 29 09:57:45 crc kubenswrapper[4991]: I0929 09:57:45.994772 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r" event={"ID":"9811182d-18c6-4f7f-8cf9-426d5f5d3c0c","Type":"ContainerDied","Data":"1e3cf3f3154016fade1fd3ed4636d9c039b484b378cef310f70e9738d2c558ca"} Sep 29 09:57:46 crc kubenswrapper[4991]: I0929 09:57:46.006126 4991 generic.go:334] "Generic (PLEG): container finished" podID="03fc3061-8749-42bc-827d-f8bdb437fe58" containerID="35a17484b805a74cc582ae3719db59382d4d124307e40e00b961d1b4d67fc0bf" exitCode=0 Sep 29 09:57:46 crc kubenswrapper[4991]: I0929 09:57:46.007169 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hbj7w" event={"ID":"03fc3061-8749-42bc-827d-f8bdb437fe58","Type":"ContainerDied","Data":"35a17484b805a74cc582ae3719db59382d4d124307e40e00b961d1b4d67fc0bf"} Sep 29 09:57:46 crc kubenswrapper[4991]: I0929 09:57:46.451835 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r" Sep 29 09:57:46 crc kubenswrapper[4991]: I0929 09:57:46.574243 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9811182d-18c6-4f7f-8cf9-426d5f5d3c0c-config\") pod \"9811182d-18c6-4f7f-8cf9-426d5f5d3c0c\" (UID: \"9811182d-18c6-4f7f-8cf9-426d5f5d3c0c\") " Sep 29 09:57:46 crc kubenswrapper[4991]: I0929 09:57:46.574396 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h59cp\" (UniqueName: \"kubernetes.io/projected/9811182d-18c6-4f7f-8cf9-426d5f5d3c0c-kube-api-access-h59cp\") pod \"9811182d-18c6-4f7f-8cf9-426d5f5d3c0c\" (UID: \"9811182d-18c6-4f7f-8cf9-426d5f5d3c0c\") " Sep 29 09:57:46 crc kubenswrapper[4991]: I0929 09:57:46.574424 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9811182d-18c6-4f7f-8cf9-426d5f5d3c0c-dns-svc\") pod \"9811182d-18c6-4f7f-8cf9-426d5f5d3c0c\" (UID: \"9811182d-18c6-4f7f-8cf9-426d5f5d3c0c\") " Sep 29 09:57:46 crc kubenswrapper[4991]: I0929 09:57:46.648385 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9811182d-18c6-4f7f-8cf9-426d5f5d3c0c-kube-api-access-h59cp" (OuterVolumeSpecName: "kube-api-access-h59cp") pod "9811182d-18c6-4f7f-8cf9-426d5f5d3c0c" (UID: "9811182d-18c6-4f7f-8cf9-426d5f5d3c0c"). InnerVolumeSpecName "kube-api-access-h59cp". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:57:46 crc kubenswrapper[4991]: I0929 09:57:46.677432 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h59cp\" (UniqueName: \"kubernetes.io/projected/9811182d-18c6-4f7f-8cf9-426d5f5d3c0c-kube-api-access-h59cp\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:46 crc kubenswrapper[4991]: I0929 09:57:46.702673 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9811182d-18c6-4f7f-8cf9-426d5f5d3c0c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9811182d-18c6-4f7f-8cf9-426d5f5d3c0c" (UID: "9811182d-18c6-4f7f-8cf9-426d5f5d3c0c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:57:46 crc kubenswrapper[4991]: I0929 09:57:46.713078 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-d9p8b" Sep 29 09:57:46 crc kubenswrapper[4991]: I0929 09:57:46.715776 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9811182d-18c6-4f7f-8cf9-426d5f5d3c0c-config" (OuterVolumeSpecName: "config") pod "9811182d-18c6-4f7f-8cf9-426d5f5d3c0c" (UID: "9811182d-18c6-4f7f-8cf9-426d5f5d3c0c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:57:46 crc kubenswrapper[4991]: I0929 09:57:46.725995 4991 util.go:48] "No ready sandbox for pod can be found. 
Sep 29 09:57:46 crc kubenswrapper[4991]: I0929 09:57:46.778568 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmxgh\" (UniqueName: \"kubernetes.io/projected/40c49f55-8da0-4a5b-8a85-78f2665dddf5-kube-api-access-qmxgh\") pod \"40c49f55-8da0-4a5b-8a85-78f2665dddf5\" (UID: \"40c49f55-8da0-4a5b-8a85-78f2665dddf5\") "
Sep 29 09:57:46 crc kubenswrapper[4991]: I0929 09:57:46.778677 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9xlp\" (UniqueName: \"kubernetes.io/projected/71eeb128-047f-4efb-b2ef-6b2e69c1f23a-kube-api-access-x9xlp\") pod \"71eeb128-047f-4efb-b2ef-6b2e69c1f23a\" (UID: \"71eeb128-047f-4efb-b2ef-6b2e69c1f23a\") "
Sep 29 09:57:46 crc kubenswrapper[4991]: I0929 09:57:46.780466 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9811182d-18c6-4f7f-8cf9-426d5f5d3c0c-config\") on node \"crc\" DevicePath \"\""
Sep 29 09:57:46 crc kubenswrapper[4991]: I0929 09:57:46.780496 4991 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9811182d-18c6-4f7f-8cf9-426d5f5d3c0c-dns-svc\") on node \"crc\" DevicePath \"\""
Sep 29 09:57:46 crc kubenswrapper[4991]: I0929 09:57:46.784929 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40c49f55-8da0-4a5b-8a85-78f2665dddf5-kube-api-access-qmxgh" (OuterVolumeSpecName: "kube-api-access-qmxgh") pod "40c49f55-8da0-4a5b-8a85-78f2665dddf5" (UID: "40c49f55-8da0-4a5b-8a85-78f2665dddf5"). InnerVolumeSpecName "kube-api-access-qmxgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:57:46 crc kubenswrapper[4991]: I0929 09:57:46.785343 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71eeb128-047f-4efb-b2ef-6b2e69c1f23a-kube-api-access-x9xlp" (OuterVolumeSpecName: "kube-api-access-x9xlp") pod "71eeb128-047f-4efb-b2ef-6b2e69c1f23a" (UID: "71eeb128-047f-4efb-b2ef-6b2e69c1f23a"). InnerVolumeSpecName "kube-api-access-x9xlp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:57:46 crc kubenswrapper[4991]: E0929 09:57:46.869208 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30e6fe0c_d910_462d_8181_b99f4b28091f.slice/crio-ed978ee5eb9dc72c373392ddcedd6ab53699ab604b87eb2c882a1b37204a079d.scope\": RecentStats: unable to find data in memory cache]"
Sep 29 09:57:46 crc kubenswrapper[4991]: I0929 09:57:46.882496 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmxgh\" (UniqueName: \"kubernetes.io/projected/40c49f55-8da0-4a5b-8a85-78f2665dddf5-kube-api-access-qmxgh\") on node \"crc\" DevicePath \"\""
Sep 29 09:57:46 crc kubenswrapper[4991]: I0929 09:57:46.882536 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9xlp\" (UniqueName: \"kubernetes.io/projected/71eeb128-047f-4efb-b2ef-6b2e69c1f23a-kube-api-access-x9xlp\") on node \"crc\" DevicePath \"\""
Sep 29 09:57:47 crc kubenswrapper[4991]: I0929 09:57:47.016199 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bvdvb" event={"ID":"71eeb128-047f-4efb-b2ef-6b2e69c1f23a","Type":"ContainerDied","Data":"eb6d83792b61232e282ec1e87c91b8c1ed287fff086892030a80fe96f85a2cd7"}
Sep 29 09:57:47 crc kubenswrapper[4991]: I0929 09:57:47.016470 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb6d83792b61232e282ec1e87c91b8c1ed287fff086892030a80fe96f85a2cd7"
Sep 29 09:57:47 crc kubenswrapper[4991]: I0929 09:57:47.016412 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bvdvb"
Sep 29 09:57:47 crc kubenswrapper[4991]: I0929 09:57:47.017529 4991 generic.go:334] "Generic (PLEG): container finished" podID="30e6fe0c-d910-462d-8181-b99f4b28091f" containerID="ed978ee5eb9dc72c373392ddcedd6ab53699ab604b87eb2c882a1b37204a079d" exitCode=0
Sep 29 09:57:47 crc kubenswrapper[4991]: I0929 09:57:47.017577 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"30e6fe0c-d910-462d-8181-b99f4b28091f","Type":"ContainerDied","Data":"ed978ee5eb9dc72c373392ddcedd6ab53699ab604b87eb2c882a1b37204a079d"}
Sep 29 09:57:47 crc kubenswrapper[4991]: I0929 09:57:47.019140 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-d9p8b"
Sep 29 09:57:47 crc kubenswrapper[4991]: I0929 09:57:47.019195 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-d9p8b" event={"ID":"40c49f55-8da0-4a5b-8a85-78f2665dddf5","Type":"ContainerDied","Data":"34f826d05aad98ba4c967efdf51224cc24412f6ebca654f78ebc020ecf3ac921"}
Sep 29 09:57:47 crc kubenswrapper[4991]: I0929 09:57:47.019253 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34f826d05aad98ba4c967efdf51224cc24412f6ebca654f78ebc020ecf3ac921"
Sep 29 09:57:47 crc kubenswrapper[4991]: I0929 09:57:47.021393 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"71fdae0b-4fc4-4673-9088-80dd01eb7ce8","Type":"ContainerStarted","Data":"a6b30445ea27df6dddee6be63497a9e70ff273642530d6926415c40cad173d44"}
Sep 29 09:57:47 crc kubenswrapper[4991]: I0929 09:57:47.021840 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Sep 29 09:57:47 crc kubenswrapper[4991]: I0929 09:57:47.025357 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r" event={"ID":"9811182d-18c6-4f7f-8cf9-426d5f5d3c0c","Type":"ContainerDied","Data":"27d7e25fabe36e2232719052fdca73c31285a049e6132799c1bc9bc666e8bdb7"}
Sep 29 09:57:47 crc kubenswrapper[4991]: I0929 09:57:47.025389 4991 scope.go:117] "RemoveContainer" containerID="1e3cf3f3154016fade1fd3ed4636d9c039b484b378cef310f70e9738d2c558ca"
Sep 29 09:57:47 crc kubenswrapper[4991]: I0929 09:57:47.025423 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9tt7r"
Sep 29 09:57:47 crc kubenswrapper[4991]: I0929 09:57:47.454784 4991 scope.go:117] "RemoveContainer" containerID="298cb352ec916c74ded5d9a2041c177a7c818c26e7fd91a1b7a389e468c6d334"
Sep 29 09:57:47 crc kubenswrapper[4991]: I0929 09:57:47.469981 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hbj7w"
Sep 29 09:57:47 crc kubenswrapper[4991]: I0929 09:57:47.488609 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.706742547 podStartE2EDuration="51.488587654s" podCreationTimestamp="2025-09-29 09:56:56 +0000 UTC" firstStartedPulling="2025-09-29 09:57:03.204132021 +0000 UTC m=+1159.060060049" lastFinishedPulling="2025-09-29 09:57:14.985977118 +0000 UTC m=+1170.841905156" observedRunningTime="2025-09-29 09:57:47.082880824 +0000 UTC m=+1202.938808872" watchObservedRunningTime="2025-09-29 09:57:47.488587654 +0000 UTC m=+1203.344515682"
Sep 29 09:57:47 crc kubenswrapper[4991]: I0929 09:57:47.498282 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9np6j\" (UniqueName: \"kubernetes.io/projected/03fc3061-8749-42bc-827d-f8bdb437fe58-kube-api-access-9np6j\") pod \"03fc3061-8749-42bc-827d-f8bdb437fe58\" (UID: \"03fc3061-8749-42bc-827d-f8bdb437fe58\") "
Sep 29 09:57:47 crc kubenswrapper[4991]: I0929 09:57:47.500898 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9tt7r"]
Sep 29 09:57:47 crc kubenswrapper[4991]: I0929 09:57:47.503491 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03fc3061-8749-42bc-827d-f8bdb437fe58-kube-api-access-9np6j" (OuterVolumeSpecName: "kube-api-access-9np6j") pod "03fc3061-8749-42bc-827d-f8bdb437fe58" (UID: "03fc3061-8749-42bc-827d-f8bdb437fe58"). InnerVolumeSpecName "kube-api-access-9np6j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:57:47 crc kubenswrapper[4991]: I0929 09:57:47.508907 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9tt7r"]
Sep 29 09:57:47 crc kubenswrapper[4991]: I0929 09:57:47.622774 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9np6j\" (UniqueName: \"kubernetes.io/projected/03fc3061-8749-42bc-827d-f8bdb437fe58-kube-api-access-9np6j\") on node \"crc\" DevicePath \"\""
Sep 29 09:57:48 crc kubenswrapper[4991]: I0929 09:57:48.038553 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"30e6fe0c-d910-462d-8181-b99f4b28091f","Type":"ContainerStarted","Data":"a7e3c30afcc534a481914ee298121a079ed738e1d07dddb66ab852e036c2478d"}
Sep 29 09:57:48 crc kubenswrapper[4991]: I0929 09:57:48.039826 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Sep 29 09:57:48 crc kubenswrapper[4991]: I0929 09:57:48.043064 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c34c9dcd-aa7a-4cf8-9295-fb6d85190123","Type":"ContainerStarted","Data":"b9ef7a4660e829242151b71b233301094242a3d1268bc0f4a1d4b98103f458f0"}
Sep 29 09:57:48 crc kubenswrapper[4991]: I0929 09:57:48.047358 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-hbj7w" Sep 29 09:57:48 crc kubenswrapper[4991]: I0929 09:57:48.050262 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hbj7w" event={"ID":"03fc3061-8749-42bc-827d-f8bdb437fe58","Type":"ContainerDied","Data":"01d6c88091983540d3268f527641de4c2d28cf91cce7285b8990404c7ead58fc"} Sep 29 09:57:48 crc kubenswrapper[4991]: I0929 09:57:48.050341 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01d6c88091983540d3268f527641de4c2d28cf91cce7285b8990404c7ead58fc" Sep 29 09:57:48 crc kubenswrapper[4991]: I0929 09:57:48.071183 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=52.071166171 podStartE2EDuration="52.071166171s" podCreationTimestamp="2025-09-29 09:56:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:57:48.066279572 +0000 UTC m=+1203.922207620" watchObservedRunningTime="2025-09-29 09:57:48.071166171 +0000 UTC m=+1203.927094199" Sep 29 09:57:48 crc kubenswrapper[4991]: I0929 09:57:48.937061 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9811182d-18c6-4f7f-8cf9-426d5f5d3c0c" path="/var/lib/kubelet/pods/9811182d-18c6-4f7f-8cf9-426d5f5d3c0c/volumes" Sep 29 09:57:49 crc kubenswrapper[4991]: I0929 09:57:49.718772 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7fbf9bb75f-prqdm" podUID="223e3abd-c0d7-4392-8965-368d980373a1" containerName="console" containerID="cri-o://345a2772695546b42fe56df8d080a7b5843090537a517a8e213fa1e7a65d7508" gracePeriod=15 Sep 29 09:57:49 crc kubenswrapper[4991]: I0929 09:57:49.763735 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-etc-swift\") pod \"swift-storage-0\" (UID: \"81d702cb-530c-441d-b686-f205337a2aea\") " pod="openstack/swift-storage-0" Sep 29 09:57:49 crc kubenswrapper[4991]: E0929 09:57:49.764095 4991 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 29 09:57:49 crc kubenswrapper[4991]: E0929 09:57:49.764145 4991 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Sep 29 09:57:49 crc kubenswrapper[4991]: E0929 09:57:49.764205 4991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-etc-swift podName:81d702cb-530c-441d-b686-f205337a2aea nodeName:}" failed. No retries permitted until 2025-09-29 09:58:05.764187273 +0000 UTC m=+1221.620115301 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-etc-swift") pod "swift-storage-0" (UID: "81d702cb-530c-441d-b686-f205337a2aea") : configmap "swift-ring-files" not found Sep 29 09:57:49 crc kubenswrapper[4991]: I0929 09:57:49.971257 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-z76bp"] Sep 29 09:57:49 crc kubenswrapper[4991]: E0929 09:57:49.971693 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71eeb128-047f-4efb-b2ef-6b2e69c1f23a" containerName="mariadb-database-create" Sep 29 09:57:49 crc kubenswrapper[4991]: I0929 09:57:49.971707 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="71eeb128-047f-4efb-b2ef-6b2e69c1f23a" containerName="mariadb-database-create" Sep 29 09:57:49 crc kubenswrapper[4991]: E0929 09:57:49.971720 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9811182d-18c6-4f7f-8cf9-426d5f5d3c0c" containerName="init" Sep 29 09:57:49 crc kubenswrapper[4991]: I0929 09:57:49.971726 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="9811182d-18c6-4f7f-8cf9-426d5f5d3c0c" containerName="init" Sep 29 09:57:49 crc kubenswrapper[4991]: E0929 09:57:49.971743 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9811182d-18c6-4f7f-8cf9-426d5f5d3c0c" containerName="dnsmasq-dns" Sep 29 09:57:49 crc kubenswrapper[4991]: I0929 09:57:49.971749 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="9811182d-18c6-4f7f-8cf9-426d5f5d3c0c" containerName="dnsmasq-dns" Sep 29 09:57:49 crc kubenswrapper[4991]: E0929 09:57:49.971763 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40c49f55-8da0-4a5b-8a85-78f2665dddf5" containerName="mariadb-database-create" Sep 29 09:57:49 crc kubenswrapper[4991]: I0929 09:57:49.971769 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="40c49f55-8da0-4a5b-8a85-78f2665dddf5" containerName="mariadb-database-create" Sep 29 09:57:49 crc kubenswrapper[4991]: E0929 09:57:49.971784 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03fc3061-8749-42bc-827d-f8bdb437fe58" containerName="mariadb-database-create" Sep 29 09:57:49 crc kubenswrapper[4991]: I0929 09:57:49.971790 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="03fc3061-8749-42bc-827d-f8bdb437fe58" containerName="mariadb-database-create" Sep 29 09:57:49 crc kubenswrapper[4991]: I0929 09:57:49.972121 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="9811182d-18c6-4f7f-8cf9-426d5f5d3c0c" containerName="dnsmasq-dns" Sep 29 09:57:49 crc kubenswrapper[4991]: I0929 09:57:49.972138 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="71eeb128-047f-4efb-b2ef-6b2e69c1f23a" containerName="mariadb-database-create" Sep 29 09:57:49 crc kubenswrapper[4991]: I0929 09:57:49.972152 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="03fc3061-8749-42bc-827d-f8bdb437fe58" containerName="mariadb-database-create" Sep 29 09:57:49 crc kubenswrapper[4991]: I0929 09:57:49.972173 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="40c49f55-8da0-4a5b-8a85-78f2665dddf5" containerName="mariadb-database-create" Sep 29 09:57:49 crc kubenswrapper[4991]: I0929 09:57:49.972812 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-z76bp" Sep 29 09:57:50 crc kubenswrapper[4991]: I0929 09:57:49.997829 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-z76bp"] Sep 29 09:57:50 crc kubenswrapper[4991]: I0929 09:57:50.067700 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7fbf9bb75f-prqdm_223e3abd-c0d7-4392-8965-368d980373a1/console/0.log" Sep 29 09:57:50 crc kubenswrapper[4991]: I0929 09:57:50.067957 4991 generic.go:334] "Generic (PLEG): container finished" podID="223e3abd-c0d7-4392-8965-368d980373a1" containerID="345a2772695546b42fe56df8d080a7b5843090537a517a8e213fa1e7a65d7508" exitCode=2 Sep 29 09:57:50 crc kubenswrapper[4991]: I0929 09:57:50.067982 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fbf9bb75f-prqdm" event={"ID":"223e3abd-c0d7-4392-8965-368d980373a1","Type":"ContainerDied","Data":"345a2772695546b42fe56df8d080a7b5843090537a517a8e213fa1e7a65d7508"} Sep 29 09:57:50 crc kubenswrapper[4991]: I0929 09:57:50.070143 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwm25\" (UniqueName: \"kubernetes.io/projected/6df4a675-bdfa-4eb3-adc8-5f318c403e1c-kube-api-access-qwm25\") pod \"keystone-db-create-z76bp\" (UID: \"6df4a675-bdfa-4eb3-adc8-5f318c403e1c\") " pod="openstack/keystone-db-create-z76bp" Sep 29 09:57:50 crc kubenswrapper[4991]: I0929 09:57:50.172120 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwm25\" (UniqueName: \"kubernetes.io/projected/6df4a675-bdfa-4eb3-adc8-5f318c403e1c-kube-api-access-qwm25\") pod \"keystone-db-create-z76bp\" (UID: \"6df4a675-bdfa-4eb3-adc8-5f318c403e1c\") " pod="openstack/keystone-db-create-z76bp" Sep 29 09:57:50 crc kubenswrapper[4991]: I0929 09:57:50.189446 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwm25\" (UniqueName: \"kubernetes.io/projected/6df4a675-bdfa-4eb3-adc8-5f318c403e1c-kube-api-access-qwm25\") pod \"keystone-db-create-z76bp\" (UID: \"6df4a675-bdfa-4eb3-adc8-5f318c403e1c\") " pod="openstack/keystone-db-create-z76bp" Sep 29 09:57:50 crc kubenswrapper[4991]: I0929 09:57:50.289974 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-z76bp" Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.444222 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7fbf9bb75f-prqdm_223e3abd-c0d7-4392-8965-368d980373a1/console/0.log" Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.444761 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.515995 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-oauth-serving-cert\") pod \"223e3abd-c0d7-4392-8965-368d980373a1\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.516181 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9wzz\" (UniqueName: \"kubernetes.io/projected/223e3abd-c0d7-4392-8965-368d980373a1-kube-api-access-h9wzz\") pod \"223e3abd-c0d7-4392-8965-368d980373a1\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.517002 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "223e3abd-c0d7-4392-8965-368d980373a1" (UID: "223e3abd-c0d7-4392-8965-368d980373a1"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.517360 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/223e3abd-c0d7-4392-8965-368d980373a1-console-oauth-config\") pod \"223e3abd-c0d7-4392-8965-368d980373a1\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.517395 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-service-ca\") pod \"223e3abd-c0d7-4392-8965-368d980373a1\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.517481 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-console-config\") pod \"223e3abd-c0d7-4392-8965-368d980373a1\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.517548 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-trusted-ca-bundle\") pod \"223e3abd-c0d7-4392-8965-368d980373a1\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.517657 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/223e3abd-c0d7-4392-8965-368d980373a1-console-serving-cert\") pod \"223e3abd-c0d7-4392-8965-368d980373a1\" (UID: \"223e3abd-c0d7-4392-8965-368d980373a1\") " Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.517869 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-service-ca" (OuterVolumeSpecName: "service-ca") pod "223e3abd-c0d7-4392-8965-368d980373a1" (UID: "223e3abd-c0d7-4392-8965-368d980373a1"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.518171 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-console-config" (OuterVolumeSpecName: "console-config") pod "223e3abd-c0d7-4392-8965-368d980373a1" (UID: "223e3abd-c0d7-4392-8965-368d980373a1"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.518606 4991 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-console-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.518627 4991 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.518639 4991 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-service-ca\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.518595 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "223e3abd-c0d7-4392-8965-368d980373a1" (UID: "223e3abd-c0d7-4392-8965-368d980373a1"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.531204 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/223e3abd-c0d7-4392-8965-368d980373a1-kube-api-access-h9wzz" (OuterVolumeSpecName: "kube-api-access-h9wzz") pod "223e3abd-c0d7-4392-8965-368d980373a1" (UID: "223e3abd-c0d7-4392-8965-368d980373a1"). InnerVolumeSpecName "kube-api-access-h9wzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.532140 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/223e3abd-c0d7-4392-8965-368d980373a1-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "223e3abd-c0d7-4392-8965-368d980373a1" (UID: "223e3abd-c0d7-4392-8965-368d980373a1"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.543163 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/223e3abd-c0d7-4392-8965-368d980373a1-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "223e3abd-c0d7-4392-8965-368d980373a1" (UID: "223e3abd-c0d7-4392-8965-368d980373a1"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.570803 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-z76bp"] Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.620518 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9wzz\" (UniqueName: \"kubernetes.io/projected/223e3abd-c0d7-4392-8965-368d980373a1-kube-api-access-h9wzz\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.620550 4991 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/223e3abd-c0d7-4392-8965-368d980373a1-console-oauth-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.620559 4991 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/223e3abd-c0d7-4392-8965-368d980373a1-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:51 crc kubenswrapper[4991]: I0929 09:57:51.620568 4991 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/223e3abd-c0d7-4392-8965-368d980373a1-console-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.087400 4991 generic.go:334] "Generic (PLEG): container finished" podID="6df4a675-bdfa-4eb3-adc8-5f318c403e1c" containerID="54fd573d566fc8964fc38692ccd0ddf99663d601664f64f37d0dcc2686164cc5" exitCode=0 Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.087619 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-z76bp" event={"ID":"6df4a675-bdfa-4eb3-adc8-5f318c403e1c","Type":"ContainerDied","Data":"54fd573d566fc8964fc38692ccd0ddf99663d601664f64f37d0dcc2686164cc5"} Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.087912 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-z76bp" event={"ID":"6df4a675-bdfa-4eb3-adc8-5f318c403e1c","Type":"ContainerStarted","Data":"9244e3852768ad08469464c8cac21022f33f50e206223ff09c3093e3c7eb2d09"} Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.090291 4991 generic.go:334] "Generic (PLEG): container finished" podID="cf1a0436-f3cf-4582-9634-58f354d9badf" containerID="351339d64f082c28d332007d610267ab830d2b1e783488f79edf51b554a4508c" exitCode=0 Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.090377 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zftlr" event={"ID":"cf1a0436-f3cf-4582-9634-58f354d9badf","Type":"ContainerDied","Data":"351339d64f082c28d332007d610267ab830d2b1e783488f79edf51b554a4508c"} Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.093668 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c34c9dcd-aa7a-4cf8-9295-fb6d85190123","Type":"ContainerStarted","Data":"de193f104eb0ab4333fbe537969f6a2ebc8e315982d7c52923dfc4677fdb7ead"} Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.095404 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7fbf9bb75f-prqdm_223e3abd-c0d7-4392-8965-368d980373a1/console/0.log" Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.095470 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fbf9bb75f-prqdm" 
event={"ID":"223e3abd-c0d7-4392-8965-368d980373a1","Type":"ContainerDied","Data":"335c3407f330a50e15db1766a6f4635875018f45256a0ba7f6803b4ad7717ebc"} Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.095501 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7fbf9bb75f-prqdm" Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.095506 4991 scope.go:117] "RemoveContainer" containerID="345a2772695546b42fe56df8d080a7b5843090537a517a8e213fa1e7a65d7508" Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.138800 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=15.913809025 podStartE2EDuration="50.138774803s" podCreationTimestamp="2025-09-29 09:57:02 +0000 UTC" firstStartedPulling="2025-09-29 09:57:16.8829573 +0000 UTC m=+1172.738885328" lastFinishedPulling="2025-09-29 09:57:51.107923078 +0000 UTC m=+1206.963851106" observedRunningTime="2025-09-29 09:57:52.130546147 +0000 UTC m=+1207.986474165" watchObservedRunningTime="2025-09-29 09:57:52.138774803 +0000 UTC m=+1207.994702831" Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.170699 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7fbf9bb75f-prqdm"] Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.186438 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7fbf9bb75f-prqdm"] Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.500602 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-111f-account-create-njlb9"] Sep 29 09:57:52 crc kubenswrapper[4991]: E0929 09:57:52.501015 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="223e3abd-c0d7-4392-8965-368d980373a1" containerName="console" Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.501030 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="223e3abd-c0d7-4392-8965-368d980373a1" containerName="console" Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.501267 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="223e3abd-c0d7-4392-8965-368d980373a1" containerName="console" Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.502186 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-111f-account-create-njlb9" Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.516532 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.529647 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-111f-account-create-njlb9"] Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.541349 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkhbb\" (UniqueName: \"kubernetes.io/projected/eca13a6c-59fc-42e9-b16d-473eaedb72b3-kube-api-access-kkhbb\") pod \"mysqld-exporter-111f-account-create-njlb9\" (UID: \"eca13a6c-59fc-42e9-b16d-473eaedb72b3\") " pod="openstack/mysqld-exporter-111f-account-create-njlb9" Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.643469 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkhbb\" (UniqueName: \"kubernetes.io/projected/eca13a6c-59fc-42e9-b16d-473eaedb72b3-kube-api-access-kkhbb\") pod \"mysqld-exporter-111f-account-create-njlb9\" (UID: \"eca13a6c-59fc-42e9-b16d-473eaedb72b3\") " pod="openstack/mysqld-exporter-111f-account-create-njlb9" Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.665094 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkhbb\" (UniqueName: \"kubernetes.io/projected/eca13a6c-59fc-42e9-b16d-473eaedb72b3-kube-api-access-kkhbb\") pod \"mysqld-exporter-111f-account-create-njlb9\" (UID: \"eca13a6c-59fc-42e9-b16d-473eaedb72b3\") " pod="openstack/mysqld-exporter-111f-account-create-njlb9" Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.831531 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-111f-account-create-njlb9" Sep 29 09:57:52 crc kubenswrapper[4991]: I0929 09:57:52.946178 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="223e3abd-c0d7-4392-8965-368d980373a1" path="/var/lib/kubelet/pods/223e3abd-c0d7-4392-8965-368d980373a1/volumes" Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.265349 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-111f-account-create-njlb9"] Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.641073 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-z76bp" Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.659538 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.777657 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf1a0436-f3cf-4582-9634-58f354d9badf-combined-ca-bundle\") pod \"cf1a0436-f3cf-4582-9634-58f354d9badf\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.777719 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cf1a0436-f3cf-4582-9634-58f354d9badf-ring-data-devices\") pod \"cf1a0436-f3cf-4582-9634-58f354d9badf\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.777785 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cf1a0436-f3cf-4582-9634-58f354d9badf-swiftconf\") pod \"cf1a0436-f3cf-4582-9634-58f354d9badf\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.777805 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf1a0436-f3cf-4582-9634-58f354d9badf-scripts\") pod \"cf1a0436-f3cf-4582-9634-58f354d9badf\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.777858 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwm25\" (UniqueName: \"kubernetes.io/projected/6df4a675-bdfa-4eb3-adc8-5f318c403e1c-kube-api-access-qwm25\") pod \"6df4a675-bdfa-4eb3-adc8-5f318c403e1c\" (UID: \"6df4a675-bdfa-4eb3-adc8-5f318c403e1c\") " Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.777903 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0436-f3cf-4582-9634-58f354d9badf-etc-swift\") pod \"cf1a0436-f3cf-4582-9634-58f354d9badf\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.777937 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cf1a0436-f3cf-4582-9634-58f354d9badf-dispersionconf\") pod \"cf1a0436-f3cf-4582-9634-58f354d9badf\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.777967 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5bn4\" (UniqueName: \"kubernetes.io/projected/cf1a0436-f3cf-4582-9634-58f354d9badf-kube-api-access-z5bn4\") pod \"cf1a0436-f3cf-4582-9634-58f354d9badf\" (UID: \"cf1a0436-f3cf-4582-9634-58f354d9badf\") " Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.779429 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf1a0436-f3cf-4582-9634-58f354d9badf-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "cf1a0436-f3cf-4582-9634-58f354d9badf" (UID: "cf1a0436-f3cf-4582-9634-58f354d9badf"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.780700 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf1a0436-f3cf-4582-9634-58f354d9badf-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "cf1a0436-f3cf-4582-9634-58f354d9badf" (UID: "cf1a0436-f3cf-4582-9634-58f354d9badf"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.783305 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6df4a675-bdfa-4eb3-adc8-5f318c403e1c-kube-api-access-qwm25" (OuterVolumeSpecName: "kube-api-access-qwm25") pod "6df4a675-bdfa-4eb3-adc8-5f318c403e1c" (UID: "6df4a675-bdfa-4eb3-adc8-5f318c403e1c"). InnerVolumeSpecName "kube-api-access-qwm25". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.793152 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf1a0436-f3cf-4582-9634-58f354d9badf-kube-api-access-z5bn4" (OuterVolumeSpecName: "kube-api-access-z5bn4") pod "cf1a0436-f3cf-4582-9634-58f354d9badf" (UID: "cf1a0436-f3cf-4582-9634-58f354d9badf"). InnerVolumeSpecName "kube-api-access-z5bn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.793208 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf1a0436-f3cf-4582-9634-58f354d9badf-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "cf1a0436-f3cf-4582-9634-58f354d9badf" (UID: "cf1a0436-f3cf-4582-9634-58f354d9badf"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.806727 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf1a0436-f3cf-4582-9634-58f354d9badf-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "cf1a0436-f3cf-4582-9634-58f354d9badf" (UID: "cf1a0436-f3cf-4582-9634-58f354d9badf"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.808051 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf1a0436-f3cf-4582-9634-58f354d9badf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf1a0436-f3cf-4582-9634-58f354d9badf" (UID: "cf1a0436-f3cf-4582-9634-58f354d9badf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.811895 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf1a0436-f3cf-4582-9634-58f354d9badf-scripts" (OuterVolumeSpecName: "scripts") pod "cf1a0436-f3cf-4582-9634-58f354d9badf" (UID: "cf1a0436-f3cf-4582-9634-58f354d9badf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.879924 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf1a0436-f3cf-4582-9634-58f354d9badf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.879969 4991 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cf1a0436-f3cf-4582-9634-58f354d9badf-ring-data-devices\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.879979 4991 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cf1a0436-f3cf-4582-9634-58f354d9badf-swiftconf\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.879987 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf1a0436-f3cf-4582-9634-58f354d9badf-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.879997 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwm25\" (UniqueName: \"kubernetes.io/projected/6df4a675-bdfa-4eb3-adc8-5f318c403e1c-kube-api-access-qwm25\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.880007 4991 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cf1a0436-f3cf-4582-9634-58f354d9badf-etc-swift\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.880014 4991 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cf1a0436-f3cf-4582-9634-58f354d9badf-dispersionconf\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:53 crc kubenswrapper[4991]: I0929 09:57:53.880022 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5bn4\" (UniqueName: \"kubernetes.io/projected/cf1a0436-f3cf-4582-9634-58f354d9badf-kube-api-access-z5bn4\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:54 crc kubenswrapper[4991]: I0929 09:57:54.124046 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-z76bp" Sep 29 09:57:54 crc kubenswrapper[4991]: I0929 09:57:54.124047 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-z76bp" event={"ID":"6df4a675-bdfa-4eb3-adc8-5f318c403e1c","Type":"ContainerDied","Data":"9244e3852768ad08469464c8cac21022f33f50e206223ff09c3093e3c7eb2d09"} Sep 29 09:57:54 crc kubenswrapper[4991]: I0929 09:57:54.124293 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9244e3852768ad08469464c8cac21022f33f50e206223ff09c3093e3c7eb2d09" Sep 29 09:57:54 crc kubenswrapper[4991]: I0929 09:57:54.125786 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-zftlr" Sep 29 09:57:54 crc kubenswrapper[4991]: I0929 09:57:54.125815 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zftlr" event={"ID":"cf1a0436-f3cf-4582-9634-58f354d9badf","Type":"ContainerDied","Data":"c924fbca35a918d757823d66e6050877fcac49198d6e3c073a59545ebbc0971f"} Sep 29 09:57:54 crc kubenswrapper[4991]: I0929 09:57:54.126234 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c924fbca35a918d757823d66e6050877fcac49198d6e3c073a59545ebbc0971f" Sep 29 09:57:54 crc kubenswrapper[4991]: I0929 09:57:54.127082 4991 generic.go:334] "Generic (PLEG): container finished" podID="eca13a6c-59fc-42e9-b16d-473eaedb72b3" containerID="73de9b7e949bec32028473461c7ebae41b00f5e32f3856afc89eb7684243ef89" exitCode=0 Sep 29 09:57:54 crc kubenswrapper[4991]: I0929 09:57:54.127117 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-111f-account-create-njlb9" event={"ID":"eca13a6c-59fc-42e9-b16d-473eaedb72b3","Type":"ContainerDied","Data":"73de9b7e949bec32028473461c7ebae41b00f5e32f3856afc89eb7684243ef89"} Sep 29 09:57:54 crc kubenswrapper[4991]: I0929 09:57:54.127136 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-111f-account-create-njlb9" event={"ID":"eca13a6c-59fc-42e9-b16d-473eaedb72b3","Type":"ContainerStarted","Data":"d500e18864a56356fbe41ca185ff32af647423509b847814ab77eaebd0187c64"} Sep 29 09:57:54 crc kubenswrapper[4991]: I0929 09:57:54.243977 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Sep 29 09:57:55 crc kubenswrapper[4991]: I0929 09:57:55.619856 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-111f-account-create-njlb9" Sep 29 09:57:55 crc kubenswrapper[4991]: I0929 09:57:55.700989 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Sep 29 09:57:55 crc kubenswrapper[4991]: I0929 09:57:55.715178 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkhbb\" (UniqueName: \"kubernetes.io/projected/eca13a6c-59fc-42e9-b16d-473eaedb72b3-kube-api-access-kkhbb\") pod \"eca13a6c-59fc-42e9-b16d-473eaedb72b3\" (UID: \"eca13a6c-59fc-42e9-b16d-473eaedb72b3\") " Sep 29 09:57:55 crc kubenswrapper[4991]: I0929 09:57:55.722305 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eca13a6c-59fc-42e9-b16d-473eaedb72b3-kube-api-access-kkhbb" (OuterVolumeSpecName: "kube-api-access-kkhbb") pod "eca13a6c-59fc-42e9-b16d-473eaedb72b3" (UID: "eca13a6c-59fc-42e9-b16d-473eaedb72b3"). InnerVolumeSpecName "kube-api-access-kkhbb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:57:55 crc kubenswrapper[4991]: I0929 09:57:55.809178 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zmljr" podUID="d70600eb-2ef3-4db2-a1e6-050e76f25e79" containerName="ovn-controller" probeResult="failure" output=< Sep 29 09:57:55 crc kubenswrapper[4991]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Sep 29 09:57:55 crc kubenswrapper[4991]: > Sep 29 09:57:55 crc kubenswrapper[4991]: I0929 09:57:55.818267 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkhbb\" (UniqueName: \"kubernetes.io/projected/eca13a6c-59fc-42e9-b16d-473eaedb72b3-kube-api-access-kkhbb\") on node \"crc\" DevicePath \"\"" Sep 29 09:57:55 crc kubenswrapper[4991]: I0929 09:57:55.843825 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:57:56 crc kubenswrapper[4991]: I0929 09:57:56.152919 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-111f-account-create-njlb9" event={"ID":"eca13a6c-59fc-42e9-b16d-473eaedb72b3","Type":"ContainerDied","Data":"d500e18864a56356fbe41ca185ff32af647423509b847814ab77eaebd0187c64"} Sep 29 09:57:56 crc kubenswrapper[4991]: I0929 09:57:56.152997 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-111f-account-create-njlb9" Sep 29 09:57:56 crc kubenswrapper[4991]: I0929 09:57:56.153026 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d500e18864a56356fbe41ca185ff32af647423509b847814ab77eaebd0187c64" Sep 29 09:57:57 crc kubenswrapper[4991]: I0929 09:57:57.428187 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Sep 29 09:57:57 crc kubenswrapper[4991]: I0929 09:57:57.737225 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-gwqgd"] Sep 29 09:57:57 crc kubenswrapper[4991]: E0929 09:57:57.737732 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eca13a6c-59fc-42e9-b16d-473eaedb72b3" containerName="mariadb-account-create" Sep 29 09:57:57 crc kubenswrapper[4991]: I0929 09:57:57.737758 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="eca13a6c-59fc-42e9-b16d-473eaedb72b3" containerName="mariadb-account-create" Sep 29 09:57:57 crc kubenswrapper[4991]: E0929 09:57:57.737803 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1a0436-f3cf-4582-9634-58f354d9badf" containerName="swift-ring-rebalance" Sep 29 09:57:57 crc kubenswrapper[4991]: I0929 09:57:57.737815 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1a0436-f3cf-4582-9634-58f354d9badf" containerName="swift-ring-rebalance" Sep 29 09:57:57 crc kubenswrapper[4991]: E0929 09:57:57.737846 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6df4a675-bdfa-4eb3-adc8-5f318c403e1c" containerName="mariadb-database-create" Sep 29 09:57:57 crc kubenswrapper[4991]: I0929 09:57:57.737855 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="6df4a675-bdfa-4eb3-adc8-5f318c403e1c" containerName="mariadb-database-create" Sep 29 09:57:57 crc kubenswrapper[4991]: I0929 09:57:57.738136 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="eca13a6c-59fc-42e9-b16d-473eaedb72b3" containerName="mariadb-account-create" Sep 29 09:57:57 crc kubenswrapper[4991]: I0929 09:57:57.738178 4991 
memory_manager.go:354] "RemoveStaleState removing state" podUID="cf1a0436-f3cf-4582-9634-58f354d9badf" containerName="swift-ring-rebalance" Sep 29 09:57:57 crc kubenswrapper[4991]: I0929 09:57:57.738191 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="6df4a675-bdfa-4eb3-adc8-5f318c403e1c" containerName="mariadb-database-create" Sep 29 09:57:57 crc kubenswrapper[4991]: I0929 09:57:57.739900 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gwqgd" Sep 29 09:57:57 crc kubenswrapper[4991]: I0929 09:57:57.752659 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-gwqgd"] Sep 29 09:57:57 crc kubenswrapper[4991]: I0929 09:57:57.765154 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Sep 29 09:57:57 crc kubenswrapper[4991]: I0929 09:57:57.833016 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-szgsx"] Sep 29 09:57:57 crc kubenswrapper[4991]: I0929 09:57:57.834909 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-szgsx" Sep 29 09:57:57 crc kubenswrapper[4991]: I0929 09:57:57.855453 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcvg4\" (UniqueName: \"kubernetes.io/projected/8f450a4b-547e-4bb1-9548-9223070d3006-kube-api-access-gcvg4\") pod \"cinder-db-create-gwqgd\" (UID: \"8f450a4b-547e-4bb1-9548-9223070d3006\") " pod="openstack/cinder-db-create-gwqgd" Sep 29 09:57:57 crc kubenswrapper[4991]: I0929 09:57:57.898036 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-szgsx"] Sep 29 09:57:57 crc kubenswrapper[4991]: I0929 09:57:57.957910 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sczwq\" (UniqueName: \"kubernetes.io/projected/93ae0933-0cd5-4927-9bdd-d24f1c9055d5-kube-api-access-sczwq\") pod \"barbican-db-create-szgsx\" (UID: \"93ae0933-0cd5-4927-9bdd-d24f1c9055d5\") " pod="openstack/barbican-db-create-szgsx" Sep 29 09:57:57 crc kubenswrapper[4991]: I0929 09:57:57.958259 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcvg4\" (UniqueName: \"kubernetes.io/projected/8f450a4b-547e-4bb1-9548-9223070d3006-kube-api-access-gcvg4\") pod \"cinder-db-create-gwqgd\" (UID: \"8f450a4b-547e-4bb1-9548-9223070d3006\") " pod="openstack/cinder-db-create-gwqgd" Sep 29 09:57:57 crc kubenswrapper[4991]: I0929 09:57:57.982451 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcvg4\" (UniqueName: \"kubernetes.io/projected/8f450a4b-547e-4bb1-9548-9223070d3006-kube-api-access-gcvg4\") pod \"cinder-db-create-gwqgd\" (UID: \"8f450a4b-547e-4bb1-9548-9223070d3006\") " pod="openstack/cinder-db-create-gwqgd" Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.029476 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-fnx9p"] Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.035761 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-fnx9p" Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.047128 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-fnx9p"] Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.065298 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gwqgd" Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.066223 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sczwq\" (UniqueName: \"kubernetes.io/projected/93ae0933-0cd5-4927-9bdd-d24f1c9055d5-kube-api-access-sczwq\") pod \"barbican-db-create-szgsx\" (UID: \"93ae0933-0cd5-4927-9bdd-d24f1c9055d5\") " pod="openstack/barbican-db-create-szgsx" Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.089298 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sczwq\" (UniqueName: \"kubernetes.io/projected/93ae0933-0cd5-4927-9bdd-d24f1c9055d5-kube-api-access-sczwq\") pod \"barbican-db-create-szgsx\" (UID: \"93ae0933-0cd5-4927-9bdd-d24f1c9055d5\") " pod="openstack/barbican-db-create-szgsx" Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.170052 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj2bs\" (UniqueName: \"kubernetes.io/projected/c6ef1f45-79ae-4826-910e-11aa5d94faaa-kube-api-access-pj2bs\") pod \"heat-db-create-fnx9p\" (UID: \"c6ef1f45-79ae-4826-910e-11aa5d94faaa\") " pod="openstack/heat-db-create-fnx9p" Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.199840 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-szgsx" Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.272800 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-28jfp"] Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.274272 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj2bs\" (UniqueName: \"kubernetes.io/projected/c6ef1f45-79ae-4826-910e-11aa5d94faaa-kube-api-access-pj2bs\") pod \"heat-db-create-fnx9p\" (UID: \"c6ef1f45-79ae-4826-910e-11aa5d94faaa\") " pod="openstack/heat-db-create-fnx9p" Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.274572 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-28jfp" Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.287617 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-28jfp"] Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.311194 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj2bs\" (UniqueName: \"kubernetes.io/projected/c6ef1f45-79ae-4826-910e-11aa5d94faaa-kube-api-access-pj2bs\") pod \"heat-db-create-fnx9p\" (UID: \"c6ef1f45-79ae-4826-910e-11aa5d94faaa\") " pod="openstack/heat-db-create-fnx9p" Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.377692 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bwbn\" (UniqueName: \"kubernetes.io/projected/532938d6-313b-40f5-805c-a28638a8dd57-kube-api-access-2bwbn\") pod \"mysqld-exporter-openstack-cell1-db-create-28jfp\" (UID: \"532938d6-313b-40f5-805c-a28638a8dd57\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-28jfp" Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.385673 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-fnx9p" Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.406559 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-fk2js"] Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.408178 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fk2js" Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.429851 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fk2js"] Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.479625 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7vxb\" (UniqueName: \"kubernetes.io/projected/ebea7d48-54c1-4c19-aaa0-139a6d5d6b47-kube-api-access-n7vxb\") pod \"neutron-db-create-fk2js\" (UID: \"ebea7d48-54c1-4c19-aaa0-139a6d5d6b47\") " pod="openstack/neutron-db-create-fk2js" Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.480171 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bwbn\" (UniqueName: \"kubernetes.io/projected/532938d6-313b-40f5-805c-a28638a8dd57-kube-api-access-2bwbn\") pod \"mysqld-exporter-openstack-cell1-db-create-28jfp\" (UID: \"532938d6-313b-40f5-805c-a28638a8dd57\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-28jfp" Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.502148 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bwbn\" (UniqueName: \"kubernetes.io/projected/532938d6-313b-40f5-805c-a28638a8dd57-kube-api-access-2bwbn\") pod \"mysqld-exporter-openstack-cell1-db-create-28jfp\" (UID: \"532938d6-313b-40f5-805c-a28638a8dd57\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-28jfp" Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.582672 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7vxb\" (UniqueName: \"kubernetes.io/projected/ebea7d48-54c1-4c19-aaa0-139a6d5d6b47-kube-api-access-n7vxb\") pod \"neutron-db-create-fk2js\" (UID: \"ebea7d48-54c1-4c19-aaa0-139a6d5d6b47\") " pod="openstack/neutron-db-create-fk2js" Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.600201 4991 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7vxb\" (UniqueName: \"kubernetes.io/projected/ebea7d48-54c1-4c19-aaa0-139a6d5d6b47-kube-api-access-n7vxb\") pod \"neutron-db-create-fk2js\" (UID: \"ebea7d48-54c1-4c19-aaa0-139a6d5d6b47\") " pod="openstack/neutron-db-create-fk2js" Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.627118 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-28jfp" Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.673428 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-gwqgd"] Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.740522 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fk2js" Sep 29 09:57:58 crc kubenswrapper[4991]: I0929 09:57:58.865259 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-szgsx"] Sep 29 09:57:59 crc kubenswrapper[4991]: I0929 09:57:59.184295 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-fnx9p"] Sep 29 09:57:59 crc kubenswrapper[4991]: I0929 09:57:59.266581 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gwqgd" event={"ID":"8f450a4b-547e-4bb1-9548-9223070d3006","Type":"ContainerStarted","Data":"3115eeb73cd53888fca857d5084d4680aed08cacf5c54762d05539d59cc7537a"} Sep 29 09:57:59 crc kubenswrapper[4991]: I0929 09:57:59.266828 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gwqgd" event={"ID":"8f450a4b-547e-4bb1-9548-9223070d3006","Type":"ContainerStarted","Data":"830f1a85f1402aa3fb115770b8eae0c090673b942516cda54d7a822272ea60d8"} Sep 29 09:57:59 crc kubenswrapper[4991]: I0929 09:57:59.273606 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-szgsx" event={"ID":"93ae0933-0cd5-4927-9bdd-d24f1c9055d5","Type":"ContainerStarted","Data":"498322194bf59848464bc2266276127eb6affbd5d5a17abbb4a40d8438694d3d"} Sep 29 09:57:59 crc kubenswrapper[4991]: I0929 09:57:59.273640 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-szgsx" event={"ID":"93ae0933-0cd5-4927-9bdd-d24f1c9055d5","Type":"ContainerStarted","Data":"07ea7d6755ec35f367be593bfb17a99c919b5f6a53943b040df940dd74ba59e5"} Sep 29 09:57:59 crc kubenswrapper[4991]: I0929 09:57:59.277451 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-fnx9p" event={"ID":"c6ef1f45-79ae-4826-910e-11aa5d94faaa","Type":"ContainerStarted","Data":"aac982d470c06223d9bd1c91bd54e499f9702e345027ac179e04eec36d99afd3"} Sep 29 09:57:59 crc kubenswrapper[4991]: I0929 09:57:59.301976 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-gwqgd" podStartSLOduration=2.301939058 podStartE2EDuration="2.301939058s" podCreationTimestamp="2025-09-29 09:57:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:57:59.284824048 +0000 UTC m=+1215.140752076" watchObservedRunningTime="2025-09-29 09:57:59.301939058 +0000 UTC m=+1215.157867086" Sep 29 09:57:59 crc kubenswrapper[4991]: I0929 09:57:59.352090 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-szgsx" podStartSLOduration=2.352071305 podStartE2EDuration="2.352071305s" 
podCreationTimestamp="2025-09-29 09:57:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:57:59.3362608 +0000 UTC m=+1215.192188828" watchObservedRunningTime="2025-09-29 09:57:59.352071305 +0000 UTC m=+1215.207999323" Sep 29 09:57:59 crc kubenswrapper[4991]: I0929 09:57:59.371827 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-28jfp"] Sep 29 09:57:59 crc kubenswrapper[4991]: I0929 09:57:59.555998 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fk2js"] Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.069641 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-74ef-account-create-kwf9t"] Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.071193 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74ef-account-create-kwf9t" Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.072905 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.081694 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74ef-account-create-kwf9t"] Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.130297 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pswvz\" (UniqueName: \"kubernetes.io/projected/c8370d2d-c4e2-4a53-b2b7-a0fea31e4537-kube-api-access-pswvz\") pod \"keystone-74ef-account-create-kwf9t\" (UID: \"c8370d2d-c4e2-4a53-b2b7-a0fea31e4537\") " pod="openstack/keystone-74ef-account-create-kwf9t" Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.232874 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pswvz\" (UniqueName: \"kubernetes.io/projected/c8370d2d-c4e2-4a53-b2b7-a0fea31e4537-kube-api-access-pswvz\") pod \"keystone-74ef-account-create-kwf9t\" (UID: \"c8370d2d-c4e2-4a53-b2b7-a0fea31e4537\") " pod="openstack/keystone-74ef-account-create-kwf9t" Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.263042 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pswvz\" (UniqueName: \"kubernetes.io/projected/c8370d2d-c4e2-4a53-b2b7-a0fea31e4537-kube-api-access-pswvz\") pod \"keystone-74ef-account-create-kwf9t\" (UID: \"c8370d2d-c4e2-4a53-b2b7-a0fea31e4537\") " pod="openstack/keystone-74ef-account-create-kwf9t" Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.292136 4991 generic.go:334] "Generic (PLEG): container finished" podID="ebea7d48-54c1-4c19-aaa0-139a6d5d6b47" containerID="3d9965d1e377b0e776bf2947cbfb442890f73429b5544f193c9cb4861d75fba8" exitCode=0 Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.292192 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fk2js" event={"ID":"ebea7d48-54c1-4c19-aaa0-139a6d5d6b47","Type":"ContainerDied","Data":"3d9965d1e377b0e776bf2947cbfb442890f73429b5544f193c9cb4861d75fba8"} Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.292219 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fk2js" event={"ID":"ebea7d48-54c1-4c19-aaa0-139a6d5d6b47","Type":"ContainerStarted","Data":"68034482356e23c26d3e319721f12c6e863713746625f3da7483cce0f7c19c97"} Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.294124 4991 
generic.go:334] "Generic (PLEG): container finished" podID="c6ef1f45-79ae-4826-910e-11aa5d94faaa" containerID="339337ea580e805337b0cd00330b8cd881d3bde7b012dcce4bd03795053de12a" exitCode=0 Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.294201 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-fnx9p" event={"ID":"c6ef1f45-79ae-4826-910e-11aa5d94faaa","Type":"ContainerDied","Data":"339337ea580e805337b0cd00330b8cd881d3bde7b012dcce4bd03795053de12a"} Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.301683 4991 generic.go:334] "Generic (PLEG): container finished" podID="8f450a4b-547e-4bb1-9548-9223070d3006" containerID="3115eeb73cd53888fca857d5084d4680aed08cacf5c54762d05539d59cc7537a" exitCode=0 Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.301815 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gwqgd" event={"ID":"8f450a4b-547e-4bb1-9548-9223070d3006","Type":"ContainerDied","Data":"3115eeb73cd53888fca857d5084d4680aed08cacf5c54762d05539d59cc7537a"} Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.313763 4991 generic.go:334] "Generic (PLEG): container finished" podID="93ae0933-0cd5-4927-9bdd-d24f1c9055d5" containerID="498322194bf59848464bc2266276127eb6affbd5d5a17abbb4a40d8438694d3d" exitCode=0 Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.313867 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-szgsx" event={"ID":"93ae0933-0cd5-4927-9bdd-d24f1c9055d5","Type":"ContainerDied","Data":"498322194bf59848464bc2266276127eb6affbd5d5a17abbb4a40d8438694d3d"} Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.316898 4991 generic.go:334] "Generic (PLEG): container finished" podID="532938d6-313b-40f5-805c-a28638a8dd57" containerID="98eb2231636c9b99a852c481f6fc94be3ecff34bdc907d11435cc6ff51b0c778" exitCode=0 Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.316943 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-28jfp" event={"ID":"532938d6-313b-40f5-805c-a28638a8dd57","Type":"ContainerDied","Data":"98eb2231636c9b99a852c481f6fc94be3ecff34bdc907d11435cc6ff51b0c778"} Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.316988 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-28jfp" event={"ID":"532938d6-313b-40f5-805c-a28638a8dd57","Type":"ContainerStarted","Data":"c59a5f5e08cac4e465345cf0401ffee3068ce60e5e185afcb00ed2a989a80906"} Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.350565 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-b2dd-account-create-pbwxr"] Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.364055 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b2dd-account-create-pbwxr" Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.370987 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.390020 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b2dd-account-create-pbwxr"] Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.430725 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-74ef-account-create-kwf9t" Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.436625 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzt84\" (UniqueName: \"kubernetes.io/projected/9e8fef1c-2287-4588-9ffe-09515c193ffc-kube-api-access-nzt84\") pod \"placement-b2dd-account-create-pbwxr\" (UID: \"9e8fef1c-2287-4588-9ffe-09515c193ffc\") " pod="openstack/placement-b2dd-account-create-pbwxr" Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.560719 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzt84\" (UniqueName: \"kubernetes.io/projected/9e8fef1c-2287-4588-9ffe-09515c193ffc-kube-api-access-nzt84\") pod \"placement-b2dd-account-create-pbwxr\" (UID: \"9e8fef1c-2287-4588-9ffe-09515c193ffc\") " pod="openstack/placement-b2dd-account-create-pbwxr" Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.637822 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzt84\" (UniqueName: \"kubernetes.io/projected/9e8fef1c-2287-4588-9ffe-09515c193ffc-kube-api-access-nzt84\") pod \"placement-b2dd-account-create-pbwxr\" (UID: \"9e8fef1c-2287-4588-9ffe-09515c193ffc\") " pod="openstack/placement-b2dd-account-create-pbwxr" Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.697992 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-23e2-account-create-2lv8j"] Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.699248 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-23e2-account-create-2lv8j" Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.719707 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b2dd-account-create-pbwxr" Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.735880 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-23e2-account-create-2lv8j"] Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.762377 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.772516 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slndt\" (UniqueName: \"kubernetes.io/projected/743f2d68-816c-4cc9-8d79-f2296fa2b7f1-kube-api-access-slndt\") pod \"glance-23e2-account-create-2lv8j\" (UID: \"743f2d68-816c-4cc9-8d79-f2296fa2b7f1\") " pod="openstack/glance-23e2-account-create-2lv8j" Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.875062 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slndt\" (UniqueName: \"kubernetes.io/projected/743f2d68-816c-4cc9-8d79-f2296fa2b7f1-kube-api-access-slndt\") pod \"glance-23e2-account-create-2lv8j\" (UID: \"743f2d68-816c-4cc9-8d79-f2296fa2b7f1\") " pod="openstack/glance-23e2-account-create-2lv8j" Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.896995 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zmljr" podUID="d70600eb-2ef3-4db2-a1e6-050e76f25e79" containerName="ovn-controller" probeResult="failure" output=< Sep 29 09:58:00 crc kubenswrapper[4991]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Sep 29 09:58:00 crc kubenswrapper[4991]: > Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.910618 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slndt\" (UniqueName: \"kubernetes.io/projected/743f2d68-816c-4cc9-8d79-f2296fa2b7f1-kube-api-access-slndt\") pod \"glance-23e2-account-create-2lv8j\" (UID: \"743f2d68-816c-4cc9-8d79-f2296fa2b7f1\") " pod="openstack/glance-23e2-account-create-2lv8j" Sep 29 09:58:00 crc kubenswrapper[4991]: I0929 09:58:00.942876 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rr8td" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.058383 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-23e2-account-create-2lv8j" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.175426 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zmljr-config-gnbrw"] Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.176766 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.179395 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.189868 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zmljr-config-gnbrw"] Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.235140 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74ef-account-create-kwf9t"] Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.281752 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e056662-2837-4b3e-9f12-f5dcf366bd62-var-log-ovn\") pod \"ovn-controller-zmljr-config-gnbrw\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.281829 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e056662-2837-4b3e-9f12-f5dcf366bd62-scripts\") pod \"ovn-controller-zmljr-config-gnbrw\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.281918 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e056662-2837-4b3e-9f12-f5dcf366bd62-var-run\") pod \"ovn-controller-zmljr-config-gnbrw\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.281984 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e056662-2837-4b3e-9f12-f5dcf366bd62-var-run-ovn\") pod \"ovn-controller-zmljr-config-gnbrw\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.282010 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e056662-2837-4b3e-9f12-f5dcf366bd62-additional-scripts\") pod \"ovn-controller-zmljr-config-gnbrw\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.282027 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rhz5\" (UniqueName: \"kubernetes.io/projected/3e056662-2837-4b3e-9f12-f5dcf366bd62-kube-api-access-4rhz5\") pod \"ovn-controller-zmljr-config-gnbrw\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.341186 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74ef-account-create-kwf9t" event={"ID":"c8370d2d-c4e2-4a53-b2b7-a0fea31e4537","Type":"ContainerStarted","Data":"378d9ac231e218f9743a352f10fdd4b3b52839e992cd4b85c153717c7e219a2f"} Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.387038 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e056662-2837-4b3e-9f12-f5dcf366bd62-var-run-ovn\") pod \"ovn-controller-zmljr-config-gnbrw\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.387080 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e056662-2837-4b3e-9f12-f5dcf366bd62-additional-scripts\") pod \"ovn-controller-zmljr-config-gnbrw\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.387103 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rhz5\" (UniqueName: \"kubernetes.io/projected/3e056662-2837-4b3e-9f12-f5dcf366bd62-kube-api-access-4rhz5\") pod \"ovn-controller-zmljr-config-gnbrw\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.387138 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e056662-2837-4b3e-9f12-f5dcf366bd62-var-log-ovn\") pod \"ovn-controller-zmljr-config-gnbrw\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.387187 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e056662-2837-4b3e-9f12-f5dcf366bd62-scripts\") pod \"ovn-controller-zmljr-config-gnbrw\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.387276 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e056662-2837-4b3e-9f12-f5dcf366bd62-var-run\") pod \"ovn-controller-zmljr-config-gnbrw\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.387488 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e056662-2837-4b3e-9f12-f5dcf366bd62-var-run-ovn\") pod \"ovn-controller-zmljr-config-gnbrw\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.387537 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e056662-2837-4b3e-9f12-f5dcf366bd62-var-run\") pod \"ovn-controller-zmljr-config-gnbrw\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.387554 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e056662-2837-4b3e-9f12-f5dcf366bd62-var-log-ovn\") pod \"ovn-controller-zmljr-config-gnbrw\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.387886 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3e056662-2837-4b3e-9f12-f5dcf366bd62-additional-scripts\") pod \"ovn-controller-zmljr-config-gnbrw\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.389647 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e056662-2837-4b3e-9f12-f5dcf366bd62-scripts\") pod \"ovn-controller-zmljr-config-gnbrw\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.416443 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rhz5\" (UniqueName: \"kubernetes.io/projected/3e056662-2837-4b3e-9f12-f5dcf366bd62-kube-api-access-4rhz5\") pod \"ovn-controller-zmljr-config-gnbrw\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.499374 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b2dd-account-create-pbwxr"] Sep 29 09:58:01 crc kubenswrapper[4991]: W0929 09:58:01.504510 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e8fef1c_2287_4588_9ffe_09515c193ffc.slice/crio-59fe752b8c7eb90b88b7be280edaedbcb669de9cebf5972ea0d2c623b0051c9b WatchSource:0}: Error finding container 59fe752b8c7eb90b88b7be280edaedbcb669de9cebf5972ea0d2c623b0051c9b: Status 404 returned error can't find the container with id 59fe752b8c7eb90b88b7be280edaedbcb669de9cebf5972ea0d2c623b0051c9b Sep 29 09:58:01 crc kubenswrapper[4991]: I0929 09:58:01.526931 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.175153 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-23e2-account-create-2lv8j"] Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.367811 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gwqgd" event={"ID":"8f450a4b-547e-4bb1-9548-9223070d3006","Type":"ContainerDied","Data":"830f1a85f1402aa3fb115770b8eae0c090673b942516cda54d7a822272ea60d8"} Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.367861 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="830f1a85f1402aa3fb115770b8eae0c090673b942516cda54d7a822272ea60d8" Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.369467 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-szgsx" event={"ID":"93ae0933-0cd5-4927-9bdd-d24f1c9055d5","Type":"ContainerDied","Data":"07ea7d6755ec35f367be593bfb17a99c919b5f6a53943b040df940dd74ba59e5"} Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.369519 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07ea7d6755ec35f367be593bfb17a99c919b5f6a53943b040df940dd74ba59e5" Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.370883 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-28jfp" event={"ID":"532938d6-313b-40f5-805c-a28638a8dd57","Type":"ContainerDied","Data":"c59a5f5e08cac4e465345cf0401ffee3068ce60e5e185afcb00ed2a989a80906"} Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.370933 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c59a5f5e08cac4e465345cf0401ffee3068ce60e5e185afcb00ed2a989a80906" Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.372393 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-23e2-account-create-2lv8j" event={"ID":"743f2d68-816c-4cc9-8d79-f2296fa2b7f1","Type":"ContainerStarted","Data":"8024b8ef2740e4af48da062637c20c2dd469f48d1a04967df817d40292c11fe1"} Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.373754 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fk2js" event={"ID":"ebea7d48-54c1-4c19-aaa0-139a6d5d6b47","Type":"ContainerDied","Data":"68034482356e23c26d3e319721f12c6e863713746625f3da7483cce0f7c19c97"} Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.373772 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68034482356e23c26d3e319721f12c6e863713746625f3da7483cce0f7c19c97" Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.374914 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-fnx9p" event={"ID":"c6ef1f45-79ae-4826-910e-11aa5d94faaa","Type":"ContainerDied","Data":"aac982d470c06223d9bd1c91bd54e499f9702e345027ac179e04eec36d99afd3"} Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.374932 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aac982d470c06223d9bd1c91bd54e499f9702e345027ac179e04eec36d99afd3" Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.376173 4991 generic.go:334] "Generic (PLEG): container finished" podID="9e8fef1c-2287-4588-9ffe-09515c193ffc" containerID="62478d81ef8a313b51d58a7f1d0a152619e60def8a7ef70d4bce76356659983b" exitCode=0 Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.376201 4991 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/placement-b2dd-account-create-pbwxr" event={"ID":"9e8fef1c-2287-4588-9ffe-09515c193ffc","Type":"ContainerDied","Data":"62478d81ef8a313b51d58a7f1d0a152619e60def8a7ef70d4bce76356659983b"} Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.376234 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b2dd-account-create-pbwxr" event={"ID":"9e8fef1c-2287-4588-9ffe-09515c193ffc","Type":"ContainerStarted","Data":"59fe752b8c7eb90b88b7be280edaedbcb669de9cebf5972ea0d2c623b0051c9b"} Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.377843 4991 generic.go:334] "Generic (PLEG): container finished" podID="c8370d2d-c4e2-4a53-b2b7-a0fea31e4537" containerID="aa794312dc481393ee7fb8e3fd8f2bddf030c8fb6e8d622c8a62596df0ac13e1" exitCode=0 Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.377878 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74ef-account-create-kwf9t" event={"ID":"c8370d2d-c4e2-4a53-b2b7-a0fea31e4537","Type":"ContainerDied","Data":"aa794312dc481393ee7fb8e3fd8f2bddf030c8fb6e8d622c8a62596df0ac13e1"} Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.429315 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zmljr-config-gnbrw"] Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.461660 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-fnx9p" Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.476284 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gwqgd" Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.491596 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-szgsx" Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.567414 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-28jfp" Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.567474 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj2bs\" (UniqueName: \"kubernetes.io/projected/c6ef1f45-79ae-4826-910e-11aa5d94faaa-kube-api-access-pj2bs\") pod \"c6ef1f45-79ae-4826-910e-11aa5d94faaa\" (UID: \"c6ef1f45-79ae-4826-910e-11aa5d94faaa\") " Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.567519 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sczwq\" (UniqueName: \"kubernetes.io/projected/93ae0933-0cd5-4927-9bdd-d24f1c9055d5-kube-api-access-sczwq\") pod \"93ae0933-0cd5-4927-9bdd-d24f1c9055d5\" (UID: \"93ae0933-0cd5-4927-9bdd-d24f1c9055d5\") " Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.567595 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcvg4\" (UniqueName: \"kubernetes.io/projected/8f450a4b-547e-4bb1-9548-9223070d3006-kube-api-access-gcvg4\") pod \"8f450a4b-547e-4bb1-9548-9223070d3006\" (UID: \"8f450a4b-547e-4bb1-9548-9223070d3006\") " Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.578595 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-fk2js" Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.639275 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f450a4b-547e-4bb1-9548-9223070d3006-kube-api-access-gcvg4" (OuterVolumeSpecName: "kube-api-access-gcvg4") pod "8f450a4b-547e-4bb1-9548-9223070d3006" (UID: "8f450a4b-547e-4bb1-9548-9223070d3006"). InnerVolumeSpecName "kube-api-access-gcvg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.679177 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6ef1f45-79ae-4826-910e-11aa5d94faaa-kube-api-access-pj2bs" (OuterVolumeSpecName: "kube-api-access-pj2bs") pod "c6ef1f45-79ae-4826-910e-11aa5d94faaa" (UID: "c6ef1f45-79ae-4826-910e-11aa5d94faaa"). InnerVolumeSpecName "kube-api-access-pj2bs". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.688303 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93ae0933-0cd5-4927-9bdd-d24f1c9055d5-kube-api-access-sczwq" (OuterVolumeSpecName: "kube-api-access-sczwq") pod "93ae0933-0cd5-4927-9bdd-d24f1c9055d5" (UID: "93ae0933-0cd5-4927-9bdd-d24f1c9055d5"). InnerVolumeSpecName "kube-api-access-sczwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.690996 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcvg4\" (UniqueName: \"kubernetes.io/projected/8f450a4b-547e-4bb1-9548-9223070d3006-kube-api-access-gcvg4\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.795940 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bwbn\" (UniqueName: \"kubernetes.io/projected/532938d6-313b-40f5-805c-a28638a8dd57-kube-api-access-2bwbn\") pod \"532938d6-313b-40f5-805c-a28638a8dd57\" (UID: \"532938d6-313b-40f5-805c-a28638a8dd57\") " Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.796048 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7vxb\" (UniqueName: \"kubernetes.io/projected/ebea7d48-54c1-4c19-aaa0-139a6d5d6b47-kube-api-access-n7vxb\") pod \"ebea7d48-54c1-4c19-aaa0-139a6d5d6b47\" (UID: \"ebea7d48-54c1-4c19-aaa0-139a6d5d6b47\") " Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.796862 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj2bs\" (UniqueName: \"kubernetes.io/projected/c6ef1f45-79ae-4826-910e-11aa5d94faaa-kube-api-access-pj2bs\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.796882 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sczwq\" (UniqueName: \"kubernetes.io/projected/93ae0933-0cd5-4927-9bdd-d24f1c9055d5-kube-api-access-sczwq\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.813716 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebea7d48-54c1-4c19-aaa0-139a6d5d6b47-kube-api-access-n7vxb" (OuterVolumeSpecName: "kube-api-access-n7vxb") pod "ebea7d48-54c1-4c19-aaa0-139a6d5d6b47" (UID: "ebea7d48-54c1-4c19-aaa0-139a6d5d6b47"). InnerVolumeSpecName "kube-api-access-n7vxb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.815865 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/532938d6-313b-40f5-805c-a28638a8dd57-kube-api-access-2bwbn" (OuterVolumeSpecName: "kube-api-access-2bwbn") pod "532938d6-313b-40f5-805c-a28638a8dd57" (UID: "532938d6-313b-40f5-805c-a28638a8dd57"). InnerVolumeSpecName "kube-api-access-2bwbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.898702 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bwbn\" (UniqueName: \"kubernetes.io/projected/532938d6-313b-40f5-805c-a28638a8dd57-kube-api-access-2bwbn\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:02 crc kubenswrapper[4991]: I0929 09:58:02.898914 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7vxb\" (UniqueName: \"kubernetes.io/projected/ebea7d48-54c1-4c19-aaa0-139a6d5d6b47-kube-api-access-n7vxb\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:03 crc kubenswrapper[4991]: I0929 09:58:03.396190 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zmljr-config-gnbrw" event={"ID":"3e056662-2837-4b3e-9f12-f5dcf366bd62","Type":"ContainerStarted","Data":"a554143176e372518bbb5dce50edd9c99c582433babaa80d985d2c75bc3057b5"} Sep 29 09:58:03 crc kubenswrapper[4991]: I0929 09:58:03.397370 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zmljr-config-gnbrw" event={"ID":"3e056662-2837-4b3e-9f12-f5dcf366bd62","Type":"ContainerStarted","Data":"ecc4fb9091f144c0c3e28b19ef8e3620f72760a29f971cc8bfbd00b5d836bdf4"} Sep 29 09:58:03 crc kubenswrapper[4991]: I0929 09:58:03.400940 4991 generic.go:334] "Generic (PLEG): container finished" podID="743f2d68-816c-4cc9-8d79-f2296fa2b7f1" containerID="1d4196330c8eec3b34706e9d3a7ed1b2085e90065193d9c66a1ee376d1abc98f" exitCode=0 Sep 29 09:58:03 crc kubenswrapper[4991]: I0929 09:58:03.401621 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-28jfp" Sep 29 09:58:03 crc kubenswrapper[4991]: I0929 09:58:03.402178 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-23e2-account-create-2lv8j" event={"ID":"743f2d68-816c-4cc9-8d79-f2296fa2b7f1","Type":"ContainerDied","Data":"1d4196330c8eec3b34706e9d3a7ed1b2085e90065193d9c66a1ee376d1abc98f"} Sep 29 09:58:03 crc kubenswrapper[4991]: I0929 09:58:03.402303 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-szgsx" Sep 29 09:58:03 crc kubenswrapper[4991]: I0929 09:58:03.404155 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-fnx9p" Sep 29 09:58:03 crc kubenswrapper[4991]: I0929 09:58:03.404193 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gwqgd" Sep 29 09:58:03 crc kubenswrapper[4991]: I0929 09:58:03.404224 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-fk2js" Sep 29 09:58:03 crc kubenswrapper[4991]: I0929 09:58:03.421839 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zmljr-config-gnbrw" podStartSLOduration=2.421820044 podStartE2EDuration="2.421820044s" podCreationTimestamp="2025-09-29 09:58:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:58:03.413031093 +0000 UTC m=+1219.268959281" watchObservedRunningTime="2025-09-29 09:58:03.421820044 +0000 UTC m=+1219.277748072" Sep 29 09:58:04 crc kubenswrapper[4991]: I0929 09:58:04.107424 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74ef-account-create-kwf9t" Sep 29 09:58:04 crc kubenswrapper[4991]: I0929 09:58:04.112702 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b2dd-account-create-pbwxr" Sep 29 09:58:04 crc kubenswrapper[4991]: I0929 09:58:04.245731 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pswvz\" (UniqueName: \"kubernetes.io/projected/c8370d2d-c4e2-4a53-b2b7-a0fea31e4537-kube-api-access-pswvz\") pod \"c8370d2d-c4e2-4a53-b2b7-a0fea31e4537\" (UID: \"c8370d2d-c4e2-4a53-b2b7-a0fea31e4537\") " Sep 29 09:58:04 crc kubenswrapper[4991]: I0929 09:58:04.246168 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzt84\" (UniqueName: \"kubernetes.io/projected/9e8fef1c-2287-4588-9ffe-09515c193ffc-kube-api-access-nzt84\") pod \"9e8fef1c-2287-4588-9ffe-09515c193ffc\" (UID: \"9e8fef1c-2287-4588-9ffe-09515c193ffc\") " Sep 29 09:58:04 crc kubenswrapper[4991]: I0929 09:58:04.250328 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Sep 29 09:58:04 crc kubenswrapper[4991]: I0929 09:58:04.278396 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8370d2d-c4e2-4a53-b2b7-a0fea31e4537-kube-api-access-pswvz" (OuterVolumeSpecName: "kube-api-access-pswvz") pod "c8370d2d-c4e2-4a53-b2b7-a0fea31e4537" (UID: "c8370d2d-c4e2-4a53-b2b7-a0fea31e4537"). InnerVolumeSpecName "kube-api-access-pswvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:04 crc kubenswrapper[4991]: I0929 09:58:04.285617 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Sep 29 09:58:04 crc kubenswrapper[4991]: I0929 09:58:04.292907 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e8fef1c-2287-4588-9ffe-09515c193ffc-kube-api-access-nzt84" (OuterVolumeSpecName: "kube-api-access-nzt84") pod "9e8fef1c-2287-4588-9ffe-09515c193ffc" (UID: "9e8fef1c-2287-4588-9ffe-09515c193ffc"). InnerVolumeSpecName "kube-api-access-nzt84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:04 crc kubenswrapper[4991]: I0929 09:58:04.349010 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pswvz\" (UniqueName: \"kubernetes.io/projected/c8370d2d-c4e2-4a53-b2b7-a0fea31e4537-kube-api-access-pswvz\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:04 crc kubenswrapper[4991]: I0929 09:58:04.349242 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzt84\" (UniqueName: \"kubernetes.io/projected/9e8fef1c-2287-4588-9ffe-09515c193ffc-kube-api-access-nzt84\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:04 crc kubenswrapper[4991]: I0929 09:58:04.414047 4991 generic.go:334] "Generic (PLEG): container finished" podID="3e056662-2837-4b3e-9f12-f5dcf366bd62" containerID="a554143176e372518bbb5dce50edd9c99c582433babaa80d985d2c75bc3057b5" exitCode=0 Sep 29 09:58:04 crc kubenswrapper[4991]: I0929 09:58:04.414124 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zmljr-config-gnbrw" event={"ID":"3e056662-2837-4b3e-9f12-f5dcf366bd62","Type":"ContainerDied","Data":"a554143176e372518bbb5dce50edd9c99c582433babaa80d985d2c75bc3057b5"} Sep 29 09:58:04 crc kubenswrapper[4991]: I0929 09:58:04.416325 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74ef-account-create-kwf9t" event={"ID":"c8370d2d-c4e2-4a53-b2b7-a0fea31e4537","Type":"ContainerDied","Data":"378d9ac231e218f9743a352f10fdd4b3b52839e992cd4b85c153717c7e219a2f"} Sep 29 09:58:04 crc kubenswrapper[4991]: I0929 09:58:04.416375 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="378d9ac231e218f9743a352f10fdd4b3b52839e992cd4b85c153717c7e219a2f" Sep 29 09:58:04 crc kubenswrapper[4991]: I0929 09:58:04.416349 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74ef-account-create-kwf9t" Sep 29 09:58:04 crc kubenswrapper[4991]: I0929 09:58:04.418099 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b2dd-account-create-pbwxr" event={"ID":"9e8fef1c-2287-4588-9ffe-09515c193ffc","Type":"ContainerDied","Data":"59fe752b8c7eb90b88b7be280edaedbcb669de9cebf5972ea0d2c623b0051c9b"} Sep 29 09:58:04 crc kubenswrapper[4991]: I0929 09:58:04.418210 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59fe752b8c7eb90b88b7be280edaedbcb669de9cebf5972ea0d2c623b0051c9b" Sep 29 09:58:04 crc kubenswrapper[4991]: I0929 09:58:04.418111 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b2dd-account-create-pbwxr" Sep 29 09:58:04 crc kubenswrapper[4991]: I0929 09:58:04.419936 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Sep 29 09:58:05 crc kubenswrapper[4991]: I0929 09:58:05.032371 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-23e2-account-create-2lv8j" Sep 29 09:58:05 crc kubenswrapper[4991]: I0929 09:58:05.179322 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slndt\" (UniqueName: \"kubernetes.io/projected/743f2d68-816c-4cc9-8d79-f2296fa2b7f1-kube-api-access-slndt\") pod \"743f2d68-816c-4cc9-8d79-f2296fa2b7f1\" (UID: \"743f2d68-816c-4cc9-8d79-f2296fa2b7f1\") " Sep 29 09:58:05 crc kubenswrapper[4991]: I0929 09:58:05.185766 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/743f2d68-816c-4cc9-8d79-f2296fa2b7f1-kube-api-access-slndt" (OuterVolumeSpecName: "kube-api-access-slndt") pod "743f2d68-816c-4cc9-8d79-f2296fa2b7f1" (UID: "743f2d68-816c-4cc9-8d79-f2296fa2b7f1"). InnerVolumeSpecName "kube-api-access-slndt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:05 crc kubenswrapper[4991]: I0929 09:58:05.281649 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slndt\" (UniqueName: \"kubernetes.io/projected/743f2d68-816c-4cc9-8d79-f2296fa2b7f1-kube-api-access-slndt\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:05 crc kubenswrapper[4991]: I0929 09:58:05.428288 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-23e2-account-create-2lv8j" Sep 29 09:58:05 crc kubenswrapper[4991]: I0929 09:58:05.428347 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-23e2-account-create-2lv8j" event={"ID":"743f2d68-816c-4cc9-8d79-f2296fa2b7f1","Type":"ContainerDied","Data":"8024b8ef2740e4af48da062637c20c2dd469f48d1a04967df817d40292c11fe1"} Sep 29 09:58:05 crc kubenswrapper[4991]: I0929 09:58:05.428378 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8024b8ef2740e4af48da062637c20c2dd469f48d1a04967df817d40292c11fe1" Sep 29 09:58:05 crc kubenswrapper[4991]: I0929 09:58:05.803042 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-etc-swift\") pod \"swift-storage-0\" (UID: \"81d702cb-530c-441d-b686-f205337a2aea\") " pod="openstack/swift-storage-0" Sep 29 09:58:05 crc kubenswrapper[4991]: I0929 09:58:05.829426 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/81d702cb-530c-441d-b686-f205337a2aea-etc-swift\") pod \"swift-storage-0\" (UID: \"81d702cb-530c-441d-b686-f205337a2aea\") " pod="openstack/swift-storage-0" Sep 29 09:58:05 crc kubenswrapper[4991]: I0929 09:58:05.891136 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-zmljr" Sep 29 09:58:05 crc kubenswrapper[4991]: I0929 09:58:05.969593 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.074898 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.213553 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rhz5\" (UniqueName: \"kubernetes.io/projected/3e056662-2837-4b3e-9f12-f5dcf366bd62-kube-api-access-4rhz5\") pod \"3e056662-2837-4b3e-9f12-f5dcf366bd62\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.213815 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e056662-2837-4b3e-9f12-f5dcf366bd62-var-run-ovn\") pod \"3e056662-2837-4b3e-9f12-f5dcf366bd62\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.213875 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e056662-2837-4b3e-9f12-f5dcf366bd62-var-log-ovn\") pod \"3e056662-2837-4b3e-9f12-f5dcf366bd62\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.213982 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e056662-2837-4b3e-9f12-f5dcf366bd62-additional-scripts\") pod \"3e056662-2837-4b3e-9f12-f5dcf366bd62\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.214011 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e056662-2837-4b3e-9f12-f5dcf366bd62-scripts\") pod \"3e056662-2837-4b3e-9f12-f5dcf366bd62\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.214125 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e056662-2837-4b3e-9f12-f5dcf366bd62-var-run\") pod \"3e056662-2837-4b3e-9f12-f5dcf366bd62\" (UID: \"3e056662-2837-4b3e-9f12-f5dcf366bd62\") " Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.214510 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e056662-2837-4b3e-9f12-f5dcf366bd62-var-run" (OuterVolumeSpecName: "var-run") pod "3e056662-2837-4b3e-9f12-f5dcf366bd62" (UID: "3e056662-2837-4b3e-9f12-f5dcf366bd62"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.217298 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e056662-2837-4b3e-9f12-f5dcf366bd62-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "3e056662-2837-4b3e-9f12-f5dcf366bd62" (UID: "3e056662-2837-4b3e-9f12-f5dcf366bd62"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.218012 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e056662-2837-4b3e-9f12-f5dcf366bd62-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "3e056662-2837-4b3e-9f12-f5dcf366bd62" (UID: "3e056662-2837-4b3e-9f12-f5dcf366bd62"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.218018 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e056662-2837-4b3e-9f12-f5dcf366bd62-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "3e056662-2837-4b3e-9f12-f5dcf366bd62" (UID: "3e056662-2837-4b3e-9f12-f5dcf366bd62"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.218156 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e056662-2837-4b3e-9f12-f5dcf366bd62-scripts" (OuterVolumeSpecName: "scripts") pod "3e056662-2837-4b3e-9f12-f5dcf366bd62" (UID: "3e056662-2837-4b3e-9f12-f5dcf366bd62"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.222691 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e056662-2837-4b3e-9f12-f5dcf366bd62-kube-api-access-4rhz5" (OuterVolumeSpecName: "kube-api-access-4rhz5") pod "3e056662-2837-4b3e-9f12-f5dcf366bd62" (UID: "3e056662-2837-4b3e-9f12-f5dcf366bd62"). InnerVolumeSpecName "kube-api-access-4rhz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.315838 4991 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e056662-2837-4b3e-9f12-f5dcf366bd62-var-run\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.315867 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rhz5\" (UniqueName: \"kubernetes.io/projected/3e056662-2837-4b3e-9f12-f5dcf366bd62-kube-api-access-4rhz5\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.315878 4991 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e056662-2837-4b3e-9f12-f5dcf366bd62-var-run-ovn\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.315888 4991 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e056662-2837-4b3e-9f12-f5dcf366bd62-var-log-ovn\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.315896 4991 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e056662-2837-4b3e-9f12-f5dcf366bd62-additional-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.315904 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e056662-2837-4b3e-9f12-f5dcf366bd62-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.439298 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zmljr-config-gnbrw" event={"ID":"3e056662-2837-4b3e-9f12-f5dcf366bd62","Type":"ContainerDied","Data":"ecc4fb9091f144c0c3e28b19ef8e3620f72760a29f971cc8bfbd00b5d836bdf4"} Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.439337 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecc4fb9091f144c0c3e28b19ef8e3620f72760a29f971cc8bfbd00b5d836bdf4" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 
09:58:06.439388 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zmljr-config-gnbrw" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.570087 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zmljr-config-gnbrw"] Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.580960 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zmljr-config-gnbrw"] Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.632549 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zmljr-config-sqlm7"] Sep 29 09:58:06 crc kubenswrapper[4991]: E0929 09:58:06.633042 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8370d2d-c4e2-4a53-b2b7-a0fea31e4537" containerName="mariadb-account-create" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.633058 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8370d2d-c4e2-4a53-b2b7-a0fea31e4537" containerName="mariadb-account-create" Sep 29 09:58:06 crc kubenswrapper[4991]: E0929 09:58:06.633073 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f450a4b-547e-4bb1-9548-9223070d3006" containerName="mariadb-database-create" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.633080 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f450a4b-547e-4bb1-9548-9223070d3006" containerName="mariadb-database-create" Sep 29 09:58:06 crc kubenswrapper[4991]: E0929 09:58:06.633094 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ef1f45-79ae-4826-910e-11aa5d94faaa" containerName="mariadb-database-create" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.633101 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ef1f45-79ae-4826-910e-11aa5d94faaa" containerName="mariadb-database-create" Sep 29 09:58:06 crc kubenswrapper[4991]: E0929 09:58:06.633111 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8fef1c-2287-4588-9ffe-09515c193ffc" containerName="mariadb-account-create" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.633117 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8fef1c-2287-4588-9ffe-09515c193ffc" containerName="mariadb-account-create" Sep 29 09:58:06 crc kubenswrapper[4991]: E0929 09:58:06.633128 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743f2d68-816c-4cc9-8d79-f2296fa2b7f1" containerName="mariadb-account-create" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.633133 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="743f2d68-816c-4cc9-8d79-f2296fa2b7f1" containerName="mariadb-account-create" Sep 29 09:58:06 crc kubenswrapper[4991]: E0929 09:58:06.633142 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebea7d48-54c1-4c19-aaa0-139a6d5d6b47" containerName="mariadb-database-create" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.633149 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebea7d48-54c1-4c19-aaa0-139a6d5d6b47" containerName="mariadb-database-create" Sep 29 09:58:06 crc kubenswrapper[4991]: E0929 09:58:06.633162 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ae0933-0cd5-4927-9bdd-d24f1c9055d5" containerName="mariadb-database-create" Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.633167 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ae0933-0cd5-4927-9bdd-d24f1c9055d5" containerName="mariadb-database-create" Sep 29 09:58:06 
crc kubenswrapper[4991]: E0929 09:58:06.633178 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="532938d6-313b-40f5-805c-a28638a8dd57" containerName="mariadb-database-create"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.633185 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="532938d6-313b-40f5-805c-a28638a8dd57" containerName="mariadb-database-create"
Sep 29 09:58:06 crc kubenswrapper[4991]: E0929 09:58:06.633199 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e056662-2837-4b3e-9f12-f5dcf366bd62" containerName="ovn-config"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.633204 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e056662-2837-4b3e-9f12-f5dcf366bd62" containerName="ovn-config"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.633385 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebea7d48-54c1-4c19-aaa0-139a6d5d6b47" containerName="mariadb-database-create"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.633402 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f450a4b-547e-4bb1-9548-9223070d3006" containerName="mariadb-database-create"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.633409 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="93ae0933-0cd5-4927-9bdd-d24f1c9055d5" containerName="mariadb-database-create"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.633423 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="532938d6-313b-40f5-805c-a28638a8dd57" containerName="mariadb-database-create"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.633437 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6ef1f45-79ae-4826-910e-11aa5d94faaa" containerName="mariadb-database-create"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.633445 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e056662-2837-4b3e-9f12-f5dcf366bd62" containerName="ovn-config"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.633453 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e8fef1c-2287-4588-9ffe-09515c193ffc" containerName="mariadb-account-create"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.633465 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="743f2d68-816c-4cc9-8d79-f2296fa2b7f1" containerName="mariadb-account-create"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.633480 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8370d2d-c4e2-4a53-b2b7-a0fea31e4537" containerName="mariadb-account-create"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.634243 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.636346 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.645200 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zmljr-config-sqlm7"]
Sep 29 09:58:06 crc kubenswrapper[4991]: W0929 09:58:06.655123 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81d702cb_530c_441d_b686_f205337a2aea.slice/crio-e8844433393f0a2c69e8cc67dc10934285766473bd5c176152947a3d2f57cdcd WatchSource:0}: Error finding container e8844433393f0a2c69e8cc67dc10934285766473bd5c176152947a3d2f57cdcd: Status 404 returned error can't find the container with id e8844433393f0a2c69e8cc67dc10934285766473bd5c176152947a3d2f57cdcd
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.658072 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.658770 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.721425 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/230df798-e7df-41e8-9b6b-3b2f8f929005-var-log-ovn\") pod \"ovn-controller-zmljr-config-sqlm7\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") " pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.721768 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/230df798-e7df-41e8-9b6b-3b2f8f929005-var-run\") pod \"ovn-controller-zmljr-config-sqlm7\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") " pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.721823 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/230df798-e7df-41e8-9b6b-3b2f8f929005-var-run-ovn\") pod \"ovn-controller-zmljr-config-sqlm7\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") " pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.722256 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/230df798-e7df-41e8-9b6b-3b2f8f929005-additional-scripts\") pod \"ovn-controller-zmljr-config-sqlm7\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") " pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.722450 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/230df798-e7df-41e8-9b6b-3b2f8f929005-scripts\") pod \"ovn-controller-zmljr-config-sqlm7\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") " pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.722536 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4gd9\" (UniqueName: \"kubernetes.io/projected/230df798-e7df-41e8-9b6b-3b2f8f929005-kube-api-access-w4gd9\") pod \"ovn-controller-zmljr-config-sqlm7\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") " pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.824774 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/230df798-e7df-41e8-9b6b-3b2f8f929005-additional-scripts\") pod \"ovn-controller-zmljr-config-sqlm7\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") " pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.824853 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/230df798-e7df-41e8-9b6b-3b2f8f929005-scripts\") pod \"ovn-controller-zmljr-config-sqlm7\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") " pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.824893 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4gd9\" (UniqueName: \"kubernetes.io/projected/230df798-e7df-41e8-9b6b-3b2f8f929005-kube-api-access-w4gd9\") pod \"ovn-controller-zmljr-config-sqlm7\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") " pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.824934 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/230df798-e7df-41e8-9b6b-3b2f8f929005-var-log-ovn\") pod \"ovn-controller-zmljr-config-sqlm7\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") " pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.824979 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/230df798-e7df-41e8-9b6b-3b2f8f929005-var-run\") pod \"ovn-controller-zmljr-config-sqlm7\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") " pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.825017 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/230df798-e7df-41e8-9b6b-3b2f8f929005-var-run-ovn\") pod \"ovn-controller-zmljr-config-sqlm7\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") " pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.825197 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/230df798-e7df-41e8-9b6b-3b2f8f929005-var-log-ovn\") pod \"ovn-controller-zmljr-config-sqlm7\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") " pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.825223 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/230df798-e7df-41e8-9b6b-3b2f8f929005-var-run\") pod \"ovn-controller-zmljr-config-sqlm7\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") " pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.825250 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/230df798-e7df-41e8-9b6b-3b2f8f929005-var-run-ovn\") pod \"ovn-controller-zmljr-config-sqlm7\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") " pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.825884 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/230df798-e7df-41e8-9b6b-3b2f8f929005-additional-scripts\") pod \"ovn-controller-zmljr-config-sqlm7\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") " pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.827070 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/230df798-e7df-41e8-9b6b-3b2f8f929005-scripts\") pod \"ovn-controller-zmljr-config-sqlm7\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") " pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.853503 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4gd9\" (UniqueName: \"kubernetes.io/projected/230df798-e7df-41e8-9b6b-3b2f8f929005-kube-api-access-w4gd9\") pod \"ovn-controller-zmljr-config-sqlm7\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") " pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.938192 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e056662-2837-4b3e-9f12-f5dcf366bd62" path="/var/lib/kubelet/pods/3e056662-2837-4b3e-9f12-f5dcf366bd62/volumes"
Sep 29 09:58:06 crc kubenswrapper[4991]: I0929 09:58:06.953332 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:07 crc kubenswrapper[4991]: I0929 09:58:07.199501 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Sep 29 09:58:07 crc kubenswrapper[4991]: I0929 09:58:07.200481 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c34c9dcd-aa7a-4cf8-9295-fb6d85190123" containerName="thanos-sidecar" containerID="cri-o://de193f104eb0ab4333fbe537969f6a2ebc8e315982d7c52923dfc4677fdb7ead" gracePeriod=600
Sep 29 09:58:07 crc kubenswrapper[4991]: I0929 09:58:07.200577 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c34c9dcd-aa7a-4cf8-9295-fb6d85190123" containerName="config-reloader" containerID="cri-o://b9ef7a4660e829242151b71b233301094242a3d1268bc0f4a1d4b98103f458f0" gracePeriod=600
Sep 29 09:58:07 crc kubenswrapper[4991]: I0929 09:58:07.201487 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c34c9dcd-aa7a-4cf8-9295-fb6d85190123" containerName="prometheus" containerID="cri-o://2d248fc94532dddc4c4a2b0944496453e7c81158f64bd213dd5d049621855747" gracePeriod=600
Sep 29 09:58:07 crc kubenswrapper[4991]: I0929 09:58:07.462751 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d702cb-530c-441d-b686-f205337a2aea","Type":"ContainerStarted","Data":"e8844433393f0a2c69e8cc67dc10934285766473bd5c176152947a3d2f57cdcd"}
Sep 29 09:58:07 crc kubenswrapper[4991]: I0929 09:58:07.468448 4991 generic.go:334] "Generic (PLEG): container finished" podID="c34c9dcd-aa7a-4cf8-9295-fb6d85190123" containerID="de193f104eb0ab4333fbe537969f6a2ebc8e315982d7c52923dfc4677fdb7ead" exitCode=0
Sep 29 09:58:07 crc kubenswrapper[4991]: I0929 09:58:07.468711 4991 generic.go:334] "Generic (PLEG): container finished" podID="c34c9dcd-aa7a-4cf8-9295-fb6d85190123" containerID="b9ef7a4660e829242151b71b233301094242a3d1268bc0f4a1d4b98103f458f0" exitCode=0
Sep 29 09:58:07 crc kubenswrapper[4991]: I0929 09:58:07.468727 4991 generic.go:334] "Generic (PLEG): container finished" podID="c34c9dcd-aa7a-4cf8-9295-fb6d85190123" containerID="2d248fc94532dddc4c4a2b0944496453e7c81158f64bd213dd5d049621855747" exitCode=0
Sep 29 09:58:07 crc kubenswrapper[4991]: I0929 09:58:07.468716 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c34c9dcd-aa7a-4cf8-9295-fb6d85190123","Type":"ContainerDied","Data":"de193f104eb0ab4333fbe537969f6a2ebc8e315982d7c52923dfc4677fdb7ead"}
Sep 29 09:58:07 crc kubenswrapper[4991]: I0929 09:58:07.468772 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c34c9dcd-aa7a-4cf8-9295-fb6d85190123","Type":"ContainerDied","Data":"b9ef7a4660e829242151b71b233301094242a3d1268bc0f4a1d4b98103f458f0"}
Sep 29 09:58:07 crc kubenswrapper[4991]: I0929 09:58:07.468787 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c34c9dcd-aa7a-4cf8-9295-fb6d85190123","Type":"ContainerDied","Data":"2d248fc94532dddc4c4a2b0944496453e7c81158f64bd213dd5d049621855747"}
Sep 29 09:58:07 crc kubenswrapper[4991]: I0929 09:58:07.491223 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zmljr-config-sqlm7"]
Sep 29 09:58:07 crc kubenswrapper[4991]: I0929 09:58:07.946965 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 29 09:58:07 crc kubenswrapper[4991]: I0929 09:58:07.947029 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.199486 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.266579 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-prometheus-metric-storage-rulefiles-0\") pod \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") "
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.266723 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75025658-19d5-4acf-8d8e-ff5ece66444e\") pod \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") "
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.266789 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-config-out\") pod \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") "
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.266824 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-web-config\") pod \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") "
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.266848 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-tls-assets\") pod \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") "
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.266917 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-thanos-prometheus-http-client-file\") pod \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") "
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.267011 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmq6v\" (UniqueName: \"kubernetes.io/projected/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-kube-api-access-fmq6v\") pod \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") "
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.267075 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-config\") pod \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\" (UID: \"c34c9dcd-aa7a-4cf8-9295-fb6d85190123\") "
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.267861 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "c34c9dcd-aa7a-4cf8-9295-fb6d85190123" (UID: "c34c9dcd-aa7a-4cf8-9295-fb6d85190123"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.268458 4991 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.274103 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-config-out" (OuterVolumeSpecName: "config-out") pod "c34c9dcd-aa7a-4cf8-9295-fb6d85190123" (UID: "c34c9dcd-aa7a-4cf8-9295-fb6d85190123"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.274713 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "c34c9dcd-aa7a-4cf8-9295-fb6d85190123" (UID: "c34c9dcd-aa7a-4cf8-9295-fb6d85190123"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.276310 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-config" (OuterVolumeSpecName: "config") pod "c34c9dcd-aa7a-4cf8-9295-fb6d85190123" (UID: "c34c9dcd-aa7a-4cf8-9295-fb6d85190123"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.283071 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "c34c9dcd-aa7a-4cf8-9295-fb6d85190123" (UID: "c34c9dcd-aa7a-4cf8-9295-fb6d85190123"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.283126 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-kube-api-access-fmq6v" (OuterVolumeSpecName: "kube-api-access-fmq6v") pod "c34c9dcd-aa7a-4cf8-9295-fb6d85190123" (UID: "c34c9dcd-aa7a-4cf8-9295-fb6d85190123"). InnerVolumeSpecName "kube-api-access-fmq6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.305795 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75025658-19d5-4acf-8d8e-ff5ece66444e" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "c34c9dcd-aa7a-4cf8-9295-fb6d85190123" (UID: "c34c9dcd-aa7a-4cf8-9295-fb6d85190123"). InnerVolumeSpecName "pvc-75025658-19d5-4acf-8d8e-ff5ece66444e". PluginName "kubernetes.io/csi", VolumeGidValue ""
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.326755 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-web-config" (OuterVolumeSpecName: "web-config") pod "c34c9dcd-aa7a-4cf8-9295-fb6d85190123" (UID: "c34c9dcd-aa7a-4cf8-9295-fb6d85190123"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.370635 4991 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-75025658-19d5-4acf-8d8e-ff5ece66444e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75025658-19d5-4acf-8d8e-ff5ece66444e\") on node \"crc\" "
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.371246 4991 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-config-out\") on node \"crc\" DevicePath \"\""
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.371264 4991 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-web-config\") on node \"crc\" DevicePath \"\""
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.371275 4991 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-tls-assets\") on node \"crc\" DevicePath \"\""
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.371288 4991 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.371301 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmq6v\" (UniqueName: \"kubernetes.io/projected/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-kube-api-access-fmq6v\") on node \"crc\" DevicePath \"\""
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.371313 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c34c9dcd-aa7a-4cf8-9295-fb6d85190123-config\") on node \"crc\" DevicePath \"\""
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.414569 4991 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.414734 4991 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-75025658-19d5-4acf-8d8e-ff5ece66444e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75025658-19d5-4acf-8d8e-ff5ece66444e") on node "crc"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.473028 4991 reconciler_common.go:293] "Volume detached for volume \"pvc-75025658-19d5-4acf-8d8e-ff5ece66444e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75025658-19d5-4acf-8d8e-ff5ece66444e\") on node \"crc\" DevicePath \"\""
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.489677 4991 generic.go:334] "Generic (PLEG): container finished" podID="230df798-e7df-41e8-9b6b-3b2f8f929005" containerID="bb4a35b1701bd73c145458207f8e3649e39d503557abd4710d96181c0743d7fe" exitCode=0
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.489831 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zmljr-config-sqlm7" event={"ID":"230df798-e7df-41e8-9b6b-3b2f8f929005","Type":"ContainerDied","Data":"bb4a35b1701bd73c145458207f8e3649e39d503557abd4710d96181c0743d7fe"}
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.489860 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zmljr-config-sqlm7" event={"ID":"230df798-e7df-41e8-9b6b-3b2f8f929005","Type":"ContainerStarted","Data":"3ddbbac0114942702c440f97707cbbdc29c1b442624d7a5b5b246d72bf813f76"}
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.498646 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c34c9dcd-aa7a-4cf8-9295-fb6d85190123","Type":"ContainerDied","Data":"ae085f0820edb853ce053b88c170a1d847abd48a44efc4feed1011b3a0a4de49"}
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.498702 4991 scope.go:117] "RemoveContainer" containerID="de193f104eb0ab4333fbe537969f6a2ebc8e315982d7c52923dfc4677fdb7ead"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.498746 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.555589 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.566888 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.588117 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Sep 29 09:58:08 crc kubenswrapper[4991]: E0929 09:58:08.588720 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34c9dcd-aa7a-4cf8-9295-fb6d85190123" containerName="thanos-sidecar"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.588811 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34c9dcd-aa7a-4cf8-9295-fb6d85190123" containerName="thanos-sidecar"
Sep 29 09:58:08 crc kubenswrapper[4991]: E0929 09:58:08.588962 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34c9dcd-aa7a-4cf8-9295-fb6d85190123" containerName="init-config-reloader"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.589043 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34c9dcd-aa7a-4cf8-9295-fb6d85190123" containerName="init-config-reloader"
Sep 29 09:58:08 crc kubenswrapper[4991]: E0929 09:58:08.589115 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34c9dcd-aa7a-4cf8-9295-fb6d85190123" containerName="prometheus"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.589181 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34c9dcd-aa7a-4cf8-9295-fb6d85190123" containerName="prometheus"
Sep 29 09:58:08 crc kubenswrapper[4991]: E0929 09:58:08.589257 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34c9dcd-aa7a-4cf8-9295-fb6d85190123" containerName="config-reloader"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.589368 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34c9dcd-aa7a-4cf8-9295-fb6d85190123" containerName="config-reloader"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.589604 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="c34c9dcd-aa7a-4cf8-9295-fb6d85190123" containerName="prometheus"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.589687 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="c34c9dcd-aa7a-4cf8-9295-fb6d85190123" containerName="config-reloader"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.589748 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="c34c9dcd-aa7a-4cf8-9295-fb6d85190123" containerName="thanos-sidecar"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.591636 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.599032 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.601307 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-4m5ct"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.603515 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.604991 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.605155 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.606550 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.606714 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.609571 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.677295 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c003699b-006e-4a8b-9de5-2a153984ed1a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.677372 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c003699b-006e-4a8b-9de5-2a153984ed1a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.677450 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c003699b-006e-4a8b-9de5-2a153984ed1a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.677475 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c003699b-006e-4a8b-9de5-2a153984ed1a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.677527 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-75025658-19d5-4acf-8d8e-ff5ece66444e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75025658-19d5-4acf-8d8e-ff5ece66444e\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.677567 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c003699b-006e-4a8b-9de5-2a153984ed1a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.677611 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2kxh\" (UniqueName: \"kubernetes.io/projected/c003699b-006e-4a8b-9de5-2a153984ed1a-kube-api-access-d2kxh\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.677653 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c003699b-006e-4a8b-9de5-2a153984ed1a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.677715 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c003699b-006e-4a8b-9de5-2a153984ed1a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.677777 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c003699b-006e-4a8b-9de5-2a153984ed1a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.677809 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c003699b-006e-4a8b-9de5-2a153984ed1a-config\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.755029 4991 scope.go:117] "RemoveContainer" containerID="b9ef7a4660e829242151b71b233301094242a3d1268bc0f4a1d4b98103f458f0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.779435 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c003699b-006e-4a8b-9de5-2a153984ed1a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.779489 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c003699b-006e-4a8b-9de5-2a153984ed1a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.779527 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c003699b-006e-4a8b-9de5-2a153984ed1a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.779549 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c003699b-006e-4a8b-9de5-2a153984ed1a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.779601 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-75025658-19d5-4acf-8d8e-ff5ece66444e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75025658-19d5-4acf-8d8e-ff5ece66444e\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.779631 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c003699b-006e-4a8b-9de5-2a153984ed1a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.779660 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2kxh\" (UniqueName: \"kubernetes.io/projected/c003699b-006e-4a8b-9de5-2a153984ed1a-kube-api-access-d2kxh\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.779685 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c003699b-006e-4a8b-9de5-2a153984ed1a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.779726 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c003699b-006e-4a8b-9de5-2a153984ed1a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.779769 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c003699b-006e-4a8b-9de5-2a153984ed1a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.779790 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c003699b-006e-4a8b-9de5-2a153984ed1a-config\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.781904 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c003699b-006e-4a8b-9de5-2a153984ed1a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.784895 4991 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.784922 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c003699b-006e-4a8b-9de5-2a153984ed1a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.784936 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-75025658-19d5-4acf-8d8e-ff5ece66444e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75025658-19d5-4acf-8d8e-ff5ece66444e\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e60eb26f743f9cd4188eb4a778b023ea193a20b4b8b4bc41849015e0033fa4b7/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.785488 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c003699b-006e-4a8b-9de5-2a153984ed1a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.787535 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c003699b-006e-4a8b-9de5-2a153984ed1a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.792475 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c003699b-006e-4a8b-9de5-2a153984ed1a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.792741 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c003699b-006e-4a8b-9de5-2a153984ed1a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.792994 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c003699b-006e-4a8b-9de5-2a153984ed1a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.793012 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c003699b-006e-4a8b-9de5-2a153984ed1a-config\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.794360 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c003699b-006e-4a8b-9de5-2a153984ed1a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.805167 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2kxh\" (UniqueName: \"kubernetes.io/projected/c003699b-006e-4a8b-9de5-2a153984ed1a-kube-api-access-d2kxh\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.811840 4991 scope.go:117] "RemoveContainer" containerID="2d248fc94532dddc4c4a2b0944496453e7c81158f64bd213dd5d049621855747"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.844000 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-75025658-19d5-4acf-8d8e-ff5ece66444e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-75025658-19d5-4acf-8d8e-ff5ece66444e\") pod \"prometheus-metric-storage-0\" (UID: \"c003699b-006e-4a8b-9de5-2a153984ed1a\") " pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.846534 4991 scope.go:117] "RemoveContainer" containerID="418ee5bd99b2ddebf9b9385dedbcb7251673173c4e607d6764df201111c856e2"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.913463 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Sep 29 09:58:08 crc kubenswrapper[4991]: I0929 09:58:08.938766 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c34c9dcd-aa7a-4cf8-9295-fb6d85190123" path="/var/lib/kubelet/pods/c34c9dcd-aa7a-4cf8-9295-fb6d85190123/volumes"
Sep 29 09:58:09 crc kubenswrapper[4991]: I0929 09:58:09.398066 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Sep 29 09:58:09 crc kubenswrapper[4991]: W0929 09:58:09.410887 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc003699b_006e_4a8b_9de5_2a153984ed1a.slice/crio-67ee53b75df25a5803cfb3163b2836a47f9493b3ef53378cc6c554d6c8bfe4dd WatchSource:0}: Error finding container 67ee53b75df25a5803cfb3163b2836a47f9493b3ef53378cc6c554d6c8bfe4dd: Status 404 returned error can't find the container with id 67ee53b75df25a5803cfb3163b2836a47f9493b3ef53378cc6c554d6c8bfe4dd
Sep 29 09:58:09 crc kubenswrapper[4991]: I0929 09:58:09.521447 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d702cb-530c-441d-b686-f205337a2aea","Type":"ContainerStarted","Data":"5ea85997021b76039f250fd56f4fca4ca442ee058a5f88a32020621911998d5e"}
Sep 29 09:58:09 crc kubenswrapper[4991]: I0929 09:58:09.521505 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d702cb-530c-441d-b686-f205337a2aea","Type":"ContainerStarted","Data":"df8d2590e4d0aa5bf333a814599685c132a3c87aedbf1ae7afe3585c18f06a48"}
Sep 29 09:58:09 crc kubenswrapper[4991]: I0929 09:58:09.524323 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c003699b-006e-4a8b-9de5-2a153984ed1a","Type":"ContainerStarted","Data":"67ee53b75df25a5803cfb3163b2836a47f9493b3ef53378cc6c554d6c8bfe4dd"}
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.030763 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.111888 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/230df798-e7df-41e8-9b6b-3b2f8f929005-scripts\") pod \"230df798-e7df-41e8-9b6b-3b2f8f929005\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") "
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.112139 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4gd9\" (UniqueName: \"kubernetes.io/projected/230df798-e7df-41e8-9b6b-3b2f8f929005-kube-api-access-w4gd9\") pod \"230df798-e7df-41e8-9b6b-3b2f8f929005\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") "
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.112178 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/230df798-e7df-41e8-9b6b-3b2f8f929005-var-log-ovn\") pod \"230df798-e7df-41e8-9b6b-3b2f8f929005\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") "
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.112230 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/230df798-e7df-41e8-9b6b-3b2f8f929005-var-run\") pod \"230df798-e7df-41e8-9b6b-3b2f8f929005\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") "
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.112264 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/230df798-e7df-41e8-9b6b-3b2f8f929005-additional-scripts\") pod \"230df798-e7df-41e8-9b6b-3b2f8f929005\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") "
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.112289 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/230df798-e7df-41e8-9b6b-3b2f8f929005-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "230df798-e7df-41e8-9b6b-3b2f8f929005" (UID: "230df798-e7df-41e8-9b6b-3b2f8f929005"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.112351 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/230df798-e7df-41e8-9b6b-3b2f8f929005-var-run" (OuterVolumeSpecName: "var-run") pod "230df798-e7df-41e8-9b6b-3b2f8f929005" (UID: "230df798-e7df-41e8-9b6b-3b2f8f929005"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.112395 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/230df798-e7df-41e8-9b6b-3b2f8f929005-var-run-ovn\") pod \"230df798-e7df-41e8-9b6b-3b2f8f929005\" (UID: \"230df798-e7df-41e8-9b6b-3b2f8f929005\") "
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.112969 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/230df798-e7df-41e8-9b6b-3b2f8f929005-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "230df798-e7df-41e8-9b6b-3b2f8f929005" (UID: "230df798-e7df-41e8-9b6b-3b2f8f929005"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.113025 4991 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/230df798-e7df-41e8-9b6b-3b2f8f929005-var-log-ovn\") on node \"crc\" DevicePath \"\""
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.113048 4991 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/230df798-e7df-41e8-9b6b-3b2f8f929005-var-run\") on node \"crc\" DevicePath \"\""
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.113083 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/230df798-e7df-41e8-9b6b-3b2f8f929005-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "230df798-e7df-41e8-9b6b-3b2f8f929005" (UID: "230df798-e7df-41e8-9b6b-3b2f8f929005"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.113158 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/230df798-e7df-41e8-9b6b-3b2f8f929005-scripts" (OuterVolumeSpecName: "scripts") pod "230df798-e7df-41e8-9b6b-3b2f8f929005" (UID: "230df798-e7df-41e8-9b6b-3b2f8f929005"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.118453 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/230df798-e7df-41e8-9b6b-3b2f8f929005-kube-api-access-w4gd9" (OuterVolumeSpecName: "kube-api-access-w4gd9") pod "230df798-e7df-41e8-9b6b-3b2f8f929005" (UID: "230df798-e7df-41e8-9b6b-3b2f8f929005"). InnerVolumeSpecName "kube-api-access-w4gd9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.215112 4991 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/230df798-e7df-41e8-9b6b-3b2f8f929005-var-run-ovn\") on node \"crc\" DevicePath \"\""
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.215149 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/230df798-e7df-41e8-9b6b-3b2f8f929005-scripts\") on node \"crc\" DevicePath \"\""
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.215172 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4gd9\" (UniqueName: \"kubernetes.io/projected/230df798-e7df-41e8-9b6b-3b2f8f929005-kube-api-access-w4gd9\") on node \"crc\" DevicePath \"\""
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.215181 4991 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/230df798-e7df-41e8-9b6b-3b2f8f929005-additional-scripts\") on node \"crc\" DevicePath \"\""
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.537761 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d702cb-530c-441d-b686-f205337a2aea","Type":"ContainerStarted","Data":"aa109f77eb47a945f861c3ea2fd77609ea44cfb76dd29678eec5d9ece447650d"}
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.537801 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d702cb-530c-441d-b686-f205337a2aea","Type":"ContainerStarted","Data":"2fe4014d387e5514e40ee2ec73466e520c43d9aba946d64fcf775d00aa46b66e"}
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.540409 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zmljr-config-sqlm7" event={"ID":"230df798-e7df-41e8-9b6b-3b2f8f929005","Type":"ContainerDied","Data":"3ddbbac0114942702c440f97707cbbdc29c1b442624d7a5b5b246d72bf813f76"}
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.540445 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ddbbac0114942702c440f97707cbbdc29c1b442624d7a5b5b246d72bf813f76"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.540517 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zmljr-config-sqlm7"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.740048 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-qkb72"]
Sep 29 09:58:10 crc kubenswrapper[4991]: E0929 09:58:10.740771 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="230df798-e7df-41e8-9b6b-3b2f8f929005" containerName="ovn-config"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.740782 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="230df798-e7df-41e8-9b6b-3b2f8f929005" containerName="ovn-config"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.741003 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="230df798-e7df-41e8-9b6b-3b2f8f929005" containerName="ovn-config"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.741714 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qkb72"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.744182 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.744350 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.744367 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.744586 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rf9rz"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.750110 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qkb72"]
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.831274 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28170ca7-9af8-4fdf-a37d-844dba824147-config-data\") pod \"keystone-db-sync-qkb72\" (UID: \"28170ca7-9af8-4fdf-a37d-844dba824147\") " pod="openstack/keystone-db-sync-qkb72"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.831470 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28170ca7-9af8-4fdf-a37d-844dba824147-combined-ca-bundle\") pod \"keystone-db-sync-qkb72\" (UID: \"28170ca7-9af8-4fdf-a37d-844dba824147\") " pod="openstack/keystone-db-sync-qkb72"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.831498 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr85c\" (UniqueName: \"kubernetes.io/projected/28170ca7-9af8-4fdf-a37d-844dba824147-kube-api-access-qr85c\") pod \"keystone-db-sync-qkb72\" (UID: \"28170ca7-9af8-4fdf-a37d-844dba824147\") " pod="openstack/keystone-db-sync-qkb72"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.867292 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-9kck9"]
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.869274 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9kck9"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.875472 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9kck9"]
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.879043 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.879349 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wkgtn"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.932492 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr85c\" (UniqueName: \"kubernetes.io/projected/28170ca7-9af8-4fdf-a37d-844dba824147-kube-api-access-qr85c\") pod \"keystone-db-sync-qkb72\" (UID: \"28170ca7-9af8-4fdf-a37d-844dba824147\") " pod="openstack/keystone-db-sync-qkb72"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.933147 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2031f120-9626-495a-b555-e4e960d2e4b1-combined-ca-bundle\") pod \"glance-db-sync-9kck9\" (UID: \"2031f120-9626-495a-b555-e4e960d2e4b1\") " pod="openstack/glance-db-sync-9kck9"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.933302 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28170ca7-9af8-4fdf-a37d-844dba824147-config-data\") pod \"keystone-db-sync-qkb72\" (UID: \"28170ca7-9af8-4fdf-a37d-844dba824147\") " pod="openstack/keystone-db-sync-qkb72"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.933537 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2031f120-9626-495a-b555-e4e960d2e4b1-config-data\") pod \"glance-db-sync-9kck9\" (UID: \"2031f120-9626-495a-b555-e4e960d2e4b1\") " pod="openstack/glance-db-sync-9kck9"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.933693 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2031f120-9626-495a-b555-e4e960d2e4b1-db-sync-config-data\") pod \"glance-db-sync-9kck9\" (UID: \"2031f120-9626-495a-b555-e4e960d2e4b1\") " pod="openstack/glance-db-sync-9kck9"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.934668 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvj4n\" (UniqueName: \"kubernetes.io/projected/2031f120-9626-495a-b555-e4e960d2e4b1-kube-api-access-xvj4n\") pod \"glance-db-sync-9kck9\" (UID: \"2031f120-9626-495a-b555-e4e960d2e4b1\") " pod="openstack/glance-db-sync-9kck9"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.934884 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28170ca7-9af8-4fdf-a37d-844dba824147-combined-ca-bundle\") pod \"keystone-db-sync-qkb72\" (UID: \"28170ca7-9af8-4fdf-a37d-844dba824147\") " pod="openstack/keystone-db-sync-qkb72"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.939592 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28170ca7-9af8-4fdf-a37d-844dba824147-config-data\") pod \"keystone-db-sync-qkb72\" (UID: \"28170ca7-9af8-4fdf-a37d-844dba824147\") " pod="openstack/keystone-db-sync-qkb72"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.941348 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28170ca7-9af8-4fdf-a37d-844dba824147-combined-ca-bundle\") pod \"keystone-db-sync-qkb72\" (UID: \"28170ca7-9af8-4fdf-a37d-844dba824147\") " pod="openstack/keystone-db-sync-qkb72"
Sep 29 09:58:10 crc kubenswrapper[4991]: I0929 09:58:10.955980 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr85c\" (UniqueName: \"kubernetes.io/projected/28170ca7-9af8-4fdf-a37d-844dba824147-kube-api-access-qr85c\") pod \"keystone-db-sync-qkb72\" (UID: \"28170ca7-9af8-4fdf-a37d-844dba824147\") " pod="openstack/keystone-db-sync-qkb72"
Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.037172 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvj4n\" (UniqueName: \"kubernetes.io/projected/2031f120-9626-495a-b555-e4e960d2e4b1-kube-api-access-xvj4n\") pod \"glance-db-sync-9kck9\" (UID: \"2031f120-9626-495a-b555-e4e960d2e4b1\") " pod="openstack/glance-db-sync-9kck9"
Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.037724 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2031f120-9626-495a-b555-e4e960d2e4b1-combined-ca-bundle\") pod \"glance-db-sync-9kck9\" (UID: \"2031f120-9626-495a-b555-e4e960d2e4b1\") " pod="openstack/glance-db-sync-9kck9"
Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.037860 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2031f120-9626-495a-b555-e4e960d2e4b1-config-data\") pod \"glance-db-sync-9kck9\" (UID: \"2031f120-9626-495a-b555-e4e960d2e4b1\") " pod="openstack/glance-db-sync-9kck9"
Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.038094 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2031f120-9626-495a-b555-e4e960d2e4b1-db-sync-config-data\") pod \"glance-db-sync-9kck9\" (UID: \"2031f120-9626-495a-b555-e4e960d2e4b1\") " pod="openstack/glance-db-sync-9kck9"
Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.042382 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2031f120-9626-495a-b555-e4e960d2e4b1-config-data\") pod \"glance-db-sync-9kck9\" (UID: \"2031f120-9626-495a-b555-e4e960d2e4b1\") " pod="openstack/glance-db-sync-9kck9"
Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.042810 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2031f120-9626-495a-b555-e4e960d2e4b1-db-sync-config-data\") pod \"glance-db-sync-9kck9\" (UID: \"2031f120-9626-495a-b555-e4e960d2e4b1\") " pod="openstack/glance-db-sync-9kck9"
Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.043593 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2031f120-9626-495a-b555-e4e960d2e4b1-combined-ca-bundle\") pod \"glance-db-sync-9kck9\" (UID: \"2031f120-9626-495a-b555-e4e960d2e4b1\") " pod="openstack/glance-db-sync-9kck9"
Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.058614 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvj4n\" (UniqueName: \"kubernetes.io/projected/2031f120-9626-495a-b555-e4e960d2e4b1-kube-api-access-xvj4n\") pod \"glance-db-sync-9kck9\" (UID: \"2031f120-9626-495a-b555-e4e960d2e4b1\") " pod="openstack/glance-db-sync-9kck9"
Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.073485 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qkb72"
Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.134282 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zmljr-config-sqlm7"]
Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.142163 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zmljr-config-sqlm7"]
Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.225253 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zmljr-config-6ztd8"]
Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.231585 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zmljr-config-6ztd8"
Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.232001 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9kck9"
Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.236491 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.250971 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zmljr-config-6ztd8"]
Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.360493 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c3a48cbe-a758-473d-8834-f203eef1ce71-var-run-ovn\") pod \"ovn-controller-zmljr-config-6ztd8\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " pod="openstack/ovn-controller-zmljr-config-6ztd8"
Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.360580 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c3a48cbe-a758-473d-8834-f203eef1ce71-additional-scripts\") pod \"ovn-controller-zmljr-config-6ztd8\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " pod="openstack/ovn-controller-zmljr-config-6ztd8"
Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.360724 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c3a48cbe-a758-473d-8834-f203eef1ce71-var-log-ovn\") pod \"ovn-controller-zmljr-config-6ztd8\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " pod="openstack/ovn-controller-zmljr-config-6ztd8"
Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.360826 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksj5r\" (UniqueName: \"kubernetes.io/projected/c3a48cbe-a758-473d-8834-f203eef1ce71-kube-api-access-ksj5r\") pod \"ovn-controller-zmljr-config-6ztd8\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " pod="openstack/ovn-controller-zmljr-config-6ztd8"
Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.360855 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName:
\"kubernetes.io/configmap/c3a48cbe-a758-473d-8834-f203eef1ce71-scripts\") pod \"ovn-controller-zmljr-config-6ztd8\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " pod="openstack/ovn-controller-zmljr-config-6ztd8" Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.361223 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c3a48cbe-a758-473d-8834-f203eef1ce71-var-run\") pod \"ovn-controller-zmljr-config-6ztd8\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " pod="openstack/ovn-controller-zmljr-config-6ztd8" Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.473263 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c3a48cbe-a758-473d-8834-f203eef1ce71-var-run\") pod \"ovn-controller-zmljr-config-6ztd8\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " pod="openstack/ovn-controller-zmljr-config-6ztd8" Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.473370 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c3a48cbe-a758-473d-8834-f203eef1ce71-var-run-ovn\") pod \"ovn-controller-zmljr-config-6ztd8\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " pod="openstack/ovn-controller-zmljr-config-6ztd8" Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.473425 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c3a48cbe-a758-473d-8834-f203eef1ce71-additional-scripts\") pod \"ovn-controller-zmljr-config-6ztd8\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " pod="openstack/ovn-controller-zmljr-config-6ztd8" Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.473446 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c3a48cbe-a758-473d-8834-f203eef1ce71-var-log-ovn\") pod \"ovn-controller-zmljr-config-6ztd8\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " pod="openstack/ovn-controller-zmljr-config-6ztd8" Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.473476 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksj5r\" (UniqueName: \"kubernetes.io/projected/c3a48cbe-a758-473d-8834-f203eef1ce71-kube-api-access-ksj5r\") pod \"ovn-controller-zmljr-config-6ztd8\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " pod="openstack/ovn-controller-zmljr-config-6ztd8" Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.473503 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3a48cbe-a758-473d-8834-f203eef1ce71-scripts\") pod \"ovn-controller-zmljr-config-6ztd8\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " pod="openstack/ovn-controller-zmljr-config-6ztd8" Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.473688 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c3a48cbe-a758-473d-8834-f203eef1ce71-var-run\") pod \"ovn-controller-zmljr-config-6ztd8\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " pod="openstack/ovn-controller-zmljr-config-6ztd8" Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.473755 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c3a48cbe-a758-473d-8834-f203eef1ce71-var-run-ovn\") pod \"ovn-controller-zmljr-config-6ztd8\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " pod="openstack/ovn-controller-zmljr-config-6ztd8" Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.473793 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c3a48cbe-a758-473d-8834-f203eef1ce71-var-log-ovn\") pod \"ovn-controller-zmljr-config-6ztd8\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " pod="openstack/ovn-controller-zmljr-config-6ztd8" Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.474529 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c3a48cbe-a758-473d-8834-f203eef1ce71-additional-scripts\") pod \"ovn-controller-zmljr-config-6ztd8\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " pod="openstack/ovn-controller-zmljr-config-6ztd8" Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.475433 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3a48cbe-a758-473d-8834-f203eef1ce71-scripts\") pod \"ovn-controller-zmljr-config-6ztd8\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " pod="openstack/ovn-controller-zmljr-config-6ztd8" Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.491225 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksj5r\" (UniqueName: \"kubernetes.io/projected/c3a48cbe-a758-473d-8834-f203eef1ce71-kube-api-access-ksj5r\") pod \"ovn-controller-zmljr-config-6ztd8\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " pod="openstack/ovn-controller-zmljr-config-6ztd8" Sep 29 09:58:11 crc kubenswrapper[4991]: I0929 09:58:11.577045 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zmljr-config-6ztd8" Sep 29 09:58:12 crc kubenswrapper[4991]: I0929 09:58:12.102764 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qkb72"] Sep 29 09:58:12 crc kubenswrapper[4991]: W0929 09:58:12.107213 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2031f120_9626_495a_b555_e4e960d2e4b1.slice/crio-68ea2387d953e0de1a813c47801cf6817a995faaea1f07b9031a91fa1576021c WatchSource:0}: Error finding container 68ea2387d953e0de1a813c47801cf6817a995faaea1f07b9031a91fa1576021c: Status 404 returned error can't find the container with id 68ea2387d953e0de1a813c47801cf6817a995faaea1f07b9031a91fa1576021c Sep 29 09:58:12 crc kubenswrapper[4991]: I0929 09:58:12.112862 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9kck9"] Sep 29 09:58:12 crc kubenswrapper[4991]: I0929 09:58:12.265361 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zmljr-config-6ztd8"] Sep 29 09:58:12 crc kubenswrapper[4991]: I0929 09:58:12.567557 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zmljr-config-6ztd8" event={"ID":"c3a48cbe-a758-473d-8834-f203eef1ce71","Type":"ContainerStarted","Data":"86fd5ba7798fe5eedfb6bb23fb783fbf461e91460c8f8474a860ad3268d70b42"} Sep 29 09:58:12 crc kubenswrapper[4991]: I0929 09:58:12.569240 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c003699b-006e-4a8b-9de5-2a153984ed1a","Type":"ContainerStarted","Data":"ffba6edf713c545cdc6775433b884aa8e32d58d0596252cf029137d6ebb15503"} Sep 29 09:58:12 crc kubenswrapper[4991]: I0929 09:58:12.585549 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qkb72" event={"ID":"28170ca7-9af8-4fdf-a37d-844dba824147","Type":"ContainerStarted","Data":"dfdfc6ae3ea37668092968d637469a664458c5407890616fd74722f6f7c8966f"} Sep 29 09:58:12 crc kubenswrapper[4991]: I0929 09:58:12.607970 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d702cb-530c-441d-b686-f205337a2aea","Type":"ContainerStarted","Data":"fce5df91191c502cc899d3a304b1cac3b58e2c7ec7e95bc23a708e6d9ca10691"} Sep 29 09:58:12 crc kubenswrapper[4991]: I0929 09:58:12.608011 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d702cb-530c-441d-b686-f205337a2aea","Type":"ContainerStarted","Data":"8435b4a287eecafda11e97deb93851988fbc1bfce4b77686f109c7ebb5e7e953"} Sep 29 09:58:12 crc kubenswrapper[4991]: I0929 09:58:12.608030 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d702cb-530c-441d-b686-f205337a2aea","Type":"ContainerStarted","Data":"bdbb0bc516e0165ff2fa7bcdb3958ef3360796b2dcf68d5e8737d2fd155a1b23"} Sep 29 09:58:12 crc kubenswrapper[4991]: I0929 09:58:12.617180 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9kck9" event={"ID":"2031f120-9626-495a-b555-e4e960d2e4b1","Type":"ContainerStarted","Data":"68ea2387d953e0de1a813c47801cf6817a995faaea1f07b9031a91fa1576021c"} Sep 29 09:58:12 crc kubenswrapper[4991]: I0929 09:58:12.951092 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="230df798-e7df-41e8-9b6b-3b2f8f929005" path="/var/lib/kubelet/pods/230df798-e7df-41e8-9b6b-3b2f8f929005/volumes" Sep 29 09:58:13 crc kubenswrapper[4991]: I0929 
09:58:13.630787 4991 generic.go:334] "Generic (PLEG): container finished" podID="c3a48cbe-a758-473d-8834-f203eef1ce71" containerID="000fd7c46e4f126c63142daeec918131e0d6d75de7c9a3b392267b1845b747cc" exitCode=0 Sep 29 09:58:13 crc kubenswrapper[4991]: I0929 09:58:13.631002 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zmljr-config-6ztd8" event={"ID":"c3a48cbe-a758-473d-8834-f203eef1ce71","Type":"ContainerDied","Data":"000fd7c46e4f126c63142daeec918131e0d6d75de7c9a3b392267b1845b747cc"} Sep 29 09:58:13 crc kubenswrapper[4991]: I0929 09:58:13.638081 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d702cb-530c-441d-b686-f205337a2aea","Type":"ContainerStarted","Data":"21c3a2a15bd0a06a3ad37b0b2b328207d135ad58c1181b98a19877536f5e6492"} Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.557156 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zmljr-config-6ztd8" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.675375 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c3a48cbe-a758-473d-8834-f203eef1ce71-var-run\") pod \"c3a48cbe-a758-473d-8834-f203eef1ce71\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.675513 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c3a48cbe-a758-473d-8834-f203eef1ce71-additional-scripts\") pod \"c3a48cbe-a758-473d-8834-f203eef1ce71\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.675653 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c3a48cbe-a758-473d-8834-f203eef1ce71-var-run-ovn\") pod \"c3a48cbe-a758-473d-8834-f203eef1ce71\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.675778 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksj5r\" (UniqueName: \"kubernetes.io/projected/c3a48cbe-a758-473d-8834-f203eef1ce71-kube-api-access-ksj5r\") pod \"c3a48cbe-a758-473d-8834-f203eef1ce71\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.675813 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3a48cbe-a758-473d-8834-f203eef1ce71-scripts\") pod \"c3a48cbe-a758-473d-8834-f203eef1ce71\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.675892 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c3a48cbe-a758-473d-8834-f203eef1ce71-var-log-ovn\") pod \"c3a48cbe-a758-473d-8834-f203eef1ce71\" (UID: \"c3a48cbe-a758-473d-8834-f203eef1ce71\") " Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.676426 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3a48cbe-a758-473d-8834-f203eef1ce71-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c3a48cbe-a758-473d-8834-f203eef1ce71" (UID: "c3a48cbe-a758-473d-8834-f203eef1ce71"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.676463 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3a48cbe-a758-473d-8834-f203eef1ce71-var-run" (OuterVolumeSpecName: "var-run") pod "c3a48cbe-a758-473d-8834-f203eef1ce71" (UID: "c3a48cbe-a758-473d-8834-f203eef1ce71"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.677094 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3a48cbe-a758-473d-8834-f203eef1ce71-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c3a48cbe-a758-473d-8834-f203eef1ce71" (UID: "c3a48cbe-a758-473d-8834-f203eef1ce71"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.678619 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a48cbe-a758-473d-8834-f203eef1ce71-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c3a48cbe-a758-473d-8834-f203eef1ce71" (UID: "c3a48cbe-a758-473d-8834-f203eef1ce71"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.679198 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a48cbe-a758-473d-8834-f203eef1ce71-scripts" (OuterVolumeSpecName: "scripts") pod "c3a48cbe-a758-473d-8834-f203eef1ce71" (UID: "c3a48cbe-a758-473d-8834-f203eef1ce71"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.688962 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a48cbe-a758-473d-8834-f203eef1ce71-kube-api-access-ksj5r" (OuterVolumeSpecName: "kube-api-access-ksj5r") pod "c3a48cbe-a758-473d-8834-f203eef1ce71" (UID: "c3a48cbe-a758-473d-8834-f203eef1ce71"). InnerVolumeSpecName "kube-api-access-ksj5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.703783 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d702cb-530c-441d-b686-f205337a2aea","Type":"ContainerStarted","Data":"9e788d42498697c703fafff0fbdfb6b6653af7964c20c5fe497513d47d8b44a9"} Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.705556 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zmljr-config-6ztd8" event={"ID":"c3a48cbe-a758-473d-8834-f203eef1ce71","Type":"ContainerDied","Data":"86fd5ba7798fe5eedfb6bb23fb783fbf461e91460c8f8474a860ad3268d70b42"} Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.705651 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86fd5ba7798fe5eedfb6bb23fb783fbf461e91460c8f8474a860ad3268d70b42" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.705749 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zmljr-config-6ztd8" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.778441 4991 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c3a48cbe-a758-473d-8834-f203eef1ce71-var-run\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.778480 4991 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c3a48cbe-a758-473d-8834-f203eef1ce71-additional-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.778494 4991 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c3a48cbe-a758-473d-8834-f203eef1ce71-var-run-ovn\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.778505 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksj5r\" (UniqueName: \"kubernetes.io/projected/c3a48cbe-a758-473d-8834-f203eef1ce71-kube-api-access-ksj5r\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.778519 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3a48cbe-a758-473d-8834-f203eef1ce71-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.778528 4991 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c3a48cbe-a758-473d-8834-f203eef1ce71-var-log-ovn\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.864423 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-1446-account-create-25hvn"] Sep 29 09:58:17 crc kubenswrapper[4991]: E0929 09:58:17.865179 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a48cbe-a758-473d-8834-f203eef1ce71" containerName="ovn-config" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.865200 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a48cbe-a758-473d-8834-f203eef1ce71" containerName="ovn-config" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.865386 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a48cbe-a758-473d-8834-f203eef1ce71" containerName="ovn-config" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.866223 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1446-account-create-25hvn" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.868416 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.900064 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-1446-account-create-25hvn"] Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.963087 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-f564-account-create-kkgjq"] Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.964689 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-f564-account-create-kkgjq" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.968037 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.977718 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f564-account-create-kkgjq"] Sep 29 09:58:17 crc kubenswrapper[4991]: I0929 09:58:17.984512 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46qms\" (UniqueName: \"kubernetes.io/projected/43435cee-d9a2-4517-94c4-0e49cdd536a2-kube-api-access-46qms\") pod \"cinder-1446-account-create-25hvn\" (UID: \"43435cee-d9a2-4517-94c4-0e49cdd536a2\") " pod="openstack/cinder-1446-account-create-25hvn" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.086641 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2glfk\" (UniqueName: \"kubernetes.io/projected/388a587e-7633-4b2d-b4a4-4a083e06afbf-kube-api-access-2glfk\") pod \"barbican-f564-account-create-kkgjq\" (UID: \"388a587e-7633-4b2d-b4a4-4a083e06afbf\") " pod="openstack/barbican-f564-account-create-kkgjq" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.086970 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46qms\" (UniqueName: \"kubernetes.io/projected/43435cee-d9a2-4517-94c4-0e49cdd536a2-kube-api-access-46qms\") pod \"cinder-1446-account-create-25hvn\" (UID: \"43435cee-d9a2-4517-94c4-0e49cdd536a2\") " pod="openstack/cinder-1446-account-create-25hvn" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.108645 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46qms\" (UniqueName: \"kubernetes.io/projected/43435cee-d9a2-4517-94c4-0e49cdd536a2-kube-api-access-46qms\") pod \"cinder-1446-account-create-25hvn\" (UID: \"43435cee-d9a2-4517-94c4-0e49cdd536a2\") " pod="openstack/cinder-1446-account-create-25hvn" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.176664 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-bf69-account-create-nrt9x"] Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.178211 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-bf69-account-create-nrt9x" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.188865 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1446-account-create-25hvn" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.188880 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2glfk\" (UniqueName: \"kubernetes.io/projected/388a587e-7633-4b2d-b4a4-4a083e06afbf-kube-api-access-2glfk\") pod \"barbican-f564-account-create-kkgjq\" (UID: \"388a587e-7633-4b2d-b4a4-4a083e06afbf\") " pod="openstack/barbican-f564-account-create-kkgjq" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.192430 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.210649 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-bf69-account-create-nrt9x"] Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.218460 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2glfk\" (UniqueName: \"kubernetes.io/projected/388a587e-7633-4b2d-b4a4-4a083e06afbf-kube-api-access-2glfk\") pod \"barbican-f564-account-create-kkgjq\" (UID: \"388a587e-7633-4b2d-b4a4-4a083e06afbf\") " pod="openstack/barbican-f564-account-create-kkgjq" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.289515 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f564-account-create-kkgjq" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.290818 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8m6m\" (UniqueName: \"kubernetes.io/projected/fc29d8c0-2bb0-40fe-b3f3-98b30719dfed-kube-api-access-l8m6m\") pod \"heat-bf69-account-create-nrt9x\" (UID: \"fc29d8c0-2bb0-40fe-b3f3-98b30719dfed\") " pod="openstack/heat-bf69-account-create-nrt9x" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.393716 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8m6m\" (UniqueName: \"kubernetes.io/projected/fc29d8c0-2bb0-40fe-b3f3-98b30719dfed-kube-api-access-l8m6m\") pod \"heat-bf69-account-create-nrt9x\" (UID: \"fc29d8c0-2bb0-40fe-b3f3-98b30719dfed\") " pod="openstack/heat-bf69-account-create-nrt9x" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.409308 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8m6m\" (UniqueName: \"kubernetes.io/projected/fc29d8c0-2bb0-40fe-b3f3-98b30719dfed-kube-api-access-l8m6m\") pod \"heat-bf69-account-create-nrt9x\" (UID: \"fc29d8c0-2bb0-40fe-b3f3-98b30719dfed\") " pod="openstack/heat-bf69-account-create-nrt9x" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.492024 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b99b-account-create-4558r"] Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.493423 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b99b-account-create-4558r" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.498324 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.503612 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b99b-account-create-4558r"] Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.512683 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-bf69-account-create-nrt9x" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.562678 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-2a92-account-create-lsvp7"] Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.564827 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-2a92-account-create-lsvp7" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.567369 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.578114 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-2a92-account-create-lsvp7"] Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.643304 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb82s\" (UniqueName: \"kubernetes.io/projected/9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6-kube-api-access-bb82s\") pod \"mysqld-exporter-2a92-account-create-lsvp7\" (UID: \"9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6\") " pod="openstack/mysqld-exporter-2a92-account-create-lsvp7" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.643531 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrg7r\" (UniqueName: \"kubernetes.io/projected/7a782a05-3455-4ac2-973c-34e78a249789-kube-api-access-xrg7r\") pod \"neutron-b99b-account-create-4558r\" (UID: \"7a782a05-3455-4ac2-973c-34e78a249789\") " pod="openstack/neutron-b99b-account-create-4558r" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.675782 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zmljr-config-6ztd8"] Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.684414 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zmljr-config-6ztd8"] Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.723769 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qkb72" event={"ID":"28170ca7-9af8-4fdf-a37d-844dba824147","Type":"ContainerStarted","Data":"f8a3906b9574253615c2a6b3187f97f066249e24a9588c321efe33824851aaab"} Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.745723 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrg7r\" (UniqueName: \"kubernetes.io/projected/7a782a05-3455-4ac2-973c-34e78a249789-kube-api-access-xrg7r\") pod \"neutron-b99b-account-create-4558r\" (UID: \"7a782a05-3455-4ac2-973c-34e78a249789\") " pod="openstack/neutron-b99b-account-create-4558r" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.746060 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb82s\" (UniqueName: \"kubernetes.io/projected/9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6-kube-api-access-bb82s\") pod \"mysqld-exporter-2a92-account-create-lsvp7\" (UID: \"9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6\") " pod="openstack/mysqld-exporter-2a92-account-create-lsvp7" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.747628 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-qkb72" podStartSLOduration=3.515614671 podStartE2EDuration="8.747612741s" podCreationTimestamp="2025-09-29 09:58:10 +0000 UTC" firstStartedPulling="2025-09-29 09:58:12.123536792 +0000 UTC m=+1227.979464820" 
lastFinishedPulling="2025-09-29 09:58:17.355534862 +0000 UTC m=+1233.211462890" observedRunningTime="2025-09-29 09:58:18.740473643 +0000 UTC m=+1234.596401671" watchObservedRunningTime="2025-09-29 09:58:18.747612741 +0000 UTC m=+1234.603540769" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.773289 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb82s\" (UniqueName: \"kubernetes.io/projected/9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6-kube-api-access-bb82s\") pod \"mysqld-exporter-2a92-account-create-lsvp7\" (UID: \"9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6\") " pod="openstack/mysqld-exporter-2a92-account-create-lsvp7" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.777075 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrg7r\" (UniqueName: \"kubernetes.io/projected/7a782a05-3455-4ac2-973c-34e78a249789-kube-api-access-xrg7r\") pod \"neutron-b99b-account-create-4558r\" (UID: \"7a782a05-3455-4ac2-973c-34e78a249789\") " pod="openstack/neutron-b99b-account-create-4558r" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.787129 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zmljr-config-wbq45"] Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.788656 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.792033 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.807025 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zmljr-config-wbq45"] Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.810115 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b99b-account-create-4558r" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.918916 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-2a92-account-create-lsvp7" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.939926 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3a48cbe-a758-473d-8834-f203eef1ce71" path="/var/lib/kubelet/pods/c3a48cbe-a758-473d-8834-f203eef1ce71/volumes" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.952060 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0e8aa9ff-b96a-499b-b3d6-b744488777a8-var-run\") pod \"ovn-controller-zmljr-config-wbq45\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.953167 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlfsw\" (UniqueName: \"kubernetes.io/projected/0e8aa9ff-b96a-499b-b3d6-b744488777a8-kube-api-access-wlfsw\") pod \"ovn-controller-zmljr-config-wbq45\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.953216 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0e8aa9ff-b96a-499b-b3d6-b744488777a8-additional-scripts\") pod \"ovn-controller-zmljr-config-wbq45\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.953390 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0e8aa9ff-b96a-499b-b3d6-b744488777a8-var-log-ovn\") pod \"ovn-controller-zmljr-config-wbq45\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.953512 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e8aa9ff-b96a-499b-b3d6-b744488777a8-var-run-ovn\") pod \"ovn-controller-zmljr-config-wbq45\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:18 crc kubenswrapper[4991]: I0929 09:58:18.953568 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e8aa9ff-b96a-499b-b3d6-b744488777a8-scripts\") pod \"ovn-controller-zmljr-config-wbq45\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:19 crc kubenswrapper[4991]: I0929 09:58:19.055448 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0e8aa9ff-b96a-499b-b3d6-b744488777a8-var-log-ovn\") pod \"ovn-controller-zmljr-config-wbq45\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:19 crc kubenswrapper[4991]: I0929 09:58:19.055516 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e8aa9ff-b96a-499b-b3d6-b744488777a8-var-run-ovn\") pod \"ovn-controller-zmljr-config-wbq45\" (UID: 
\"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:19 crc kubenswrapper[4991]: I0929 09:58:19.055556 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e8aa9ff-b96a-499b-b3d6-b744488777a8-scripts\") pod \"ovn-controller-zmljr-config-wbq45\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:19 crc kubenswrapper[4991]: I0929 09:58:19.055601 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0e8aa9ff-b96a-499b-b3d6-b744488777a8-var-run\") pod \"ovn-controller-zmljr-config-wbq45\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:19 crc kubenswrapper[4991]: I0929 09:58:19.055624 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlfsw\" (UniqueName: \"kubernetes.io/projected/0e8aa9ff-b96a-499b-b3d6-b744488777a8-kube-api-access-wlfsw\") pod \"ovn-controller-zmljr-config-wbq45\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:19 crc kubenswrapper[4991]: I0929 09:58:19.055658 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0e8aa9ff-b96a-499b-b3d6-b744488777a8-additional-scripts\") pod \"ovn-controller-zmljr-config-wbq45\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:19 crc kubenswrapper[4991]: I0929 09:58:19.055761 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0e8aa9ff-b96a-499b-b3d6-b744488777a8-var-log-ovn\") pod \"ovn-controller-zmljr-config-wbq45\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:19 crc kubenswrapper[4991]: I0929 09:58:19.056453 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0e8aa9ff-b96a-499b-b3d6-b744488777a8-additional-scripts\") pod \"ovn-controller-zmljr-config-wbq45\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:19 crc kubenswrapper[4991]: I0929 09:58:19.056507 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0e8aa9ff-b96a-499b-b3d6-b744488777a8-var-run\") pod \"ovn-controller-zmljr-config-wbq45\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:19 crc kubenswrapper[4991]: I0929 09:58:19.056789 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e8aa9ff-b96a-499b-b3d6-b744488777a8-var-run-ovn\") pod \"ovn-controller-zmljr-config-wbq45\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:19 crc kubenswrapper[4991]: I0929 09:58:19.059862 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e8aa9ff-b96a-499b-b3d6-b744488777a8-scripts\") pod \"ovn-controller-zmljr-config-wbq45\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") 
" pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:19 crc kubenswrapper[4991]: I0929 09:58:19.079573 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlfsw\" (UniqueName: \"kubernetes.io/projected/0e8aa9ff-b96a-499b-b3d6-b744488777a8-kube-api-access-wlfsw\") pod \"ovn-controller-zmljr-config-wbq45\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:19 crc kubenswrapper[4991]: I0929 09:58:19.180993 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:20 crc kubenswrapper[4991]: I0929 09:58:20.747555 4991 generic.go:334] "Generic (PLEG): container finished" podID="c003699b-006e-4a8b-9de5-2a153984ed1a" containerID="ffba6edf713c545cdc6775433b884aa8e32d58d0596252cf029137d6ebb15503" exitCode=0 Sep 29 09:58:20 crc kubenswrapper[4991]: I0929 09:58:20.747657 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c003699b-006e-4a8b-9de5-2a153984ed1a","Type":"ContainerDied","Data":"ffba6edf713c545cdc6775433b884aa8e32d58d0596252cf029137d6ebb15503"} Sep 29 09:58:21 crc kubenswrapper[4991]: I0929 09:58:21.772796 4991 generic.go:334] "Generic (PLEG): container finished" podID="28170ca7-9af8-4fdf-a37d-844dba824147" containerID="f8a3906b9574253615c2a6b3187f97f066249e24a9588c321efe33824851aaab" exitCode=0 Sep 29 09:58:21 crc kubenswrapper[4991]: I0929 09:58:21.773128 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qkb72" event={"ID":"28170ca7-9af8-4fdf-a37d-844dba824147","Type":"ContainerDied","Data":"f8a3906b9574253615c2a6b3187f97f066249e24a9588c321efe33824851aaab"} Sep 29 09:58:26 crc kubenswrapper[4991]: I0929 09:58:26.450251 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qkb72" Sep 29 09:58:26 crc kubenswrapper[4991]: I0929 09:58:26.643573 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr85c\" (UniqueName: \"kubernetes.io/projected/28170ca7-9af8-4fdf-a37d-844dba824147-kube-api-access-qr85c\") pod \"28170ca7-9af8-4fdf-a37d-844dba824147\" (UID: \"28170ca7-9af8-4fdf-a37d-844dba824147\") " Sep 29 09:58:26 crc kubenswrapper[4991]: I0929 09:58:26.643853 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28170ca7-9af8-4fdf-a37d-844dba824147-combined-ca-bundle\") pod \"28170ca7-9af8-4fdf-a37d-844dba824147\" (UID: \"28170ca7-9af8-4fdf-a37d-844dba824147\") " Sep 29 09:58:26 crc kubenswrapper[4991]: I0929 09:58:26.643910 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28170ca7-9af8-4fdf-a37d-844dba824147-config-data\") pod \"28170ca7-9af8-4fdf-a37d-844dba824147\" (UID: \"28170ca7-9af8-4fdf-a37d-844dba824147\") " Sep 29 09:58:26 crc kubenswrapper[4991]: I0929 09:58:26.648195 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28170ca7-9af8-4fdf-a37d-844dba824147-kube-api-access-qr85c" (OuterVolumeSpecName: "kube-api-access-qr85c") pod "28170ca7-9af8-4fdf-a37d-844dba824147" (UID: "28170ca7-9af8-4fdf-a37d-844dba824147"). InnerVolumeSpecName "kube-api-access-qr85c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:26 crc kubenswrapper[4991]: I0929 09:58:26.688388 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28170ca7-9af8-4fdf-a37d-844dba824147-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28170ca7-9af8-4fdf-a37d-844dba824147" (UID: "28170ca7-9af8-4fdf-a37d-844dba824147"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:58:26 crc kubenswrapper[4991]: I0929 09:58:26.733342 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28170ca7-9af8-4fdf-a37d-844dba824147-config-data" (OuterVolumeSpecName: "config-data") pod "28170ca7-9af8-4fdf-a37d-844dba824147" (UID: "28170ca7-9af8-4fdf-a37d-844dba824147"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:58:26 crc kubenswrapper[4991]: I0929 09:58:26.746175 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28170ca7-9af8-4fdf-a37d-844dba824147-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:26 crc kubenswrapper[4991]: I0929 09:58:26.746210 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28170ca7-9af8-4fdf-a37d-844dba824147-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:26 crc kubenswrapper[4991]: I0929 09:58:26.746251 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qr85c\" (UniqueName: \"kubernetes.io/projected/28170ca7-9af8-4fdf-a37d-844dba824147-kube-api-access-qr85c\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:26 crc kubenswrapper[4991]: I0929 09:58:26.837543 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d702cb-530c-441d-b686-f205337a2aea","Type":"ContainerStarted","Data":"11ea96f8dca50a9b4876df24b2f0066203dcd47e4c5077bf358145ced3c5138b"} Sep 29 09:58:26 crc kubenswrapper[4991]: I0929 09:58:26.839789 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c003699b-006e-4a8b-9de5-2a153984ed1a","Type":"ContainerStarted","Data":"effb49140a4887e1d921e7a39648cdd9febd81f4d1f9975a88ef61276a2c6c23"} Sep 29 09:58:26 crc kubenswrapper[4991]: I0929 09:58:26.841642 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qkb72" event={"ID":"28170ca7-9af8-4fdf-a37d-844dba824147","Type":"ContainerDied","Data":"dfdfc6ae3ea37668092968d637469a664458c5407890616fd74722f6f7c8966f"} Sep 29 09:58:26 crc kubenswrapper[4991]: I0929 09:58:26.841665 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfdfc6ae3ea37668092968d637469a664458c5407890616fd74722f6f7c8966f" Sep 29 09:58:26 crc kubenswrapper[4991]: I0929 09:58:26.841710 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qkb72" Sep 29 09:58:26 crc kubenswrapper[4991]: I0929 09:58:26.859494 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-2a92-account-create-lsvp7"] Sep 29 09:58:26 crc kubenswrapper[4991]: W0929 09:58:26.876131 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fb03f7e_0a1a_42ff_88ef_d0d7c3ba61b6.slice/crio-798362d3d1d3bcedcd21122f353ca522d2b278127cd205c17bf832bb84040c9b WatchSource:0}: Error finding container 798362d3d1d3bcedcd21122f353ca522d2b278127cd205c17bf832bb84040c9b: Status 404 returned error can't find the container with id 798362d3d1d3bcedcd21122f353ca522d2b278127cd205c17bf832bb84040c9b Sep 29 09:58:26 crc kubenswrapper[4991]: I0929 09:58:26.876561 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f564-account-create-kkgjq"] Sep 29 09:58:26 crc kubenswrapper[4991]: W0929 09:58:26.890515 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod388a587e_7633_4b2d_b4a4_4a083e06afbf.slice/crio-20f97f3b32740242f2539c3100f0f898cf6b04ca2c484715136562beb71e79ee WatchSource:0}: Error finding container 20f97f3b32740242f2539c3100f0f898cf6b04ca2c484715136562beb71e79ee: Status 404 returned error can't find the container with id 20f97f3b32740242f2539c3100f0f898cf6b04ca2c484715136562beb71e79ee Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.112254 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-1446-account-create-25hvn"] Sep 29 09:58:27 crc kubenswrapper[4991]: W0929 09:58:27.128311 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43435cee_d9a2_4517_94c4_0e49cdd536a2.slice/crio-4779ef1d23605a481d7b5544440c8cb9dadf01bb67b51248396c6f5e0acbfff3 WatchSource:0}: Error finding container 4779ef1d23605a481d7b5544440c8cb9dadf01bb67b51248396c6f5e0acbfff3: Status 404 returned error can't find the container with id 4779ef1d23605a481d7b5544440c8cb9dadf01bb67b51248396c6f5e0acbfff3 Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.128912 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zmljr-config-wbq45"] Sep 29 09:58:27 crc kubenswrapper[4991]: W0929 09:58:27.138358 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e8aa9ff_b96a_499b_b3d6_b744488777a8.slice/crio-46c5d26f44b31892c0211528b991d38c1c7e13f26e39ca9b059ee2f240702cb2 WatchSource:0}: Error finding container 46c5d26f44b31892c0211528b991d38c1c7e13f26e39ca9b059ee2f240702cb2: Status 404 returned error can't find the container with id 46c5d26f44b31892c0211528b991d38c1c7e13f26e39ca9b059ee2f240702cb2 Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.325019 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b99b-account-create-4558r"] Sep 29 09:58:27 crc kubenswrapper[4991]: W0929 09:58:27.330039 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a782a05_3455_4ac2_973c_34e78a249789.slice/crio-a8c6fe8c8ca1089b09747fceddd1a5017a33de253b8b078fa96eeebc553c1749 WatchSource:0}: Error finding container a8c6fe8c8ca1089b09747fceddd1a5017a33de253b8b078fa96eeebc553c1749: Status 404 returned error can't find the container with 
id a8c6fe8c8ca1089b09747fceddd1a5017a33de253b8b078fa96eeebc553c1749 Sep 29 09:58:27 crc kubenswrapper[4991]: W0929 09:58:27.331849 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc29d8c0_2bb0_40fe_b3f3_98b30719dfed.slice/crio-d7d253653d4ba900bb42e7e2819f9a6ff6df722f45dabedcfc46486381e5a3cd WatchSource:0}: Error finding container d7d253653d4ba900bb42e7e2819f9a6ff6df722f45dabedcfc46486381e5a3cd: Status 404 returned error can't find the container with id d7d253653d4ba900bb42e7e2819f9a6ff6df722f45dabedcfc46486381e5a3cd Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.350533 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-bf69-account-create-nrt9x"] Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.751190 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-w6zvr"] Sep 29 09:58:27 crc kubenswrapper[4991]: E0929 09:58:27.751779 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28170ca7-9af8-4fdf-a37d-844dba824147" containerName="keystone-db-sync" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.751798 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="28170ca7-9af8-4fdf-a37d-844dba824147" containerName="keystone-db-sync" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.752130 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="28170ca7-9af8-4fdf-a37d-844dba824147" containerName="keystone-db-sync" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.753553 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.760983 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-r95jv"] Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.763153 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-r95jv" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.771390 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.771611 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.771723 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rf9rz" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.771902 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.775806 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-w6zvr"] Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.796116 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-fernet-keys\") pod \"keystone-bootstrap-r95jv\" (UID: \"dbfd13d2-73e8-4214-8fe8-7642bd42e1da\") " pod="openstack/keystone-bootstrap-r95jv" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.796459 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7wn2\" (UniqueName: \"kubernetes.io/projected/113be622-18ac-4d3e-b67f-51e928edcd73-kube-api-access-f7wn2\") pod \"dnsmasq-dns-5c9d85d47c-w6zvr\" (UID: \"113be622-18ac-4d3e-b67f-51e928edcd73\") " pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.796515 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-scripts\") pod \"keystone-bootstrap-r95jv\" (UID: \"dbfd13d2-73e8-4214-8fe8-7642bd42e1da\") " pod="openstack/keystone-bootstrap-r95jv" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.796585 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-w6zvr\" (UID: \"113be622-18ac-4d3e-b67f-51e928edcd73\") " pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.796609 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-config\") pod \"dnsmasq-dns-5c9d85d47c-w6zvr\" (UID: \"113be622-18ac-4d3e-b67f-51e928edcd73\") " pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.796681 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-combined-ca-bundle\") pod \"keystone-bootstrap-r95jv\" (UID: \"dbfd13d2-73e8-4214-8fe8-7642bd42e1da\") " pod="openstack/keystone-bootstrap-r95jv" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.796859 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dvvg\" (UniqueName: 
\"kubernetes.io/projected/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-kube-api-access-4dvvg\") pod \"keystone-bootstrap-r95jv\" (UID: \"dbfd13d2-73e8-4214-8fe8-7642bd42e1da\") " pod="openstack/keystone-bootstrap-r95jv" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.796925 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-config-data\") pod \"keystone-bootstrap-r95jv\" (UID: \"dbfd13d2-73e8-4214-8fe8-7642bd42e1da\") " pod="openstack/keystone-bootstrap-r95jv" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.797028 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-w6zvr\" (UID: \"113be622-18ac-4d3e-b67f-51e928edcd73\") " pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.797066 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-credential-keys\") pod \"keystone-bootstrap-r95jv\" (UID: \"dbfd13d2-73e8-4214-8fe8-7642bd42e1da\") " pod="openstack/keystone-bootstrap-r95jv" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.797174 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-w6zvr\" (UID: \"113be622-18ac-4d3e-b67f-51e928edcd73\") " pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.815745 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r95jv"] Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.869296 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d702cb-530c-441d-b686-f205337a2aea","Type":"ContainerStarted","Data":"0e96b6e3e6038a31833680ce00b6d7421f3f33635142213a641ba956c1de6a97"} Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.869342 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d702cb-530c-441d-b686-f205337a2aea","Type":"ContainerStarted","Data":"c3ccef07f135fe50a1357ae172e58068fc3695bc7160c9034015e7bf3d50b868"} Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.872856 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1446-account-create-25hvn" event={"ID":"43435cee-d9a2-4517-94c4-0e49cdd536a2","Type":"ContainerStarted","Data":"4779ef1d23605a481d7b5544440c8cb9dadf01bb67b51248396c6f5e0acbfff3"} Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.878427 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zmljr-config-wbq45" event={"ID":"0e8aa9ff-b96a-499b-b3d6-b744488777a8","Type":"ContainerStarted","Data":"46c5d26f44b31892c0211528b991d38c1c7e13f26e39ca9b059ee2f240702cb2"} Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.881393 4991 generic.go:334] "Generic (PLEG): container finished" podID="9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6" containerID="e0f49985eaf93321f99aaaf779855be0dda5e2ca134d967bdc85d53051fcfa1a" exitCode=0 Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.881449 4991 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-2a92-account-create-lsvp7" event={"ID":"9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6","Type":"ContainerDied","Data":"e0f49985eaf93321f99aaaf779855be0dda5e2ca134d967bdc85d53051fcfa1a"} Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.881470 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-2a92-account-create-lsvp7" event={"ID":"9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6","Type":"ContainerStarted","Data":"798362d3d1d3bcedcd21122f353ca522d2b278127cd205c17bf832bb84040c9b"} Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.885977 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-bf69-account-create-nrt9x" event={"ID":"fc29d8c0-2bb0-40fe-b3f3-98b30719dfed","Type":"ContainerStarted","Data":"d7d253653d4ba900bb42e7e2819f9a6ff6df722f45dabedcfc46486381e5a3cd"} Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.912518 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-w6zvr\" (UID: \"113be622-18ac-4d3e-b67f-51e928edcd73\") " pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.913628 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-fernet-keys\") pod \"keystone-bootstrap-r95jv\" (UID: \"dbfd13d2-73e8-4214-8fe8-7642bd42e1da\") " pod="openstack/keystone-bootstrap-r95jv" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.914393 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7wn2\" (UniqueName: \"kubernetes.io/projected/113be622-18ac-4d3e-b67f-51e928edcd73-kube-api-access-f7wn2\") pod \"dnsmasq-dns-5c9d85d47c-w6zvr\" (UID: \"113be622-18ac-4d3e-b67f-51e928edcd73\") " pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.914491 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-scripts\") pod \"keystone-bootstrap-r95jv\" (UID: \"dbfd13d2-73e8-4214-8fe8-7642bd42e1da\") " pod="openstack/keystone-bootstrap-r95jv" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.914600 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-w6zvr\" (UID: \"113be622-18ac-4d3e-b67f-51e928edcd73\") " pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.914668 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-config\") pod \"dnsmasq-dns-5c9d85d47c-w6zvr\" (UID: \"113be622-18ac-4d3e-b67f-51e928edcd73\") " pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.914800 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-combined-ca-bundle\") pod \"keystone-bootstrap-r95jv\" (UID: \"dbfd13d2-73e8-4214-8fe8-7642bd42e1da\") " 
pod="openstack/keystone-bootstrap-r95jv" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.914907 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dvvg\" (UniqueName: \"kubernetes.io/projected/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-kube-api-access-4dvvg\") pod \"keystone-bootstrap-r95jv\" (UID: \"dbfd13d2-73e8-4214-8fe8-7642bd42e1da\") " pod="openstack/keystone-bootstrap-r95jv" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.915004 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-config-data\") pod \"keystone-bootstrap-r95jv\" (UID: \"dbfd13d2-73e8-4214-8fe8-7642bd42e1da\") " pod="openstack/keystone-bootstrap-r95jv" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.915102 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-w6zvr\" (UID: \"113be622-18ac-4d3e-b67f-51e928edcd73\") " pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.915172 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-credential-keys\") pod \"keystone-bootstrap-r95jv\" (UID: \"dbfd13d2-73e8-4214-8fe8-7642bd42e1da\") " pod="openstack/keystone-bootstrap-r95jv" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.913767 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b99b-account-create-4558r" event={"ID":"7a782a05-3455-4ac2-973c-34e78a249789","Type":"ContainerStarted","Data":"006f5528d4ac13819ae67e1f09c4da44606e8a588ce040609b94677f72cf4b9e"} Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.918328 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b99b-account-create-4558r" event={"ID":"7a782a05-3455-4ac2-973c-34e78a249789","Type":"ContainerStarted","Data":"a8c6fe8c8ca1089b09747fceddd1a5017a33de253b8b078fa96eeebc553c1749"} Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.918460 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-config\") pod \"dnsmasq-dns-5c9d85d47c-w6zvr\" (UID: \"113be622-18ac-4d3e-b67f-51e928edcd73\") " pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.913553 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-w6zvr\" (UID: \"113be622-18ac-4d3e-b67f-51e928edcd73\") " pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.919303 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-w6zvr\" (UID: \"113be622-18ac-4d3e-b67f-51e928edcd73\") " pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.930553 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-w6zvr\" (UID: \"113be622-18ac-4d3e-b67f-51e928edcd73\") " pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.930873 4991 generic.go:334] "Generic (PLEG): container finished" podID="388a587e-7633-4b2d-b4a4-4a083e06afbf" containerID="0e13299e6c9eb2dfe9fdce3abff0f771ca5df59f5e805467f9c8890c75118e0b" exitCode=0 Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.931625 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f564-account-create-kkgjq" event={"ID":"388a587e-7633-4b2d-b4a4-4a083e06afbf","Type":"ContainerDied","Data":"0e13299e6c9eb2dfe9fdce3abff0f771ca5df59f5e805467f9c8890c75118e0b"} Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.931724 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f564-account-create-kkgjq" event={"ID":"388a587e-7633-4b2d-b4a4-4a083e06afbf","Type":"ContainerStarted","Data":"20f97f3b32740242f2539c3100f0f898cf6b04ca2c484715136562beb71e79ee"} Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.971762 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7wn2\" (UniqueName: \"kubernetes.io/projected/113be622-18ac-4d3e-b67f-51e928edcd73-kube-api-access-f7wn2\") pod \"dnsmasq-dns-5c9d85d47c-w6zvr\" (UID: \"113be622-18ac-4d3e-b67f-51e928edcd73\") " pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.983788 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b99b-account-create-4558r" podStartSLOduration=9.983766024 podStartE2EDuration="9.983766024s" podCreationTimestamp="2025-09-29 09:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:58:27.972338353 +0000 UTC m=+1243.828266381" watchObservedRunningTime="2025-09-29 09:58:27.983766024 +0000 UTC m=+1243.839694052" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.991975 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-credential-keys\") pod \"keystone-bootstrap-r95jv\" (UID: \"dbfd13d2-73e8-4214-8fe8-7642bd42e1da\") " pod="openstack/keystone-bootstrap-r95jv" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.996527 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-fernet-keys\") pod \"keystone-bootstrap-r95jv\" (UID: \"dbfd13d2-73e8-4214-8fe8-7642bd42e1da\") " pod="openstack/keystone-bootstrap-r95jv" Sep 29 09:58:27 crc kubenswrapper[4991]: I0929 09:58:27.997226 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dvvg\" (UniqueName: \"kubernetes.io/projected/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-kube-api-access-4dvvg\") pod \"keystone-bootstrap-r95jv\" (UID: \"dbfd13d2-73e8-4214-8fe8-7642bd42e1da\") " pod="openstack/keystone-bootstrap-r95jv" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.005833 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-config-data\") pod \"keystone-bootstrap-r95jv\" (UID: \"dbfd13d2-73e8-4214-8fe8-7642bd42e1da\") " 
pod="openstack/keystone-bootstrap-r95jv" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.013778 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-combined-ca-bundle\") pod \"keystone-bootstrap-r95jv\" (UID: \"dbfd13d2-73e8-4214-8fe8-7642bd42e1da\") " pod="openstack/keystone-bootstrap-r95jv" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.013876 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-scripts\") pod \"keystone-bootstrap-r95jv\" (UID: \"dbfd13d2-73e8-4214-8fe8-7642bd42e1da\") " pod="openstack/keystone-bootstrap-r95jv" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.118203 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.125306 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-w6zvr"] Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.154611 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r95jv" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.216570 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-fqngv"] Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.219115 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.238106 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8559s"] Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.238549 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-fqngv\" (UID: \"a6d53e88-b87d-4a50-820d-373e638d62b1\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.238648 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-config\") pod \"dnsmasq-dns-6ffb94d8ff-fqngv\" (UID: \"a6d53e88-b87d-4a50-820d-373e638d62b1\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.238671 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-fqngv\" (UID: \"a6d53e88-b87d-4a50-820d-373e638d62b1\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.238713 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-fqngv\" (UID: \"a6d53e88-b87d-4a50-820d-373e638d62b1\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.238768 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-rkwq5\" (UniqueName: \"kubernetes.io/projected/a6d53e88-b87d-4a50-820d-373e638d62b1-kube-api-access-rkwq5\") pod \"dnsmasq-dns-6ffb94d8ff-fqngv\" (UID: \"a6d53e88-b87d-4a50-820d-373e638d62b1\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.239393 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8559s" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.252374 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.252622 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-kcnjd" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.252737 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.257015 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8559s"] Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.303192 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-fqngv"] Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.342030 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-fqngv\" (UID: \"a6d53e88-b87d-4a50-820d-373e638d62b1\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.342105 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-logs\") pod \"placement-db-sync-8559s\" (UID: \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\") " pod="openstack/placement-db-sync-8559s" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.342140 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-config\") pod \"dnsmasq-dns-6ffb94d8ff-fqngv\" (UID: \"a6d53e88-b87d-4a50-820d-373e638d62b1\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.342157 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-fqngv\" (UID: \"a6d53e88-b87d-4a50-820d-373e638d62b1\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.342181 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-combined-ca-bundle\") pod \"placement-db-sync-8559s\" (UID: \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\") " pod="openstack/placement-db-sync-8559s" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.342197 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmktv\" (UniqueName: \"kubernetes.io/projected/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-kube-api-access-nmktv\") pod \"placement-db-sync-8559s\" (UID: 
\"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\") " pod="openstack/placement-db-sync-8559s" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.342225 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-fqngv\" (UID: \"a6d53e88-b87d-4a50-820d-373e638d62b1\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.342264 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-config-data\") pod \"placement-db-sync-8559s\" (UID: \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\") " pod="openstack/placement-db-sync-8559s" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.342294 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkwq5\" (UniqueName: \"kubernetes.io/projected/a6d53e88-b87d-4a50-820d-373e638d62b1-kube-api-access-rkwq5\") pod \"dnsmasq-dns-6ffb94d8ff-fqngv\" (UID: \"a6d53e88-b87d-4a50-820d-373e638d62b1\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.342340 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-scripts\") pod \"placement-db-sync-8559s\" (UID: \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\") " pod="openstack/placement-db-sync-8559s" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.343387 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-fqngv\" (UID: \"a6d53e88-b87d-4a50-820d-373e638d62b1\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.343917 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-fqngv\" (UID: \"a6d53e88-b87d-4a50-820d-373e638d62b1\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.344117 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-config\") pod \"dnsmasq-dns-6ffb94d8ff-fqngv\" (UID: \"a6d53e88-b87d-4a50-820d-373e638d62b1\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.344398 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-fqngv\" (UID: \"a6d53e88-b87d-4a50-820d-373e638d62b1\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.394723 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkwq5\" (UniqueName: \"kubernetes.io/projected/a6d53e88-b87d-4a50-820d-373e638d62b1-kube-api-access-rkwq5\") pod \"dnsmasq-dns-6ffb94d8ff-fqngv\" (UID: \"a6d53e88-b87d-4a50-820d-373e638d62b1\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" Sep 29 09:58:28 crc 
kubenswrapper[4991]: I0929 09:58:28.449247 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-config-data\") pod \"placement-db-sync-8559s\" (UID: \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\") " pod="openstack/placement-db-sync-8559s" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.449329 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-scripts\") pod \"placement-db-sync-8559s\" (UID: \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\") " pod="openstack/placement-db-sync-8559s" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.449408 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-logs\") pod \"placement-db-sync-8559s\" (UID: \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\") " pod="openstack/placement-db-sync-8559s" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.449462 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-combined-ca-bundle\") pod \"placement-db-sync-8559s\" (UID: \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\") " pod="openstack/placement-db-sync-8559s" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.449492 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmktv\" (UniqueName: \"kubernetes.io/projected/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-kube-api-access-nmktv\") pod \"placement-db-sync-8559s\" (UID: \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\") " pod="openstack/placement-db-sync-8559s" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.450218 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-logs\") pod \"placement-db-sync-8559s\" (UID: \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\") " pod="openstack/placement-db-sync-8559s" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.459646 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-scripts\") pod \"placement-db-sync-8559s\" (UID: \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\") " pod="openstack/placement-db-sync-8559s" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.465619 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-config-data\") pod \"placement-db-sync-8559s\" (UID: \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\") " pod="openstack/placement-db-sync-8559s" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.471891 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-combined-ca-bundle\") pod \"placement-db-sync-8559s\" (UID: \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\") " pod="openstack/placement-db-sync-8559s" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.487216 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmktv\" (UniqueName: \"kubernetes.io/projected/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-kube-api-access-nmktv\") 
pod \"placement-db-sync-8559s\" (UID: \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\") " pod="openstack/placement-db-sync-8559s" Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.949482 4991 generic.go:334] "Generic (PLEG): container finished" podID="43435cee-d9a2-4517-94c4-0e49cdd536a2" containerID="9bc063df50551ab284545468df59b2cbcac7f5606775bc48de48abbde514d8d1" exitCode=0 Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.950863 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1446-account-create-25hvn" event={"ID":"43435cee-d9a2-4517-94c4-0e49cdd536a2","Type":"ContainerDied","Data":"9bc063df50551ab284545468df59b2cbcac7f5606775bc48de48abbde514d8d1"} Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.955444 4991 generic.go:334] "Generic (PLEG): container finished" podID="0e8aa9ff-b96a-499b-b3d6-b744488777a8" containerID="6ac03472c3c649dbf55e36728f7b38a9ed9ee7b7696981076bdfe1e0f051aeca" exitCode=0 Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.955773 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zmljr-config-wbq45" event={"ID":"0e8aa9ff-b96a-499b-b3d6-b744488777a8","Type":"ContainerDied","Data":"6ac03472c3c649dbf55e36728f7b38a9ed9ee7b7696981076bdfe1e0f051aeca"} Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.965497 4991 generic.go:334] "Generic (PLEG): container finished" podID="fc29d8c0-2bb0-40fe-b3f3-98b30719dfed" containerID="ae1305ea9fd9cfcca2b56a1ff4328353f52820080503f6f36201a13318f932ee" exitCode=0 Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.965923 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-bf69-account-create-nrt9x" event={"ID":"fc29d8c0-2bb0-40fe-b3f3-98b30719dfed","Type":"ContainerDied","Data":"ae1305ea9fd9cfcca2b56a1ff4328353f52820080503f6f36201a13318f932ee"} Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.976136 4991 generic.go:334] "Generic (PLEG): container finished" podID="7a782a05-3455-4ac2-973c-34e78a249789" containerID="006f5528d4ac13819ae67e1f09c4da44606e8a588ce040609b94677f72cf4b9e" exitCode=0 Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.976219 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b99b-account-create-4558r" event={"ID":"7a782a05-3455-4ac2-973c-34e78a249789","Type":"ContainerDied","Data":"006f5528d4ac13819ae67e1f09c4da44606e8a588ce040609b94677f72cf4b9e"} Sep 29 09:58:28 crc kubenswrapper[4991]: I0929 09:58:28.987284 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" Sep 29 09:58:29 crc kubenswrapper[4991]: I0929 09:58:29.004873 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8559s" Sep 29 09:58:29 crc kubenswrapper[4991]: I0929 09:58:29.019116 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d702cb-530c-441d-b686-f205337a2aea","Type":"ContainerStarted","Data":"18cf1e64497023551fa2806cfc4e34a10dd8554ed352129639c44ee534058be0"} Sep 29 09:58:29 crc kubenswrapper[4991]: I0929 09:58:29.025460 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9kck9" event={"ID":"2031f120-9626-495a-b555-e4e960d2e4b1","Type":"ContainerStarted","Data":"b05a1d18b9a32761c08f0e72ca473e560245447c5706e0a4a0aeba5832bcdde2"} Sep 29 09:58:29 crc kubenswrapper[4991]: I0929 09:58:29.083721 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-w6zvr"] Sep 29 09:58:29 crc kubenswrapper[4991]: I0929 09:58:29.104366 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-9kck9" podStartSLOduration=4.576999247 podStartE2EDuration="19.104345566s" podCreationTimestamp="2025-09-29 09:58:10 +0000 UTC" firstStartedPulling="2025-09-29 09:58:12.108876057 +0000 UTC m=+1227.964804085" lastFinishedPulling="2025-09-29 09:58:26.636222376 +0000 UTC m=+1242.492150404" observedRunningTime="2025-09-29 09:58:29.071262805 +0000 UTC m=+1244.927190853" watchObservedRunningTime="2025-09-29 09:58:29.104345566 +0000 UTC m=+1244.960273594" Sep 29 09:58:29 crc kubenswrapper[4991]: I0929 09:58:29.175096 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r95jv"] Sep 29 09:58:30 crc kubenswrapper[4991]: I0929 09:58:30.043323 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d702cb-530c-441d-b686-f205337a2aea","Type":"ContainerStarted","Data":"b19d48e1c733f38f2a185170a810fd93e3bd32473eee6017eeb680eb194af5ed"} Sep 29 09:58:30 crc kubenswrapper[4991]: I0929 09:58:30.045215 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r95jv" event={"ID":"dbfd13d2-73e8-4214-8fe8-7642bd42e1da","Type":"ContainerStarted","Data":"11081c9ad6c6b54a3974b3ea7e917514469be800290d83344e24e4c44a11c215"} Sep 29 09:58:30 crc kubenswrapper[4991]: I0929 09:58:30.048792 4991 generic.go:334] "Generic (PLEG): container finished" podID="113be622-18ac-4d3e-b67f-51e928edcd73" containerID="d064e0e3395bf94ae85af49842b53cbc84ebde4c1ca3c18ff036d9f11875c461" exitCode=0 Sep 29 09:58:30 crc kubenswrapper[4991]: I0929 09:58:30.048845 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" event={"ID":"113be622-18ac-4d3e-b67f-51e928edcd73","Type":"ContainerDied","Data":"d064e0e3395bf94ae85af49842b53cbc84ebde4c1ca3c18ff036d9f11875c461"} Sep 29 09:58:30 crc kubenswrapper[4991]: I0929 09:58:30.048902 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" event={"ID":"113be622-18ac-4d3e-b67f-51e928edcd73","Type":"ContainerStarted","Data":"f26d689324e4309ba6e9fc05701e76f55f3a2a63bfc1c9f442d6486078a0cd12"} Sep 29 09:58:30 crc kubenswrapper[4991]: I0929 09:58:30.226785 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8559s"] Sep 29 09:58:30 crc kubenswrapper[4991]: I0929 09:58:30.232555 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-fqngv"] Sep 29 09:58:30 crc kubenswrapper[4991]: I0929 09:58:30.343588 4991 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/barbican-f564-account-create-kkgjq" Sep 29 09:58:30 crc kubenswrapper[4991]: I0929 09:58:30.345089 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-2a92-account-create-lsvp7" Sep 29 09:58:30 crc kubenswrapper[4991]: I0929 09:58:30.515891 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2glfk\" (UniqueName: \"kubernetes.io/projected/388a587e-7633-4b2d-b4a4-4a083e06afbf-kube-api-access-2glfk\") pod \"388a587e-7633-4b2d-b4a4-4a083e06afbf\" (UID: \"388a587e-7633-4b2d-b4a4-4a083e06afbf\") " Sep 29 09:58:30 crc kubenswrapper[4991]: I0929 09:58:30.517132 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb82s\" (UniqueName: \"kubernetes.io/projected/9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6-kube-api-access-bb82s\") pod \"9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6\" (UID: \"9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6\") " Sep 29 09:58:30 crc kubenswrapper[4991]: I0929 09:58:30.522306 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/388a587e-7633-4b2d-b4a4-4a083e06afbf-kube-api-access-2glfk" (OuterVolumeSpecName: "kube-api-access-2glfk") pod "388a587e-7633-4b2d-b4a4-4a083e06afbf" (UID: "388a587e-7633-4b2d-b4a4-4a083e06afbf"). InnerVolumeSpecName "kube-api-access-2glfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:30 crc kubenswrapper[4991]: I0929 09:58:30.522558 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6-kube-api-access-bb82s" (OuterVolumeSpecName: "kube-api-access-bb82s") pod "9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6" (UID: "9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6"). InnerVolumeSpecName "kube-api-access-bb82s". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:30 crc kubenswrapper[4991]: I0929 09:58:30.619802 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2glfk\" (UniqueName: \"kubernetes.io/projected/388a587e-7633-4b2d-b4a4-4a083e06afbf-kube-api-access-2glfk\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:30 crc kubenswrapper[4991]: I0929 09:58:30.620182 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb82s\" (UniqueName: \"kubernetes.io/projected/9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6-kube-api-access-bb82s\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:30 crc kubenswrapper[4991]: I0929 09:58:30.889051 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.059020 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b99b-account-create-4558r" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.067344 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrg7r\" (UniqueName: \"kubernetes.io/projected/7a782a05-3455-4ac2-973c-34e78a249789-kube-api-access-xrg7r\") pod \"7a782a05-3455-4ac2-973c-34e78a249789\" (UID: \"7a782a05-3455-4ac2-973c-34e78a249789\") " Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.067504 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlfsw\" (UniqueName: \"kubernetes.io/projected/0e8aa9ff-b96a-499b-b3d6-b744488777a8-kube-api-access-wlfsw\") pod \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.067571 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e8aa9ff-b96a-499b-b3d6-b744488777a8-scripts\") pod \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.067613 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0e8aa9ff-b96a-499b-b3d6-b744488777a8-additional-scripts\") pod \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.067745 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0e8aa9ff-b96a-499b-b3d6-b744488777a8-var-log-ovn\") pod \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.067897 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0e8aa9ff-b96a-499b-b3d6-b744488777a8-var-run\") pod \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.068073 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e8aa9ff-b96a-499b-b3d6-b744488777a8-var-run-ovn\") pod \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\" (UID: \"0e8aa9ff-b96a-499b-b3d6-b744488777a8\") " Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.068892 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e8aa9ff-b96a-499b-b3d6-b744488777a8-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0e8aa9ff-b96a-499b-b3d6-b744488777a8" (UID: "0e8aa9ff-b96a-499b-b3d6-b744488777a8"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.071534 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e8aa9ff-b96a-499b-b3d6-b744488777a8-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0e8aa9ff-b96a-499b-b3d6-b744488777a8" (UID: "0e8aa9ff-b96a-499b-b3d6-b744488777a8"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.071600 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e8aa9ff-b96a-499b-b3d6-b744488777a8-var-run" (OuterVolumeSpecName: "var-run") pod "0e8aa9ff-b96a-499b-b3d6-b744488777a8" (UID: "0e8aa9ff-b96a-499b-b3d6-b744488777a8"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.072211 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e8aa9ff-b96a-499b-b3d6-b744488777a8-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0e8aa9ff-b96a-499b-b3d6-b744488777a8" (UID: "0e8aa9ff-b96a-499b-b3d6-b744488777a8"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.073095 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e8aa9ff-b96a-499b-b3d6-b744488777a8-scripts" (OuterVolumeSpecName: "scripts") pod "0e8aa9ff-b96a-499b-b3d6-b744488777a8" (UID: "0e8aa9ff-b96a-499b-b3d6-b744488777a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.080745 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e8aa9ff-b96a-499b-b3d6-b744488777a8-kube-api-access-wlfsw" (OuterVolumeSpecName: "kube-api-access-wlfsw") pod "0e8aa9ff-b96a-499b-b3d6-b744488777a8" (UID: "0e8aa9ff-b96a-499b-b3d6-b744488777a8"). InnerVolumeSpecName "kube-api-access-wlfsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.085039 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a782a05-3455-4ac2-973c-34e78a249789-kube-api-access-xrg7r" (OuterVolumeSpecName: "kube-api-access-xrg7r") pod "7a782a05-3455-4ac2-973c-34e78a249789" (UID: "7a782a05-3455-4ac2-973c-34e78a249789"). InnerVolumeSpecName "kube-api-access-xrg7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.088588 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.149608 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-bf69-account-create-nrt9x" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.151124 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r95jv" event={"ID":"dbfd13d2-73e8-4214-8fe8-7642bd42e1da","Type":"ContainerStarted","Data":"c7440cb8422898707b454e9abd438855ac4ccc44df905479e8783b5603737899"} Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.151331 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1446-account-create-25hvn" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.155794 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b99b-account-create-4558r" event={"ID":"7a782a05-3455-4ac2-973c-34e78a249789","Type":"ContainerDied","Data":"a8c6fe8c8ca1089b09747fceddd1a5017a33de253b8b078fa96eeebc553c1749"} Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.155863 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8c6fe8c8ca1089b09747fceddd1a5017a33de253b8b078fa96eeebc553c1749" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.155974 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b99b-account-create-4558r" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.166158 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c003699b-006e-4a8b-9de5-2a153984ed1a","Type":"ContainerStarted","Data":"16da623df5ce43c4385b0bb6febf2e177887dcf5a0ff943ac68de52e468034bc"} Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.166196 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c003699b-006e-4a8b-9de5-2a153984ed1a","Type":"ContainerStarted","Data":"53f80fa9ab405cb8bbd8ee731bf9c3739e528b0ed2873100e940c3e123911fc4"} Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.170301 4991 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0e8aa9ff-b96a-499b-b3d6-b744488777a8-var-log-ovn\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.170338 4991 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0e8aa9ff-b96a-499b-b3d6-b744488777a8-var-run\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.170351 4991 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e8aa9ff-b96a-499b-b3d6-b744488777a8-var-run-ovn\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.170364 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrg7r\" (UniqueName: \"kubernetes.io/projected/7a782a05-3455-4ac2-973c-34e78a249789-kube-api-access-xrg7r\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.170376 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlfsw\" (UniqueName: \"kubernetes.io/projected/0e8aa9ff-b96a-499b-b3d6-b744488777a8-kube-api-access-wlfsw\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.170386 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e8aa9ff-b96a-499b-b3d6-b744488777a8-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.170398 4991 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0e8aa9ff-b96a-499b-b3d6-b744488777a8-additional-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.188499 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-f564-account-create-kkgjq" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.188606 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f564-account-create-kkgjq" event={"ID":"388a587e-7633-4b2d-b4a4-4a083e06afbf","Type":"ContainerDied","Data":"20f97f3b32740242f2539c3100f0f898cf6b04ca2c484715136562beb71e79ee"} Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.188631 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20f97f3b32740242f2539c3100f0f898cf6b04ca2c484715136562beb71e79ee" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.212576 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"81d702cb-530c-441d-b686-f205337a2aea","Type":"ContainerStarted","Data":"dda3630b0169c17b8552cc6b8436a5a027601d0b8e28bcdf157c6d3043ee6efd"} Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.217863 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8559s" event={"ID":"19dc38ec-0933-47ed-8c1e-613d1d55d3d5","Type":"ContainerStarted","Data":"02dfcf086fcd9ac404a764600d72730aa4102709e719f9daa5d7c1f14b86ee11"} Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.229619 4991 generic.go:334] "Generic (PLEG): container finished" podID="a6d53e88-b87d-4a50-820d-373e638d62b1" containerID="fe65c834f8d095d1e896bca50a7ea0d186fe92c3f465ea5a46254207b34025fd" exitCode=0 Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.229696 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" event={"ID":"a6d53e88-b87d-4a50-820d-373e638d62b1","Type":"ContainerDied","Data":"fe65c834f8d095d1e896bca50a7ea0d186fe92c3f465ea5a46254207b34025fd"} Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.229736 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" event={"ID":"a6d53e88-b87d-4a50-820d-373e638d62b1","Type":"ContainerStarted","Data":"44e354f037c947c2f559be693282c48d6439bed8553581c3c29eb66feb6be5a3"} Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.235747 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-r95jv" podStartSLOduration=4.2357298740000004 podStartE2EDuration="4.235729874s" podCreationTimestamp="2025-09-29 09:58:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:58:31.232695714 +0000 UTC m=+1247.088623732" watchObservedRunningTime="2025-09-29 09:58:31.235729874 +0000 UTC m=+1247.091657902" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.247980 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zmljr-config-wbq45" event={"ID":"0e8aa9ff-b96a-499b-b3d6-b744488777a8","Type":"ContainerDied","Data":"46c5d26f44b31892c0211528b991d38c1c7e13f26e39ca9b059ee2f240702cb2"} Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.248035 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46c5d26f44b31892c0211528b991d38c1c7e13f26e39ca9b059ee2f240702cb2" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.247995 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zmljr-config-wbq45" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.249862 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-2a92-account-create-lsvp7" event={"ID":"9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6","Type":"ContainerDied","Data":"798362d3d1d3bcedcd21122f353ca522d2b278127cd205c17bf832bb84040c9b"} Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.249896 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="798362d3d1d3bcedcd21122f353ca522d2b278127cd205c17bf832bb84040c9b" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.249942 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-2a92-account-create-lsvp7" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.278999 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-ovsdbserver-sb\") pod \"113be622-18ac-4d3e-b67f-51e928edcd73\" (UID: \"113be622-18ac-4d3e-b67f-51e928edcd73\") " Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.279040 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-config\") pod \"113be622-18ac-4d3e-b67f-51e928edcd73\" (UID: \"113be622-18ac-4d3e-b67f-51e928edcd73\") " Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.279080 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-dns-svc\") pod \"113be622-18ac-4d3e-b67f-51e928edcd73\" (UID: \"113be622-18ac-4d3e-b67f-51e928edcd73\") " Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.279147 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-ovsdbserver-nb\") pod \"113be622-18ac-4d3e-b67f-51e928edcd73\" (UID: \"113be622-18ac-4d3e-b67f-51e928edcd73\") " Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.279200 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8m6m\" (UniqueName: \"kubernetes.io/projected/fc29d8c0-2bb0-40fe-b3f3-98b30719dfed-kube-api-access-l8m6m\") pod \"fc29d8c0-2bb0-40fe-b3f3-98b30719dfed\" (UID: \"fc29d8c0-2bb0-40fe-b3f3-98b30719dfed\") " Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.279291 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7wn2\" (UniqueName: \"kubernetes.io/projected/113be622-18ac-4d3e-b67f-51e928edcd73-kube-api-access-f7wn2\") pod \"113be622-18ac-4d3e-b67f-51e928edcd73\" (UID: \"113be622-18ac-4d3e-b67f-51e928edcd73\") " Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.279448 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46qms\" (UniqueName: \"kubernetes.io/projected/43435cee-d9a2-4517-94c4-0e49cdd536a2-kube-api-access-46qms\") pod \"43435cee-d9a2-4517-94c4-0e49cdd536a2\" (UID: \"43435cee-d9a2-4517-94c4-0e49cdd536a2\") " Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.326203 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43435cee-d9a2-4517-94c4-0e49cdd536a2-kube-api-access-46qms" 
(OuterVolumeSpecName: "kube-api-access-46qms") pod "43435cee-d9a2-4517-94c4-0e49cdd536a2" (UID: "43435cee-d9a2-4517-94c4-0e49cdd536a2"). InnerVolumeSpecName "kube-api-access-46qms". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.336024 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/113be622-18ac-4d3e-b67f-51e928edcd73-kube-api-access-f7wn2" (OuterVolumeSpecName: "kube-api-access-f7wn2") pod "113be622-18ac-4d3e-b67f-51e928edcd73" (UID: "113be622-18ac-4d3e-b67f-51e928edcd73"). InnerVolumeSpecName "kube-api-access-f7wn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.336652 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "113be622-18ac-4d3e-b67f-51e928edcd73" (UID: "113be622-18ac-4d3e-b67f-51e928edcd73"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.338739 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=23.338718984 podStartE2EDuration="23.338718984s" podCreationTimestamp="2025-09-29 09:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:58:31.336404454 +0000 UTC m=+1247.192332482" watchObservedRunningTime="2025-09-29 09:58:31.338718984 +0000 UTC m=+1247.194647012" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.373105 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc29d8c0-2bb0-40fe-b3f3-98b30719dfed-kube-api-access-l8m6m" (OuterVolumeSpecName: "kube-api-access-l8m6m") pod "fc29d8c0-2bb0-40fe-b3f3-98b30719dfed" (UID: "fc29d8c0-2bb0-40fe-b3f3-98b30719dfed"). InnerVolumeSpecName "kube-api-access-l8m6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.380250 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-config" (OuterVolumeSpecName: "config") pod "113be622-18ac-4d3e-b67f-51e928edcd73" (UID: "113be622-18ac-4d3e-b67f-51e928edcd73"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.381719 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8m6m\" (UniqueName: \"kubernetes.io/projected/fc29d8c0-2bb0-40fe-b3f3-98b30719dfed-kube-api-access-l8m6m\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.381744 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7wn2\" (UniqueName: \"kubernetes.io/projected/113be622-18ac-4d3e-b67f-51e928edcd73-kube-api-access-f7wn2\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.381754 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46qms\" (UniqueName: \"kubernetes.io/projected/43435cee-d9a2-4517-94c4-0e49cdd536a2-kube-api-access-46qms\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.381763 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.381772 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.395874 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "113be622-18ac-4d3e-b67f-51e928edcd73" (UID: "113be622-18ac-4d3e-b67f-51e928edcd73"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.472514 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "113be622-18ac-4d3e-b67f-51e928edcd73" (UID: "113be622-18ac-4d3e-b67f-51e928edcd73"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.497654 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.497690 4991 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/113be622-18ac-4d3e-b67f-51e928edcd73-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.546052 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=48.862901759 podStartE2EDuration="59.546035351s" podCreationTimestamp="2025-09-29 09:57:32 +0000 UTC" firstStartedPulling="2025-09-29 09:58:06.65832335 +0000 UTC m=+1222.514251378" lastFinishedPulling="2025-09-29 09:58:17.341456942 +0000 UTC m=+1233.197384970" observedRunningTime="2025-09-29 09:58:31.543587647 +0000 UTC m=+1247.399515675" watchObservedRunningTime="2025-09-29 09:58:31.546035351 +0000 UTC m=+1247.401963379" Sep 29 09:58:31 crc kubenswrapper[4991]: I0929 09:58:31.999478 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zmljr-config-wbq45"] Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.018403 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zmljr-config-wbq45"] Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.040142 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-fqngv"] Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.072924 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-9ndtq"] Sep 29 09:58:32 crc kubenswrapper[4991]: E0929 09:58:32.073383 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43435cee-d9a2-4517-94c4-0e49cdd536a2" containerName="mariadb-account-create" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.073396 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="43435cee-d9a2-4517-94c4-0e49cdd536a2" containerName="mariadb-account-create" Sep 29 09:58:32 crc kubenswrapper[4991]: E0929 09:58:32.073412 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e8aa9ff-b96a-499b-b3d6-b744488777a8" containerName="ovn-config" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.073418 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e8aa9ff-b96a-499b-b3d6-b744488777a8" containerName="ovn-config" Sep 29 09:58:32 crc kubenswrapper[4991]: E0929 09:58:32.073443 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc29d8c0-2bb0-40fe-b3f3-98b30719dfed" containerName="mariadb-account-create" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.073449 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc29d8c0-2bb0-40fe-b3f3-98b30719dfed" containerName="mariadb-account-create" Sep 29 09:58:32 crc kubenswrapper[4991]: E0929 09:58:32.073464 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="388a587e-7633-4b2d-b4a4-4a083e06afbf" containerName="mariadb-account-create" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.073470 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="388a587e-7633-4b2d-b4a4-4a083e06afbf" containerName="mariadb-account-create" Sep 29 09:58:32 crc kubenswrapper[4991]: E0929 09:58:32.073481 4991 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a782a05-3455-4ac2-973c-34e78a249789" containerName="mariadb-account-create" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.073487 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a782a05-3455-4ac2-973c-34e78a249789" containerName="mariadb-account-create" Sep 29 09:58:32 crc kubenswrapper[4991]: E0929 09:58:32.073515 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6" containerName="mariadb-account-create" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.073522 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6" containerName="mariadb-account-create" Sep 29 09:58:32 crc kubenswrapper[4991]: E0929 09:58:32.073533 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="113be622-18ac-4d3e-b67f-51e928edcd73" containerName="init" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.073538 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="113be622-18ac-4d3e-b67f-51e928edcd73" containerName="init" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.073737 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc29d8c0-2bb0-40fe-b3f3-98b30719dfed" containerName="mariadb-account-create" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.073750 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6" containerName="mariadb-account-create" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.073769 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e8aa9ff-b96a-499b-b3d6-b744488777a8" containerName="ovn-config" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.073801 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a782a05-3455-4ac2-973c-34e78a249789" containerName="mariadb-account-create" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.073816 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="43435cee-d9a2-4517-94c4-0e49cdd536a2" containerName="mariadb-account-create" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.073831 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="113be622-18ac-4d3e-b67f-51e928edcd73" containerName="init" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.073844 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="388a587e-7633-4b2d-b4a4-4a083e06afbf" containerName="mariadb-account-create" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.074997 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.077814 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.094319 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-9ndtq"] Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.215544 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq79l\" (UniqueName: \"kubernetes.io/projected/73447648-89a4-40e2-8645-279058b94921-kube-api-access-sq79l\") pod \"dnsmasq-dns-fcfdd6f9f-9ndtq\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.215656 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-9ndtq\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.215722 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-9ndtq\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.215801 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-config\") pod \"dnsmasq-dns-fcfdd6f9f-9ndtq\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.215832 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-9ndtq\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.215898 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-9ndtq\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.260981 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-bf69-account-create-nrt9x" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.260974 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-bf69-account-create-nrt9x" event={"ID":"fc29d8c0-2bb0-40fe-b3f3-98b30719dfed","Type":"ContainerDied","Data":"d7d253653d4ba900bb42e7e2819f9a6ff6df722f45dabedcfc46486381e5a3cd"} Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.261578 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7d253653d4ba900bb42e7e2819f9a6ff6df722f45dabedcfc46486381e5a3cd" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.271439 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" event={"ID":"113be622-18ac-4d3e-b67f-51e928edcd73","Type":"ContainerDied","Data":"f26d689324e4309ba6e9fc05701e76f55f3a2a63bfc1c9f442d6486078a0cd12"} Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.271873 4991 scope.go:117] "RemoveContainer" containerID="d064e0e3395bf94ae85af49842b53cbc84ebde4c1ca3c18ff036d9f11875c461" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.272021 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-w6zvr" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.286792 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" event={"ID":"a6d53e88-b87d-4a50-820d-373e638d62b1","Type":"ContainerStarted","Data":"eeff5374bafd9784ea953d19209b307d7563027468fdd5403b8b6805b37c45fa"} Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.287479 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.296482 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1446-account-create-25hvn" event={"ID":"43435cee-d9a2-4517-94c4-0e49cdd536a2","Type":"ContainerDied","Data":"4779ef1d23605a481d7b5544440c8cb9dadf01bb67b51248396c6f5e0acbfff3"} Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.296614 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1446-account-create-25hvn" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.296619 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4779ef1d23605a481d7b5544440c8cb9dadf01bb67b51248396c6f5e0acbfff3" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.319273 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq79l\" (UniqueName: \"kubernetes.io/projected/73447648-89a4-40e2-8645-279058b94921-kube-api-access-sq79l\") pod \"dnsmasq-dns-fcfdd6f9f-9ndtq\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.319351 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-9ndtq\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.319373 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-9ndtq\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.319429 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-config\") pod \"dnsmasq-dns-fcfdd6f9f-9ndtq\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.319447 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-9ndtq\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.319495 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-9ndtq\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.320535 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-9ndtq\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.324324 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-config\") pod \"dnsmasq-dns-fcfdd6f9f-9ndtq\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.324898 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-9ndtq\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.325426 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-9ndtq\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.328228 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-9ndtq\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.361567 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq79l\" (UniqueName: \"kubernetes.io/projected/73447648-89a4-40e2-8645-279058b94921-kube-api-access-sq79l\") pod \"dnsmasq-dns-fcfdd6f9f-9ndtq\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.368591 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-w6zvr"] Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.381743 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-w6zvr"] Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.390856 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" podStartSLOduration=4.390835596 podStartE2EDuration="4.390835596s" podCreationTimestamp="2025-09-29 09:58:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:58:32.350595187 +0000 UTC m=+1248.206523215" watchObservedRunningTime="2025-09-29 09:58:32.390835596 +0000 UTC m=+1248.246763634" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.407450 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.965677 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e8aa9ff-b96a-499b-b3d6-b744488777a8" path="/var/lib/kubelet/pods/0e8aa9ff-b96a-499b-b3d6-b744488777a8/volumes" Sep 29 09:58:32 crc kubenswrapper[4991]: I0929 09:58:32.966702 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="113be622-18ac-4d3e-b67f-51e928edcd73" path="/var/lib/kubelet/pods/113be622-18ac-4d3e-b67f-51e928edcd73/volumes" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.011498 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-9ndtq"] Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.241332 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-d5lhk"] Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.242835 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.244627 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.245114 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-xh5td" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.245316 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.252156 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-d5lhk"] Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.309479 4991 generic.go:334] "Generic (PLEG): container finished" podID="73447648-89a4-40e2-8645-279058b94921" containerID="1fb17ca1e2480099597a9a8228cfe40424b6d0d09b1a85165c8c8338457ace69" exitCode=0 Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.309540 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" event={"ID":"73447648-89a4-40e2-8645-279058b94921","Type":"ContainerDied","Data":"1fb17ca1e2480099597a9a8228cfe40424b6d0d09b1a85165c8c8338457ace69"} Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.309567 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" event={"ID":"73447648-89a4-40e2-8645-279058b94921","Type":"ContainerStarted","Data":"58542e18307be88f909cfc7a3eff7870d7500a62821c5e3c0117060169458afc"} Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.316149 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" podUID="a6d53e88-b87d-4a50-820d-373e638d62b1" containerName="dnsmasq-dns" containerID="cri-o://eeff5374bafd9784ea953d19209b307d7563027468fdd5403b8b6805b37c45fa" gracePeriod=10 Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.356089 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-config-data\") pod \"cinder-db-sync-d5lhk\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.356240 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-scripts\") pod \"cinder-db-sync-d5lhk\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.356271 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-db-sync-config-data\") pod \"cinder-db-sync-d5lhk\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.356376 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-combined-ca-bundle\") pod \"cinder-db-sync-d5lhk\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 
09:58:33.356419 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8mbc\" (UniqueName: \"kubernetes.io/projected/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-kube-api-access-c8mbc\") pod \"cinder-db-sync-d5lhk\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.356478 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-etc-machine-id\") pod \"cinder-db-sync-d5lhk\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.446233 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-6ddmh"] Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.448048 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6ddmh" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.457382 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-zhvvg"] Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.460620 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-zhvvg" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.462331 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-combined-ca-bundle\") pod \"cinder-db-sync-d5lhk\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.462367 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8mbc\" (UniqueName: \"kubernetes.io/projected/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-kube-api-access-c8mbc\") pod \"cinder-db-sync-d5lhk\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.462391 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-etc-machine-id\") pod \"cinder-db-sync-d5lhk\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.462428 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-config-data\") pod \"cinder-db-sync-d5lhk\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.462559 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-scripts\") pod \"cinder-db-sync-d5lhk\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.462580 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-db-sync-config-data\") pod 
\"cinder-db-sync-d5lhk\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.464230 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-etc-machine-id\") pod \"cinder-db-sync-d5lhk\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.464434 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.464727 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.464915 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-ws9np" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.465220 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-dzzc5" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.471990 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-combined-ca-bundle\") pod \"cinder-db-sync-d5lhk\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.473152 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-config-data\") pod \"cinder-db-sync-d5lhk\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.475842 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-scripts\") pod \"cinder-db-sync-d5lhk\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.476292 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-db-sync-config-data\") pod \"cinder-db-sync-d5lhk\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.493009 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8mbc\" (UniqueName: \"kubernetes.io/projected/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-kube-api-access-c8mbc\") pod \"cinder-db-sync-d5lhk\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.493293 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6ddmh"] Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.522204 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-zhvvg"] Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.563986 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/2ed2429f-e06f-4c9f-9c92-84203d8073c1-db-sync-config-data\") pod \"barbican-db-sync-6ddmh\" (UID: \"2ed2429f-e06f-4c9f-9c92-84203d8073c1\") " pod="openstack/barbican-db-sync-6ddmh" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.564038 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed2429f-e06f-4c9f-9c92-84203d8073c1-combined-ca-bundle\") pod \"barbican-db-sync-6ddmh\" (UID: \"2ed2429f-e06f-4c9f-9c92-84203d8073c1\") " pod="openstack/barbican-db-sync-6ddmh" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.564074 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldrl5\" (UniqueName: \"kubernetes.io/projected/2ed2429f-e06f-4c9f-9c92-84203d8073c1-kube-api-access-ldrl5\") pod \"barbican-db-sync-6ddmh\" (UID: \"2ed2429f-e06f-4c9f-9c92-84203d8073c1\") " pod="openstack/barbican-db-sync-6ddmh" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.564130 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/429e29da-7361-452f-8a5e-f42633c6d4b9-config-data\") pod \"heat-db-sync-zhvvg\" (UID: \"429e29da-7361-452f-8a5e-f42633c6d4b9\") " pod="openstack/heat-db-sync-zhvvg" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.564190 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429e29da-7361-452f-8a5e-f42633c6d4b9-combined-ca-bundle\") pod \"heat-db-sync-zhvvg\" (UID: \"429e29da-7361-452f-8a5e-f42633c6d4b9\") " pod="openstack/heat-db-sync-zhvvg" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.564252 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dxdr\" (UniqueName: \"kubernetes.io/projected/429e29da-7361-452f-8a5e-f42633c6d4b9-kube-api-access-8dxdr\") pod \"heat-db-sync-zhvvg\" (UID: \"429e29da-7361-452f-8a5e-f42633c6d4b9\") " pod="openstack/heat-db-sync-zhvvg" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.567580 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.666413 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ed2429f-e06f-4c9f-9c92-84203d8073c1-db-sync-config-data\") pod \"barbican-db-sync-6ddmh\" (UID: \"2ed2429f-e06f-4c9f-9c92-84203d8073c1\") " pod="openstack/barbican-db-sync-6ddmh" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.666810 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed2429f-e06f-4c9f-9c92-84203d8073c1-combined-ca-bundle\") pod \"barbican-db-sync-6ddmh\" (UID: \"2ed2429f-e06f-4c9f-9c92-84203d8073c1\") " pod="openstack/barbican-db-sync-6ddmh" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.666855 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldrl5\" (UniqueName: \"kubernetes.io/projected/2ed2429f-e06f-4c9f-9c92-84203d8073c1-kube-api-access-ldrl5\") pod \"barbican-db-sync-6ddmh\" (UID: \"2ed2429f-e06f-4c9f-9c92-84203d8073c1\") " pod="openstack/barbican-db-sync-6ddmh" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.666943 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/429e29da-7361-452f-8a5e-f42633c6d4b9-config-data\") pod \"heat-db-sync-zhvvg\" (UID: \"429e29da-7361-452f-8a5e-f42633c6d4b9\") " pod="openstack/heat-db-sync-zhvvg" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.667035 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429e29da-7361-452f-8a5e-f42633c6d4b9-combined-ca-bundle\") pod \"heat-db-sync-zhvvg\" (UID: \"429e29da-7361-452f-8a5e-f42633c6d4b9\") " pod="openstack/heat-db-sync-zhvvg" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.667122 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dxdr\" (UniqueName: \"kubernetes.io/projected/429e29da-7361-452f-8a5e-f42633c6d4b9-kube-api-access-8dxdr\") pod \"heat-db-sync-zhvvg\" (UID: \"429e29da-7361-452f-8a5e-f42633c6d4b9\") " pod="openstack/heat-db-sync-zhvvg" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.682157 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed2429f-e06f-4c9f-9c92-84203d8073c1-combined-ca-bundle\") pod \"barbican-db-sync-6ddmh\" (UID: \"2ed2429f-e06f-4c9f-9c92-84203d8073c1\") " pod="openstack/barbican-db-sync-6ddmh" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.682209 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429e29da-7361-452f-8a5e-f42633c6d4b9-combined-ca-bundle\") pod \"heat-db-sync-zhvvg\" (UID: \"429e29da-7361-452f-8a5e-f42633c6d4b9\") " pod="openstack/heat-db-sync-zhvvg" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.684528 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/429e29da-7361-452f-8a5e-f42633c6d4b9-config-data\") pod \"heat-db-sync-zhvvg\" (UID: \"429e29da-7361-452f-8a5e-f42633c6d4b9\") " pod="openstack/heat-db-sync-zhvvg" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.684938 4991 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ed2429f-e06f-4c9f-9c92-84203d8073c1-db-sync-config-data\") pod \"barbican-db-sync-6ddmh\" (UID: \"2ed2429f-e06f-4c9f-9c92-84203d8073c1\") " pod="openstack/barbican-db-sync-6ddmh" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.685340 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dxdr\" (UniqueName: \"kubernetes.io/projected/429e29da-7361-452f-8a5e-f42633c6d4b9-kube-api-access-8dxdr\") pod \"heat-db-sync-zhvvg\" (UID: \"429e29da-7361-452f-8a5e-f42633c6d4b9\") " pod="openstack/heat-db-sync-zhvvg" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.687056 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldrl5\" (UniqueName: \"kubernetes.io/projected/2ed2429f-e06f-4c9f-9c92-84203d8073c1-kube-api-access-ldrl5\") pod \"barbican-db-sync-6ddmh\" (UID: \"2ed2429f-e06f-4c9f-9c92-84203d8073c1\") " pod="openstack/barbican-db-sync-6ddmh" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.760892 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-6tq6l"] Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.764968 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6tq6l" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.768301 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.768371 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.768640 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-2xv2l" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.787428 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6tq6l"] Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.844589 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6ddmh" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.871484 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d862j\" (UniqueName: \"kubernetes.io/projected/ec575cb9-949d-4880-a804-dfc7cb8c7eb9-kube-api-access-d862j\") pod \"neutron-db-sync-6tq6l\" (UID: \"ec575cb9-949d-4880-a804-dfc7cb8c7eb9\") " pod="openstack/neutron-db-sync-6tq6l" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.871594 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec575cb9-949d-4880-a804-dfc7cb8c7eb9-combined-ca-bundle\") pod \"neutron-db-sync-6tq6l\" (UID: \"ec575cb9-949d-4880-a804-dfc7cb8c7eb9\") " pod="openstack/neutron-db-sync-6tq6l" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.871638 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec575cb9-949d-4880-a804-dfc7cb8c7eb9-config\") pod \"neutron-db-sync-6tq6l\" (UID: \"ec575cb9-949d-4880-a804-dfc7cb8c7eb9\") " pod="openstack/neutron-db-sync-6tq6l" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.878279 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-zhvvg" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.882116 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.883611 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.891774 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.895856 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.918430 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.976216 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02856571-708d-4e22-a2cb-9b411d5a12c0-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"02856571-708d-4e22-a2cb-9b411d5a12c0\") " pod="openstack/mysqld-exporter-0" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.976775 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec575cb9-949d-4880-a804-dfc7cb8c7eb9-combined-ca-bundle\") pod \"neutron-db-sync-6tq6l\" (UID: \"ec575cb9-949d-4880-a804-dfc7cb8c7eb9\") " pod="openstack/neutron-db-sync-6tq6l" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.976967 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec575cb9-949d-4880-a804-dfc7cb8c7eb9-config\") pod \"neutron-db-sync-6tq6l\" (UID: \"ec575cb9-949d-4880-a804-dfc7cb8c7eb9\") " pod="openstack/neutron-db-sync-6tq6l" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.977067 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02856571-708d-4e22-a2cb-9b411d5a12c0-config-data\") pod \"mysqld-exporter-0\" (UID: \"02856571-708d-4e22-a2cb-9b411d5a12c0\") " pod="openstack/mysqld-exporter-0" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.977334 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmssl\" (UniqueName: \"kubernetes.io/projected/02856571-708d-4e22-a2cb-9b411d5a12c0-kube-api-access-hmssl\") pod \"mysqld-exporter-0\" (UID: \"02856571-708d-4e22-a2cb-9b411d5a12c0\") " pod="openstack/mysqld-exporter-0" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.977538 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d862j\" (UniqueName: \"kubernetes.io/projected/ec575cb9-949d-4880-a804-dfc7cb8c7eb9-kube-api-access-d862j\") pod \"neutron-db-sync-6tq6l\" (UID: \"ec575cb9-949d-4880-a804-dfc7cb8c7eb9\") " pod="openstack/neutron-db-sync-6tq6l" Sep 29 09:58:33 crc kubenswrapper[4991]: I0929 09:58:33.988596 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec575cb9-949d-4880-a804-dfc7cb8c7eb9-config\") pod \"neutron-db-sync-6tq6l\" (UID: \"ec575cb9-949d-4880-a804-dfc7cb8c7eb9\") " pod="openstack/neutron-db-sync-6tq6l" Sep 29 09:58:33 crc 
kubenswrapper[4991]: I0929 09:58:33.990822 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec575cb9-949d-4880-a804-dfc7cb8c7eb9-combined-ca-bundle\") pod \"neutron-db-sync-6tq6l\" (UID: \"ec575cb9-949d-4880-a804-dfc7cb8c7eb9\") " pod="openstack/neutron-db-sync-6tq6l" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.015120 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d862j\" (UniqueName: \"kubernetes.io/projected/ec575cb9-949d-4880-a804-dfc7cb8c7eb9-kube-api-access-d862j\") pod \"neutron-db-sync-6tq6l\" (UID: \"ec575cb9-949d-4880-a804-dfc7cb8c7eb9\") " pod="openstack/neutron-db-sync-6tq6l" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.081116 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02856571-708d-4e22-a2cb-9b411d5a12c0-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"02856571-708d-4e22-a2cb-9b411d5a12c0\") " pod="openstack/mysqld-exporter-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.081262 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02856571-708d-4e22-a2cb-9b411d5a12c0-config-data\") pod \"mysqld-exporter-0\" (UID: \"02856571-708d-4e22-a2cb-9b411d5a12c0\") " pod="openstack/mysqld-exporter-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.081297 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmssl\" (UniqueName: \"kubernetes.io/projected/02856571-708d-4e22-a2cb-9b411d5a12c0-kube-api-access-hmssl\") pod \"mysqld-exporter-0\" (UID: \"02856571-708d-4e22-a2cb-9b411d5a12c0\") " pod="openstack/mysqld-exporter-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.088454 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02856571-708d-4e22-a2cb-9b411d5a12c0-config-data\") pod \"mysqld-exporter-0\" (UID: \"02856571-708d-4e22-a2cb-9b411d5a12c0\") " pod="openstack/mysqld-exporter-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.102567 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02856571-708d-4e22-a2cb-9b411d5a12c0-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"02856571-708d-4e22-a2cb-9b411d5a12c0\") " pod="openstack/mysqld-exporter-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.107258 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmssl\" (UniqueName: \"kubernetes.io/projected/02856571-708d-4e22-a2cb-9b411d5a12c0-kube-api-access-hmssl\") pod \"mysqld-exporter-0\" (UID: \"02856571-708d-4e22-a2cb-9b411d5a12c0\") " pod="openstack/mysqld-exporter-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.247241 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6tq6l" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.248312 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.306121 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-d5lhk"] Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.399943 4991 generic.go:334] "Generic (PLEG): container finished" podID="a6d53e88-b87d-4a50-820d-373e638d62b1" containerID="eeff5374bafd9784ea953d19209b307d7563027468fdd5403b8b6805b37c45fa" exitCode=0 Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.400017 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" event={"ID":"a6d53e88-b87d-4a50-820d-373e638d62b1","Type":"ContainerDied","Data":"eeff5374bafd9784ea953d19209b307d7563027468fdd5403b8b6805b37c45fa"} Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.403250 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" event={"ID":"73447648-89a4-40e2-8645-279058b94921","Type":"ContainerStarted","Data":"e052251e6dcfdf766f57a89ab122fb953d8cada922330e1913712c805b9cd1ad"} Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.403562 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:34 crc kubenswrapper[4991]: W0929 09:58:34.435398 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5146a80a_6cba_46a8_85b8_1fb0dd8304cd.slice/crio-4f0e78382f003b5f4997c68db841205dfb83a5ee1ddd8b589af4a37c1ecff2d7 WatchSource:0}: Error finding container 4f0e78382f003b5f4997c68db841205dfb83a5ee1ddd8b589af4a37c1ecff2d7: Status 404 returned error can't find the container with id 4f0e78382f003b5f4997c68db841205dfb83a5ee1ddd8b589af4a37c1ecff2d7 Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.464632 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.466866 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.467129 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.472657 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.475879 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.490917 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" podStartSLOduration=2.490890748 podStartE2EDuration="2.490890748s" podCreationTimestamp="2025-09-29 09:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:58:34.459737798 +0000 UTC m=+1250.315665826" watchObservedRunningTime="2025-09-29 09:58:34.490890748 +0000 UTC m=+1250.346818776" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.492723 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkwq5\" (UniqueName: \"kubernetes.io/projected/a6d53e88-b87d-4a50-820d-373e638d62b1-kube-api-access-rkwq5\") pod \"a6d53e88-b87d-4a50-820d-373e638d62b1\" (UID: \"a6d53e88-b87d-4a50-820d-373e638d62b1\") " Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.492879 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-config\") pod \"a6d53e88-b87d-4a50-820d-373e638d62b1\" (UID: \"a6d53e88-b87d-4a50-820d-373e638d62b1\") " Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.493039 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-ovsdbserver-sb\") pod \"a6d53e88-b87d-4a50-820d-373e638d62b1\" (UID: \"a6d53e88-b87d-4a50-820d-373e638d62b1\") " Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.493084 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-ovsdbserver-nb\") pod \"a6d53e88-b87d-4a50-820d-373e638d62b1\" (UID: \"a6d53e88-b87d-4a50-820d-373e638d62b1\") " Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.493128 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-dns-svc\") pod \"a6d53e88-b87d-4a50-820d-373e638d62b1\" (UID: \"a6d53e88-b87d-4a50-820d-373e638d62b1\") " Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.493579 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.493612 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/386f2f3b-cfa3-4376-a958-8905da139c79-run-httpd\") pod \"ceilometer-0\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.493667 4991 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h9k2\" (UniqueName: \"kubernetes.io/projected/386f2f3b-cfa3-4376-a958-8905da139c79-kube-api-access-8h9k2\") pod \"ceilometer-0\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.493690 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-config-data\") pod \"ceilometer-0\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.493734 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.493836 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-scripts\") pod \"ceilometer-0\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.493967 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/386f2f3b-cfa3-4376-a958-8905da139c79-log-httpd\") pod \"ceilometer-0\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.529465 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d53e88-b87d-4a50-820d-373e638d62b1-kube-api-access-rkwq5" (OuterVolumeSpecName: "kube-api-access-rkwq5") pod "a6d53e88-b87d-4a50-820d-373e638d62b1" (UID: "a6d53e88-b87d-4a50-820d-373e638d62b1"). InnerVolumeSpecName "kube-api-access-rkwq5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.573296 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.602025 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.602067 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/386f2f3b-cfa3-4376-a958-8905da139c79-run-httpd\") pod \"ceilometer-0\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.602094 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h9k2\" (UniqueName: \"kubernetes.io/projected/386f2f3b-cfa3-4376-a958-8905da139c79-kube-api-access-8h9k2\") pod \"ceilometer-0\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.602110 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-config-data\") pod \"ceilometer-0\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.602139 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.602185 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-scripts\") pod \"ceilometer-0\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.602252 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/386f2f3b-cfa3-4376-a958-8905da139c79-log-httpd\") pod \"ceilometer-0\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.602325 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkwq5\" (UniqueName: \"kubernetes.io/projected/a6d53e88-b87d-4a50-820d-373e638d62b1-kube-api-access-rkwq5\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.604581 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/386f2f3b-cfa3-4376-a958-8905da139c79-log-httpd\") pod \"ceilometer-0\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.613040 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-config-data\") pod 
\"ceilometer-0\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.613283 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/386f2f3b-cfa3-4376-a958-8905da139c79-run-httpd\") pod \"ceilometer-0\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.618702 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.630905 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.637147 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h9k2\" (UniqueName: \"kubernetes.io/projected/386f2f3b-cfa3-4376-a958-8905da139c79-kube-api-access-8h9k2\") pod \"ceilometer-0\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.640014 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-scripts\") pod \"ceilometer-0\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.665020 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a6d53e88-b87d-4a50-820d-373e638d62b1" (UID: "a6d53e88-b87d-4a50-820d-373e638d62b1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.684237 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a6d53e88-b87d-4a50-820d-373e638d62b1" (UID: "a6d53e88-b87d-4a50-820d-373e638d62b1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.707785 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.707836 4991 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.761193 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-zhvvg"] Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.768030 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a6d53e88-b87d-4a50-820d-373e638d62b1" (UID: "a6d53e88-b87d-4a50-820d-373e638d62b1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.770453 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.801320 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-config" (OuterVolumeSpecName: "config") pod "a6d53e88-b87d-4a50-820d-373e638d62b1" (UID: "a6d53e88-b87d-4a50-820d-373e638d62b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.814869 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.814973 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6d53e88-b87d-4a50-820d-373e638d62b1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:34 crc kubenswrapper[4991]: I0929 09:58:34.942204 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6ddmh"] Sep 29 09:58:35 crc kubenswrapper[4991]: I0929 09:58:35.251554 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6tq6l"] Sep 29 09:58:35 crc kubenswrapper[4991]: I0929 09:58:35.266766 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Sep 29 09:58:35 crc kubenswrapper[4991]: I0929 09:58:35.417531 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d5lhk" event={"ID":"5146a80a-6cba-46a8-85b8-1fb0dd8304cd","Type":"ContainerStarted","Data":"4f0e78382f003b5f4997c68db841205dfb83a5ee1ddd8b589af4a37c1ecff2d7"} Sep 29 09:58:35 crc kubenswrapper[4991]: I0929 09:58:35.420126 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-fqngv" event={"ID":"a6d53e88-b87d-4a50-820d-373e638d62b1","Type":"ContainerDied","Data":"44e354f037c947c2f559be693282c48d6439bed8553581c3c29eb66feb6be5a3"} Sep 29 09:58:35 crc kubenswrapper[4991]: I0929 09:58:35.420170 4991 scope.go:117] "RemoveContainer" containerID="eeff5374bafd9784ea953d19209b307d7563027468fdd5403b8b6805b37c45fa" Sep 29 
Sep 29 09:58:35 crc kubenswrapper[4991]: I0929 09:58:35.424182 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zhvvg" event={"ID":"429e29da-7361-452f-8a5e-f42633c6d4b9","Type":"ContainerStarted","Data":"8ffc85d36ff3893b4521efce289317c7a15613e6cc83a2a4443025b8cdcb63f0"}
Sep 29 09:58:35 crc kubenswrapper[4991]: I0929 09:58:35.465994 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-fqngv"]
Sep 29 09:58:35 crc kubenswrapper[4991]: I0929 09:58:35.477549 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-fqngv"]
Sep 29 09:58:35 crc kubenswrapper[4991]: I0929 09:58:35.514782 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Sep 29 09:58:36 crc kubenswrapper[4991]: I0929 09:58:36.442416 4991 generic.go:334] "Generic (PLEG): container finished" podID="dbfd13d2-73e8-4214-8fe8-7642bd42e1da" containerID="c7440cb8422898707b454e9abd438855ac4ccc44df905479e8783b5603737899" exitCode=0
Sep 29 09:58:36 crc kubenswrapper[4991]: I0929 09:58:36.442685 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r95jv" event={"ID":"dbfd13d2-73e8-4214-8fe8-7642bd42e1da","Type":"ContainerDied","Data":"c7440cb8422898707b454e9abd438855ac4ccc44df905479e8783b5603737899"}
Sep 29 09:58:36 crc kubenswrapper[4991]: I0929 09:58:36.941990 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6d53e88-b87d-4a50-820d-373e638d62b1" path="/var/lib/kubelet/pods/a6d53e88-b87d-4a50-820d-373e638d62b1/volumes"
Sep 29 09:58:37 crc kubenswrapper[4991]: I0929 09:58:37.386106 4991 scope.go:117] "RemoveContainer" containerID="fe65c834f8d095d1e896bca50a7ea0d186fe92c3f465ea5a46254207b34025fd"
Sep 29 09:58:37 crc kubenswrapper[4991]: I0929 09:58:37.463413 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"02856571-708d-4e22-a2cb-9b411d5a12c0","Type":"ContainerStarted","Data":"6b1fe8c0ced435082ca5a1b2b877c9966acd61675c14c250566daee0b8b8e8c5"}
Sep 29 09:58:37 crc kubenswrapper[4991]: I0929 09:58:37.474413 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"386f2f3b-cfa3-4376-a958-8905da139c79","Type":"ContainerStarted","Data":"417e7a67e311554ad17777199d830735372aec7a3473514bcd94c77fc389dfb9"}
Sep 29 09:58:37 crc kubenswrapper[4991]: I0929 09:58:37.476974 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6ddmh" event={"ID":"2ed2429f-e06f-4c9f-9c92-84203d8073c1","Type":"ContainerStarted","Data":"391b6d6d3d5c0ff2bbcb64b7eb23e72acd134638a620ce76ff92ee27cbf3f8b7"}
Sep 29 09:58:37 crc kubenswrapper[4991]: I0929 09:58:37.478583 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6tq6l" event={"ID":"ec575cb9-949d-4880-a804-dfc7cb8c7eb9","Type":"ContainerStarted","Data":"f71062c333bc9bf6aed5038162992083e76cafc315d08c412e55702751e77f05"}
Sep 29 09:58:37 crc kubenswrapper[4991]: I0929 09:58:37.947025 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 29 09:58:37 crc kubenswrapper[4991]: I0929 09:58:37.947547 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 29 09:58:37 crc kubenswrapper[4991]: I0929 09:58:37.947589 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk"
Sep 29 09:58:37 crc kubenswrapper[4991]: I0929 09:58:37.948440 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd958863ece520e3b95f59e822742012518d2caf2d8e6d1053c23d5cf887fc5a"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Sep 29 09:58:37 crc kubenswrapper[4991]: I0929 09:58:37.948495 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://dd958863ece520e3b95f59e822742012518d2caf2d8e6d1053c23d5cf887fc5a" gracePeriod=600
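The five entries above are the complete liveness-failure path for machine-config-daemon-sgbqk: patch_prober and prober.go record the refused HTTP GET, SyncLoop (probe) flags the container unhealthy, kuberuntime_manager notes it will be restarted, and kuberuntime_container kills it with the pod's termination grace period. For orientation only, here is a hypothetical probe spec consistent with those lines, using the k8s.io/api Go types; the host, port, and path come straight from the journal, the thresholds are assumptions, and the real machine-config-daemon manifest is not shown here:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1",          // from the probe output above
				Port: intstr.FromInt(8798), // from the probe output above
				Path: "/health",            // from the probe output above
			},
		},
		PeriodSeconds:    10, // assumption; not visible in this journal
		FailureThreshold: 3,  // assumption; not visible in this journal
	}
	out, _ := json.MarshalIndent(probe, "", "  ")
	fmt.Println(string(out))
	// On repeated failure the kubelet kills the container using the pod's
	// grace period (gracePeriod=600 in the entries above) and restarts it
	// according to the pod's restartPolicy.
}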
\"kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-config-data\") pod \"dbfd13d2-73e8-4214-8fe8-7642bd42e1da\" (UID: \"dbfd13d2-73e8-4214-8fe8-7642bd42e1da\") " Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.132057 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "dbfd13d2-73e8-4214-8fe8-7642bd42e1da" (UID: "dbfd13d2-73e8-4214-8fe8-7642bd42e1da"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.135387 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-kube-api-access-4dvvg" (OuterVolumeSpecName: "kube-api-access-4dvvg") pod "dbfd13d2-73e8-4214-8fe8-7642bd42e1da" (UID: "dbfd13d2-73e8-4214-8fe8-7642bd42e1da"). InnerVolumeSpecName "kube-api-access-4dvvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.135624 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-scripts" (OuterVolumeSpecName: "scripts") pod "dbfd13d2-73e8-4214-8fe8-7642bd42e1da" (UID: "dbfd13d2-73e8-4214-8fe8-7642bd42e1da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.136251 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "dbfd13d2-73e8-4214-8fe8-7642bd42e1da" (UID: "dbfd13d2-73e8-4214-8fe8-7642bd42e1da"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.159629 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbfd13d2-73e8-4214-8fe8-7642bd42e1da" (UID: "dbfd13d2-73e8-4214-8fe8-7642bd42e1da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.167732 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-config-data" (OuterVolumeSpecName: "config-data") pod "dbfd13d2-73e8-4214-8fe8-7642bd42e1da" (UID: "dbfd13d2-73e8-4214-8fe8-7642bd42e1da"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.234868 4991 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-credential-keys\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.234902 4991 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-fernet-keys\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.234913 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dvvg\" (UniqueName: \"kubernetes.io/projected/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-kube-api-access-4dvvg\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.234923 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.234931 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.234939 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbfd13d2-73e8-4214-8fe8-7642bd42e1da-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.502827 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"dd958863ece520e3b95f59e822742012518d2caf2d8e6d1053c23d5cf887fc5a"} Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.502770 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="dd958863ece520e3b95f59e822742012518d2caf2d8e6d1053c23d5cf887fc5a" exitCode=0 Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.503242 4991 scope.go:117] "RemoveContainer" containerID="e7779fdb3bc2d0c900ebebd790f1060c4e3f133862501be62eb494f0ee1b0541" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.507017 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r95jv" event={"ID":"dbfd13d2-73e8-4214-8fe8-7642bd42e1da","Type":"ContainerDied","Data":"11081c9ad6c6b54a3974b3ea7e917514469be800290d83344e24e4c44a11c215"} Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.507048 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11081c9ad6c6b54a3974b3ea7e917514469be800290d83344e24e4c44a11c215" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.507111 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-r95jv" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.512371 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6tq6l" event={"ID":"ec575cb9-949d-4880-a804-dfc7cb8c7eb9","Type":"ContainerStarted","Data":"19c4a6214d409564d4f5a1352ae9712b631fca460bd8274e4217f7075ad8fdd7"} Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.514727 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8559s" event={"ID":"19dc38ec-0933-47ed-8c1e-613d1d55d3d5","Type":"ContainerStarted","Data":"9287037ed363a7e0eeefa3e352e6a48227d02dc900d8b365288f181207296038"} Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.545870 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-r95jv"] Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.563375 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-r95jv"] Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.580262 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-6tq6l" podStartSLOduration=5.580227068 podStartE2EDuration="5.580227068s" podCreationTimestamp="2025-09-29 09:58:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:58:38.534761701 +0000 UTC m=+1254.390689729" watchObservedRunningTime="2025-09-29 09:58:38.580227068 +0000 UTC m=+1254.436155096" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.594565 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8559s" podStartSLOduration=3.368226831 podStartE2EDuration="10.594541935s" podCreationTimestamp="2025-09-29 09:58:28 +0000 UTC" firstStartedPulling="2025-09-29 09:58:30.256557802 +0000 UTC m=+1246.112485830" lastFinishedPulling="2025-09-29 09:58:37.482872906 +0000 UTC m=+1253.338800934" observedRunningTime="2025-09-29 09:58:38.54913934 +0000 UTC m=+1254.405067368" watchObservedRunningTime="2025-09-29 09:58:38.594541935 +0000 UTC m=+1254.450469963" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.624009 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jx6z5"] Sep 29 09:58:38 crc kubenswrapper[4991]: E0929 09:58:38.624462 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbfd13d2-73e8-4214-8fe8-7642bd42e1da" containerName="keystone-bootstrap" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.624479 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbfd13d2-73e8-4214-8fe8-7642bd42e1da" containerName="keystone-bootstrap" Sep 29 09:58:38 crc kubenswrapper[4991]: E0929 09:58:38.624504 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d53e88-b87d-4a50-820d-373e638d62b1" containerName="dnsmasq-dns" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.624510 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d53e88-b87d-4a50-820d-373e638d62b1" containerName="dnsmasq-dns" Sep 29 09:58:38 crc kubenswrapper[4991]: E0929 09:58:38.624526 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d53e88-b87d-4a50-820d-373e638d62b1" containerName="init" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.624532 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d53e88-b87d-4a50-820d-373e638d62b1" containerName="init" Sep 29 09:58:38 crc kubenswrapper[4991]: 
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.624711 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6d53e88-b87d-4a50-820d-373e638d62b1" containerName="dnsmasq-dns"
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.624732 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbfd13d2-73e8-4214-8fe8-7642bd42e1da" containerName="keystone-bootstrap"
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.625525 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jx6z5"
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.629405 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rf9rz"
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.629582 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.629733 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.629885 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.640767 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jx6z5"]
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.743563 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-credential-keys\") pod \"keystone-bootstrap-jx6z5\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " pod="openstack/keystone-bootstrap-jx6z5"
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.743614 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz4dd\" (UniqueName: \"kubernetes.io/projected/4880dbaa-8b65-4f99-929f-e9613339d1d9-kube-api-access-mz4dd\") pod \"keystone-bootstrap-jx6z5\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " pod="openstack/keystone-bootstrap-jx6z5"
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.743652 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-config-data\") pod \"keystone-bootstrap-jx6z5\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " pod="openstack/keystone-bootstrap-jx6z5"
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.743700 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-fernet-keys\") pod \"keystone-bootstrap-jx6z5\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " pod="openstack/keystone-bootstrap-jx6z5"
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.743759 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-combined-ca-bundle\") pod \"keystone-bootstrap-jx6z5\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " pod="openstack/keystone-bootstrap-jx6z5"
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.743790 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-scripts\") pod \"keystone-bootstrap-jx6z5\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " pod="openstack/keystone-bootstrap-jx6z5"
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.845958 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-combined-ca-bundle\") pod \"keystone-bootstrap-jx6z5\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " pod="openstack/keystone-bootstrap-jx6z5"
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.846017 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-scripts\") pod \"keystone-bootstrap-jx6z5\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " pod="openstack/keystone-bootstrap-jx6z5"
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.846113 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-credential-keys\") pod \"keystone-bootstrap-jx6z5\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " pod="openstack/keystone-bootstrap-jx6z5"
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.846141 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz4dd\" (UniqueName: \"kubernetes.io/projected/4880dbaa-8b65-4f99-929f-e9613339d1d9-kube-api-access-mz4dd\") pod \"keystone-bootstrap-jx6z5\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " pod="openstack/keystone-bootstrap-jx6z5"
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.846163 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-config-data\") pod \"keystone-bootstrap-jx6z5\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " pod="openstack/keystone-bootstrap-jx6z5"
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.846198 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-fernet-keys\") pod \"keystone-bootstrap-jx6z5\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " pod="openstack/keystone-bootstrap-jx6z5"
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.853031 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-combined-ca-bundle\") pod \"keystone-bootstrap-jx6z5\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " pod="openstack/keystone-bootstrap-jx6z5"
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.853184 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-config-data\") pod \"keystone-bootstrap-jx6z5\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " pod="openstack/keystone-bootstrap-jx6z5"
Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.853660 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-fernet-keys\") pod \"keystone-bootstrap-jx6z5\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " pod="openstack/keystone-bootstrap-jx6z5"
pod="openstack/keystone-bootstrap-jx6z5" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.854655 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-credential-keys\") pod \"keystone-bootstrap-jx6z5\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " pod="openstack/keystone-bootstrap-jx6z5" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.864311 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-scripts\") pod \"keystone-bootstrap-jx6z5\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " pod="openstack/keystone-bootstrap-jx6z5" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.867342 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz4dd\" (UniqueName: \"kubernetes.io/projected/4880dbaa-8b65-4f99-929f-e9613339d1d9-kube-api-access-mz4dd\") pod \"keystone-bootstrap-jx6z5\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " pod="openstack/keystone-bootstrap-jx6z5" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.919825 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.948765 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbfd13d2-73e8-4214-8fe8-7642bd42e1da" path="/var/lib/kubelet/pods/dbfd13d2-73e8-4214-8fe8-7642bd42e1da/volumes" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.950026 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Sep 29 09:58:38 crc kubenswrapper[4991]: I0929 09:58:38.954712 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jx6z5" Sep 29 09:58:39 crc kubenswrapper[4991]: I0929 09:58:39.551182 4991 generic.go:334] "Generic (PLEG): container finished" podID="2031f120-9626-495a-b555-e4e960d2e4b1" containerID="b05a1d18b9a32761c08f0e72ca473e560245447c5706e0a4a0aeba5832bcdde2" exitCode=0 Sep 29 09:58:39 crc kubenswrapper[4991]: I0929 09:58:39.551228 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9kck9" event={"ID":"2031f120-9626-495a-b555-e4e960d2e4b1","Type":"ContainerDied","Data":"b05a1d18b9a32761c08f0e72ca473e560245447c5706e0a4a0aeba5832bcdde2"} Sep 29 09:58:39 crc kubenswrapper[4991]: I0929 09:58:39.560538 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.218518 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9kck9" Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.332083 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2031f120-9626-495a-b555-e4e960d2e4b1-db-sync-config-data\") pod \"2031f120-9626-495a-b555-e4e960d2e4b1\" (UID: \"2031f120-9626-495a-b555-e4e960d2e4b1\") " Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.332240 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2031f120-9626-495a-b555-e4e960d2e4b1-combined-ca-bundle\") pod \"2031f120-9626-495a-b555-e4e960d2e4b1\" (UID: \"2031f120-9626-495a-b555-e4e960d2e4b1\") " Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.332355 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2031f120-9626-495a-b555-e4e960d2e4b1-config-data\") pod \"2031f120-9626-495a-b555-e4e960d2e4b1\" (UID: \"2031f120-9626-495a-b555-e4e960d2e4b1\") " Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.332397 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvj4n\" (UniqueName: \"kubernetes.io/projected/2031f120-9626-495a-b555-e4e960d2e4b1-kube-api-access-xvj4n\") pod \"2031f120-9626-495a-b555-e4e960d2e4b1\" (UID: \"2031f120-9626-495a-b555-e4e960d2e4b1\") " Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.354112 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2031f120-9626-495a-b555-e4e960d2e4b1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2031f120-9626-495a-b555-e4e960d2e4b1" (UID: "2031f120-9626-495a-b555-e4e960d2e4b1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.359367 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2031f120-9626-495a-b555-e4e960d2e4b1-kube-api-access-xvj4n" (OuterVolumeSpecName: "kube-api-access-xvj4n") pod "2031f120-9626-495a-b555-e4e960d2e4b1" (UID: "2031f120-9626-495a-b555-e4e960d2e4b1"). InnerVolumeSpecName "kube-api-access-xvj4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.371340 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2031f120-9626-495a-b555-e4e960d2e4b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2031f120-9626-495a-b555-e4e960d2e4b1" (UID: "2031f120-9626-495a-b555-e4e960d2e4b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.390974 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2031f120-9626-495a-b555-e4e960d2e4b1-config-data" (OuterVolumeSpecName: "config-data") pod "2031f120-9626-495a-b555-e4e960d2e4b1" (UID: "2031f120-9626-495a-b555-e4e960d2e4b1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.409981 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.434566 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2031f120-9626-495a-b555-e4e960d2e4b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.434909 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2031f120-9626-495a-b555-e4e960d2e4b1-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.434993 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvj4n\" (UniqueName: \"kubernetes.io/projected/2031f120-9626-495a-b555-e4e960d2e4b1-kube-api-access-xvj4n\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.435063 4991 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2031f120-9626-495a-b555-e4e960d2e4b1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.464364 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-8qbbc"] Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.466310 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" podUID="3537cb6d-a346-4779-820c-262f5ae33b35" containerName="dnsmasq-dns" containerID="cri-o://4032990bc691747e5a559241e27500181846fc8842f1ef005515844990e7c6b0" gracePeriod=10 Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.610965 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9kck9" event={"ID":"2031f120-9626-495a-b555-e4e960d2e4b1","Type":"ContainerDied","Data":"68ea2387d953e0de1a813c47801cf6817a995faaea1f07b9031a91fa1576021c"} Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.611001 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9kck9" Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.611012 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68ea2387d953e0de1a813c47801cf6817a995faaea1f07b9031a91fa1576021c" Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.615559 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"3ad3178b322df7438724bf1bb291e672d09c64bed7c25651a01ec7bd03f3b6c1"} Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.623146 4991 generic.go:334] "Generic (PLEG): container finished" podID="3537cb6d-a346-4779-820c-262f5ae33b35" containerID="4032990bc691747e5a559241e27500181846fc8842f1ef005515844990e7c6b0" exitCode=0 Sep 29 09:58:42 crc kubenswrapper[4991]: I0929 09:58:42.623202 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" event={"ID":"3537cb6d-a346-4779-820c-262f5ae33b35","Type":"ContainerDied","Data":"4032990bc691747e5a559241e27500181846fc8842f1ef005515844990e7c6b0"} Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.610356 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-xq2lz"] Sep 29 09:58:43 crc kubenswrapper[4991]: E0929 09:58:43.611470 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2031f120-9626-495a-b555-e4e960d2e4b1" containerName="glance-db-sync" Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.611491 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2031f120-9626-495a-b555-e4e960d2e4b1" containerName="glance-db-sync" Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.611782 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="2031f120-9626-495a-b555-e4e960d2e4b1" containerName="glance-db-sync" Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.614619 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.623466 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-xq2lz"] Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.668844 4991 generic.go:334] "Generic (PLEG): container finished" podID="19dc38ec-0933-47ed-8c1e-613d1d55d3d5" containerID="9287037ed363a7e0eeefa3e352e6a48227d02dc900d8b365288f181207296038" exitCode=0 Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.669221 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8559s" event={"ID":"19dc38ec-0933-47ed-8c1e-613d1d55d3d5","Type":"ContainerDied","Data":"9287037ed363a7e0eeefa3e352e6a48227d02dc900d8b365288f181207296038"} Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.772558 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-xq2lz\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") " pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.772913 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-config\") pod \"dnsmasq-dns-57c957c4ff-xq2lz\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") " pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.772938 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-xq2lz\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") " pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.773151 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gm6r\" (UniqueName: \"kubernetes.io/projected/e668e3d0-8126-4528-8ef5-897d81276642-kube-api-access-6gm6r\") pod \"dnsmasq-dns-57c957c4ff-xq2lz\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") " pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.773220 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-xq2lz\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") " pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.773361 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-xq2lz\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") " pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.876095 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-xq2lz\" (UID: 
\"e668e3d0-8126-4528-8ef5-897d81276642\") " pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.876191 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-config\") pod \"dnsmasq-dns-57c957c4ff-xq2lz\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") " pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.876222 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-xq2lz\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") " pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.876346 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gm6r\" (UniqueName: \"kubernetes.io/projected/e668e3d0-8126-4528-8ef5-897d81276642-kube-api-access-6gm6r\") pod \"dnsmasq-dns-57c957c4ff-xq2lz\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") " pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.876383 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-xq2lz\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") " pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.876523 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-xq2lz\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") " pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.879607 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-config\") pod \"dnsmasq-dns-57c957c4ff-xq2lz\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") " pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.879691 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-xq2lz\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") " pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.879854 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-xq2lz\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") " pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.879903 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-xq2lz\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") " pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" Sep 29 09:58:43 crc 
kubenswrapper[4991]: I0929 09:58:43.879929 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-xq2lz\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") " pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" Sep 29 09:58:43 crc kubenswrapper[4991]: I0929 09:58:43.899384 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gm6r\" (UniqueName: \"kubernetes.io/projected/e668e3d0-8126-4528-8ef5-897d81276642-kube-api-access-6gm6r\") pod \"dnsmasq-dns-57c957c4ff-xq2lz\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") " pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.011751 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.491606 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.494288 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.497538 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.497729 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.498228 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wkgtn" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.523487 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.594487 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2aae40bb-31de-4a1b-996f-c69336e8b2af-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") " pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.594880 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aae40bb-31de-4a1b-996f-c69336e8b2af-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") " pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.595055 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aae40bb-31de-4a1b-996f-c69336e8b2af-config-data\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") " pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.595193 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") " 
pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.595295 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aae40bb-31de-4a1b-996f-c69336e8b2af-logs\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") " pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.595508 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aae40bb-31de-4a1b-996f-c69336e8b2af-scripts\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") " pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.595784 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfr5h\" (UniqueName: \"kubernetes.io/projected/2aae40bb-31de-4a1b-996f-c69336e8b2af-kube-api-access-pfr5h\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") " pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.697898 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfr5h\" (UniqueName: \"kubernetes.io/projected/2aae40bb-31de-4a1b-996f-c69336e8b2af-kube-api-access-pfr5h\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") " pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.698002 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2aae40bb-31de-4a1b-996f-c69336e8b2af-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") " pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.698029 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aae40bb-31de-4a1b-996f-c69336e8b2af-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") " pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.698091 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aae40bb-31de-4a1b-996f-c69336e8b2af-config-data\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") " pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.698126 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") " pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.698154 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aae40bb-31de-4a1b-996f-c69336e8b2af-logs\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") " pod="openstack/glance-default-external-api-0" 
Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.702382 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aae40bb-31de-4a1b-996f-c69336e8b2af-scripts\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") " pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.704367 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aae40bb-31de-4a1b-996f-c69336e8b2af-logs\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") " pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.704650 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2aae40bb-31de-4a1b-996f-c69336e8b2af-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") " pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.705347 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.711608 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aae40bb-31de-4a1b-996f-c69336e8b2af-scripts\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") " pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.718375 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aae40bb-31de-4a1b-996f-c69336e8b2af-config-data\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") " pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.737814 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aae40bb-31de-4a1b-996f-c69336e8b2af-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") " pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.738829 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfr5h\" (UniqueName: \"kubernetes.io/projected/2aae40bb-31de-4a1b-996f-c69336e8b2af-kube-api-access-pfr5h\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") " pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.754347 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") " pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.776096 4991 kubelet.go:2421] "SyncLoop ADD" source="api" 
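The local PV above mounts in two steps: MountVolume.MountDevice reports the volume's global device mount path ("/mnt/openstack/pv05", straight from the entry above), and MountVolume.SetUp then makes it available under the pod's own volume directory. A small sketch of how the two paths relate; the per-pod layout is the usual kubelet convention and is an assumption here, not something printed in this journal:

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Global (per-node) mount reported by MountVolume.MountDevice above.
	globalMount := "/mnt/openstack/pv05"

	// Per-pod view assumed from the standard kubelet layout under --root-dir;
	// the pod UID is glance-default-external-api-0's UID from the log.
	kubeletRoot := "/var/lib/kubelet"
	podUID := "2aae40bb-31de-4a1b-996f-c69336e8b2af"
	pv := "local-storage05-crc"
	podMount := filepath.Join(kubeletRoot, "pods", podUID,
		"volumes", "kubernetes.io~local-volume", pv)

	// MountVolume.SetUp (bind-)mounts the global path at the per-pod path.
	fmt.Println(globalMount, "->", podMount)
}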
pods=["openstack/glance-default-internal-api-0"] Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.779204 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.787265 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.797502 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.831603 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.928877 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aadd89a-f76a-42be-a583-6750b4b52d58-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.929459 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aadd89a-f76a-42be-a583-6750b4b52d58-logs\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.929524 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.929588 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aadd89a-f76a-42be-a583-6750b4b52d58-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.929633 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0aadd89a-f76a-42be-a583-6750b4b52d58-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.930021 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aadd89a-f76a-42be-a583-6750b4b52d58-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:58:44 crc kubenswrapper[4991]: I0929 09:58:44.930206 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45p5r\" (UniqueName: \"kubernetes.io/projected/0aadd89a-f76a-42be-a583-6750b4b52d58-kube-api-access-45p5r\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " 
pod="openstack/glance-default-internal-api-0" Sep 29 09:58:45 crc kubenswrapper[4991]: I0929 09:58:45.034063 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aadd89a-f76a-42be-a583-6750b4b52d58-logs\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:58:45 crc kubenswrapper[4991]: I0929 09:58:45.034145 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:58:45 crc kubenswrapper[4991]: I0929 09:58:45.034176 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aadd89a-f76a-42be-a583-6750b4b52d58-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:58:45 crc kubenswrapper[4991]: I0929 09:58:45.034205 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0aadd89a-f76a-42be-a583-6750b4b52d58-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:58:45 crc kubenswrapper[4991]: I0929 09:58:45.034332 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aadd89a-f76a-42be-a583-6750b4b52d58-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:58:45 crc kubenswrapper[4991]: I0929 09:58:45.034387 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45p5r\" (UniqueName: \"kubernetes.io/projected/0aadd89a-f76a-42be-a583-6750b4b52d58-kube-api-access-45p5r\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:58:45 crc kubenswrapper[4991]: I0929 09:58:45.034451 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aadd89a-f76a-42be-a583-6750b4b52d58-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:58:45 crc kubenswrapper[4991]: I0929 09:58:45.036568 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Sep 29 09:58:45 crc kubenswrapper[4991]: I0929 09:58:45.037380 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aadd89a-f76a-42be-a583-6750b4b52d58-logs\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:58:45 crc kubenswrapper[4991]: I0929 09:58:45.037670 
4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0aadd89a-f76a-42be-a583-6750b4b52d58-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:58:45 crc kubenswrapper[4991]: I0929 09:58:45.042454 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Sep 29 09:58:45 crc kubenswrapper[4991]: I0929 09:58:45.042814 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aadd89a-f76a-42be-a583-6750b4b52d58-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:58:45 crc kubenswrapper[4991]: I0929 09:58:45.066274 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45p5r\" (UniqueName: \"kubernetes.io/projected/0aadd89a-f76a-42be-a583-6750b4b52d58-kube-api-access-45p5r\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:58:45 crc kubenswrapper[4991]: I0929 09:58:45.087304 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aadd89a-f76a-42be-a583-6750b4b52d58-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:58:45 crc kubenswrapper[4991]: I0929 09:58:45.087607 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aadd89a-f76a-42be-a583-6750b4b52d58-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:58:45 crc kubenswrapper[4991]: I0929 09:58:45.097644 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:58:45 crc kubenswrapper[4991]: I0929 09:58:45.192933 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 29 09:58:45 crc kubenswrapper[4991]: I0929 09:58:45.572346 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" podUID="3537cb6d-a346-4779-820c-262f5ae33b35" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: connect: connection refused" Sep 29 09:58:46 crc kubenswrapper[4991]: I0929 09:58:46.781472 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 29 09:58:46 crc kubenswrapper[4991]: I0929 09:58:46.863586 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 29 09:58:50 crc kubenswrapper[4991]: I0929 09:58:50.572104 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" podUID="3537cb6d-a346-4779-820c-262f5ae33b35" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: connect: connection refused" Sep 29 09:58:52 crc kubenswrapper[4991]: I0929 09:58:52.666872 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8559s" Sep 29 09:58:52 crc kubenswrapper[4991]: I0929 09:58:52.765052 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8559s" event={"ID":"19dc38ec-0933-47ed-8c1e-613d1d55d3d5","Type":"ContainerDied","Data":"02dfcf086fcd9ac404a764600d72730aa4102709e719f9daa5d7c1f14b86ee11"} Sep 29 09:58:52 crc kubenswrapper[4991]: I0929 09:58:52.765092 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02dfcf086fcd9ac404a764600d72730aa4102709e719f9daa5d7c1f14b86ee11" Sep 29 09:58:52 crc kubenswrapper[4991]: I0929 09:58:52.765093 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8559s" Sep 29 09:58:52 crc kubenswrapper[4991]: I0929 09:58:52.850747 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-combined-ca-bundle\") pod \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\" (UID: \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\") " Sep 29 09:58:52 crc kubenswrapper[4991]: I0929 09:58:52.850806 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmktv\" (UniqueName: \"kubernetes.io/projected/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-kube-api-access-nmktv\") pod \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\" (UID: \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\") " Sep 29 09:58:52 crc kubenswrapper[4991]: I0929 09:58:52.851021 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-scripts\") pod \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\" (UID: \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\") " Sep 29 09:58:52 crc kubenswrapper[4991]: I0929 09:58:52.851060 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-logs\") pod \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\" (UID: \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\") " Sep 29 09:58:52 crc kubenswrapper[4991]: I0929 09:58:52.851199 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-config-data\") pod \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\" (UID: \"19dc38ec-0933-47ed-8c1e-613d1d55d3d5\") " Sep 29 09:58:52 crc kubenswrapper[4991]: I0929 09:58:52.852514 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-logs" (OuterVolumeSpecName: "logs") pod "19dc38ec-0933-47ed-8c1e-613d1d55d3d5" (UID: "19dc38ec-0933-47ed-8c1e-613d1d55d3d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:58:52 crc kubenswrapper[4991]: I0929 09:58:52.859723 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-scripts" (OuterVolumeSpecName: "scripts") pod "19dc38ec-0933-47ed-8c1e-613d1d55d3d5" (UID: "19dc38ec-0933-47ed-8c1e-613d1d55d3d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:58:52 crc kubenswrapper[4991]: I0929 09:58:52.882760 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-kube-api-access-nmktv" (OuterVolumeSpecName: "kube-api-access-nmktv") pod "19dc38ec-0933-47ed-8c1e-613d1d55d3d5" (UID: "19dc38ec-0933-47ed-8c1e-613d1d55d3d5"). InnerVolumeSpecName "kube-api-access-nmktv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:58:52 crc kubenswrapper[4991]: I0929 09:58:52.897828 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-config-data" (OuterVolumeSpecName: "config-data") pod "19dc38ec-0933-47ed-8c1e-613d1d55d3d5" (UID: "19dc38ec-0933-47ed-8c1e-613d1d55d3d5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:58:52 crc kubenswrapper[4991]: I0929 09:58:52.913591 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19dc38ec-0933-47ed-8c1e-613d1d55d3d5" (UID: "19dc38ec-0933-47ed-8c1e-613d1d55d3d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:58:52 crc kubenswrapper[4991]: I0929 09:58:52.963662 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:52 crc kubenswrapper[4991]: I0929 09:58:52.963730 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:52 crc kubenswrapper[4991]: I0929 09:58:52.963872 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmktv\" (UniqueName: \"kubernetes.io/projected/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-kube-api-access-nmktv\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:52 crc kubenswrapper[4991]: I0929 09:58:52.963976 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:52 crc kubenswrapper[4991]: I0929 09:58:52.964017 4991 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19dc38ec-0933-47ed-8c1e-613d1d55d3d5-logs\") on node \"crc\" DevicePath \"\"" Sep 29 09:58:53 crc kubenswrapper[4991]: I0929 09:58:53.784986 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-78c57c9496-sh67d"] Sep 29 09:58:53 crc kubenswrapper[4991]: E0929 09:58:53.785727 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19dc38ec-0933-47ed-8c1e-613d1d55d3d5" containerName="placement-db-sync" Sep 29 09:58:53 crc kubenswrapper[4991]: I0929 09:58:53.785742 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="19dc38ec-0933-47ed-8c1e-613d1d55d3d5" containerName="placement-db-sync" Sep 29 09:58:53 crc kubenswrapper[4991]: I0929 09:58:53.786013 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="19dc38ec-0933-47ed-8c1e-613d1d55d3d5" containerName="placement-db-sync" Sep 29 09:58:53 crc kubenswrapper[4991]: I0929 09:58:53.787165 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:53 crc kubenswrapper[4991]: I0929 09:58:53.789900 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Sep 29 09:58:53 crc kubenswrapper[4991]: I0929 09:58:53.790148 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Sep 29 09:58:53 crc kubenswrapper[4991]: I0929 09:58:53.790204 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-kcnjd" Sep 29 09:58:53 crc kubenswrapper[4991]: I0929 09:58:53.790315 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Sep 29 09:58:53 crc kubenswrapper[4991]: I0929 09:58:53.794306 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Sep 29 09:58:53 crc kubenswrapper[4991]: I0929 09:58:53.828922 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78c57c9496-sh67d"] Sep 29 09:58:53 crc kubenswrapper[4991]: I0929 09:58:53.984638 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f154fb6-f887-4766-8374-5956d39b267c-scripts\") pod \"placement-78c57c9496-sh67d\" (UID: \"4f154fb6-f887-4766-8374-5956d39b267c\") " pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:53 crc kubenswrapper[4991]: I0929 09:58:53.984680 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f154fb6-f887-4766-8374-5956d39b267c-config-data\") pod \"placement-78c57c9496-sh67d\" (UID: \"4f154fb6-f887-4766-8374-5956d39b267c\") " pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:53 crc kubenswrapper[4991]: I0929 09:58:53.984726 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f154fb6-f887-4766-8374-5956d39b267c-combined-ca-bundle\") pod \"placement-78c57c9496-sh67d\" (UID: \"4f154fb6-f887-4766-8374-5956d39b267c\") " pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:53 crc kubenswrapper[4991]: I0929 09:58:53.984752 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f154fb6-f887-4766-8374-5956d39b267c-internal-tls-certs\") pod \"placement-78c57c9496-sh67d\" (UID: \"4f154fb6-f887-4766-8374-5956d39b267c\") " pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:53 crc kubenswrapper[4991]: I0929 09:58:53.984771 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp6p6\" (UniqueName: \"kubernetes.io/projected/4f154fb6-f887-4766-8374-5956d39b267c-kube-api-access-lp6p6\") pod \"placement-78c57c9496-sh67d\" (UID: \"4f154fb6-f887-4766-8374-5956d39b267c\") " pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:53 crc kubenswrapper[4991]: I0929 09:58:53.984826 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f154fb6-f887-4766-8374-5956d39b267c-public-tls-certs\") pod \"placement-78c57c9496-sh67d\" (UID: \"4f154fb6-f887-4766-8374-5956d39b267c\") " pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:53 crc kubenswrapper[4991]: I0929 
09:58:53.984885 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f154fb6-f887-4766-8374-5956d39b267c-logs\") pod \"placement-78c57c9496-sh67d\" (UID: \"4f154fb6-f887-4766-8374-5956d39b267c\") " pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:54 crc kubenswrapper[4991]: I0929 09:58:54.086616 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f154fb6-f887-4766-8374-5956d39b267c-public-tls-certs\") pod \"placement-78c57c9496-sh67d\" (UID: \"4f154fb6-f887-4766-8374-5956d39b267c\") " pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:54 crc kubenswrapper[4991]: I0929 09:58:54.086689 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f154fb6-f887-4766-8374-5956d39b267c-logs\") pod \"placement-78c57c9496-sh67d\" (UID: \"4f154fb6-f887-4766-8374-5956d39b267c\") " pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:54 crc kubenswrapper[4991]: I0929 09:58:54.086795 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f154fb6-f887-4766-8374-5956d39b267c-scripts\") pod \"placement-78c57c9496-sh67d\" (UID: \"4f154fb6-f887-4766-8374-5956d39b267c\") " pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:54 crc kubenswrapper[4991]: I0929 09:58:54.086830 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f154fb6-f887-4766-8374-5956d39b267c-config-data\") pod \"placement-78c57c9496-sh67d\" (UID: \"4f154fb6-f887-4766-8374-5956d39b267c\") " pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:54 crc kubenswrapper[4991]: I0929 09:58:54.086874 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f154fb6-f887-4766-8374-5956d39b267c-combined-ca-bundle\") pod \"placement-78c57c9496-sh67d\" (UID: \"4f154fb6-f887-4766-8374-5956d39b267c\") " pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:54 crc kubenswrapper[4991]: I0929 09:58:54.086894 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f154fb6-f887-4766-8374-5956d39b267c-internal-tls-certs\") pod \"placement-78c57c9496-sh67d\" (UID: \"4f154fb6-f887-4766-8374-5956d39b267c\") " pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:54 crc kubenswrapper[4991]: I0929 09:58:54.086909 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp6p6\" (UniqueName: \"kubernetes.io/projected/4f154fb6-f887-4766-8374-5956d39b267c-kube-api-access-lp6p6\") pod \"placement-78c57c9496-sh67d\" (UID: \"4f154fb6-f887-4766-8374-5956d39b267c\") " pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:54 crc kubenswrapper[4991]: I0929 09:58:54.115970 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp6p6\" (UniqueName: \"kubernetes.io/projected/4f154fb6-f887-4766-8374-5956d39b267c-kube-api-access-lp6p6\") pod \"placement-78c57c9496-sh67d\" (UID: \"4f154fb6-f887-4766-8374-5956d39b267c\") " pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:54 crc kubenswrapper[4991]: I0929 09:58:54.116562 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/4f154fb6-f887-4766-8374-5956d39b267c-logs\") pod \"placement-78c57c9496-sh67d\" (UID: \"4f154fb6-f887-4766-8374-5956d39b267c\") " pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:54 crc kubenswrapper[4991]: I0929 09:58:54.117184 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f154fb6-f887-4766-8374-5956d39b267c-internal-tls-certs\") pod \"placement-78c57c9496-sh67d\" (UID: \"4f154fb6-f887-4766-8374-5956d39b267c\") " pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:54 crc kubenswrapper[4991]: I0929 09:58:54.118906 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f154fb6-f887-4766-8374-5956d39b267c-public-tls-certs\") pod \"placement-78c57c9496-sh67d\" (UID: \"4f154fb6-f887-4766-8374-5956d39b267c\") " pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:54 crc kubenswrapper[4991]: I0929 09:58:54.118985 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f154fb6-f887-4766-8374-5956d39b267c-combined-ca-bundle\") pod \"placement-78c57c9496-sh67d\" (UID: \"4f154fb6-f887-4766-8374-5956d39b267c\") " pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:54 crc kubenswrapper[4991]: I0929 09:58:54.120219 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f154fb6-f887-4766-8374-5956d39b267c-scripts\") pod \"placement-78c57c9496-sh67d\" (UID: \"4f154fb6-f887-4766-8374-5956d39b267c\") " pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:54 crc kubenswrapper[4991]: I0929 09:58:54.120812 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f154fb6-f887-4766-8374-5956d39b267c-config-data\") pod \"placement-78c57c9496-sh67d\" (UID: \"4f154fb6-f887-4766-8374-5956d39b267c\") " pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:54 crc kubenswrapper[4991]: I0929 09:58:54.419998 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:58:56 crc kubenswrapper[4991]: I0929 09:58:56.810206 4991 generic.go:334] "Generic (PLEG): container finished" podID="ec575cb9-949d-4880-a804-dfc7cb8c7eb9" containerID="19c4a6214d409564d4f5a1352ae9712b631fca460bd8274e4217f7075ad8fdd7" exitCode=0 Sep 29 09:58:56 crc kubenswrapper[4991]: I0929 09:58:56.810320 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6tq6l" event={"ID":"ec575cb9-949d-4880-a804-dfc7cb8c7eb9","Type":"ContainerDied","Data":"19c4a6214d409564d4f5a1352ae9712b631fca460bd8274e4217f7075ad8fdd7"} Sep 29 09:59:00 crc kubenswrapper[4991]: I0929 09:59:00.571585 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" podUID="3537cb6d-a346-4779-820c-262f5ae33b35" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: i/o timeout" Sep 29 09:59:00 crc kubenswrapper[4991]: I0929 09:59:00.572613 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" Sep 29 09:59:03 crc kubenswrapper[4991]: E0929 09:59:03.459221 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Sep 29 09:59:03 crc kubenswrapper[4991]: E0929 09:59:03.459933 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldrl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-6ddmh_openstack(2ed2429f-e06f-4c9f-9c92-84203d8073c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 29 09:59:03 crc kubenswrapper[4991]: E0929 09:59:03.461111 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-6ddmh" podUID="2ed2429f-e06f-4c9f-9c92-84203d8073c1" Sep 29 09:59:03 crc kubenswrapper[4991]: E0929 09:59:03.776980 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Sep 29 09:59:03 crc kubenswrapper[4991]: E0929 09:59:03.777405 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dxdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-zhvvg_openstack(429e29da-7361-452f-8a5e-f42633c6d4b9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 29 09:59:03 crc kubenswrapper[4991]: E0929 09:59:03.778647 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-zhvvg" podUID="429e29da-7361-452f-8a5e-f42633c6d4b9" Sep 29 09:59:03 crc kubenswrapper[4991]: E0929 09:59:03.884154 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-zhvvg" podUID="429e29da-7361-452f-8a5e-f42633c6d4b9" Sep 29 09:59:03 crc kubenswrapper[4991]: E0929 09:59:03.885432 4991 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-6ddmh" podUID="2ed2429f-e06f-4c9f-9c92-84203d8073c1" Sep 29 09:59:04 crc kubenswrapper[4991]: E0929 09:59:04.207516 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Sep 29 09:59:04 crc kubenswrapper[4991]: E0929 09:59:04.207792 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d6h56bh58h5f4h685hbbh5dhbbh684h6fh5c6h55ch678hdfh68bh65h695hb7h98h57fh5f4h54ch565h79hcfh66h688h58h57bh5d9hb6h594q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8h9k2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(386f2f3b-cfa3-4376-a958-8905da139c79): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.392447 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.398894 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6tq6l" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.448912 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-ovsdbserver-sb\") pod \"3537cb6d-a346-4779-820c-262f5ae33b35\" (UID: \"3537cb6d-a346-4779-820c-262f5ae33b35\") " Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.449013 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-config\") pod \"3537cb6d-a346-4779-820c-262f5ae33b35\" (UID: \"3537cb6d-a346-4779-820c-262f5ae33b35\") " Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.449044 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-ovsdbserver-nb\") pod \"3537cb6d-a346-4779-820c-262f5ae33b35\" (UID: \"3537cb6d-a346-4779-820c-262f5ae33b35\") " Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.449108 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d862j\" (UniqueName: \"kubernetes.io/projected/ec575cb9-949d-4880-a804-dfc7cb8c7eb9-kube-api-access-d862j\") pod \"ec575cb9-949d-4880-a804-dfc7cb8c7eb9\" (UID: \"ec575cb9-949d-4880-a804-dfc7cb8c7eb9\") " Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.449170 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps8v5\" (UniqueName: \"kubernetes.io/projected/3537cb6d-a346-4779-820c-262f5ae33b35-kube-api-access-ps8v5\") pod \"3537cb6d-a346-4779-820c-262f5ae33b35\" (UID: \"3537cb6d-a346-4779-820c-262f5ae33b35\") " Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.449214 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-dns-svc\") pod \"3537cb6d-a346-4779-820c-262f5ae33b35\" (UID: \"3537cb6d-a346-4779-820c-262f5ae33b35\") " Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.449270 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec575cb9-949d-4880-a804-dfc7cb8c7eb9-config\") pod \"ec575cb9-949d-4880-a804-dfc7cb8c7eb9\" (UID: \"ec575cb9-949d-4880-a804-dfc7cb8c7eb9\") " Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.449300 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec575cb9-949d-4880-a804-dfc7cb8c7eb9-combined-ca-bundle\") pod \"ec575cb9-949d-4880-a804-dfc7cb8c7eb9\" (UID: \"ec575cb9-949d-4880-a804-dfc7cb8c7eb9\") " Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.454339 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec575cb9-949d-4880-a804-dfc7cb8c7eb9-kube-api-access-d862j" (OuterVolumeSpecName: "kube-api-access-d862j") pod "ec575cb9-949d-4880-a804-dfc7cb8c7eb9" (UID: "ec575cb9-949d-4880-a804-dfc7cb8c7eb9"). InnerVolumeSpecName "kube-api-access-d862j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.454413 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3537cb6d-a346-4779-820c-262f5ae33b35-kube-api-access-ps8v5" (OuterVolumeSpecName: "kube-api-access-ps8v5") pod "3537cb6d-a346-4779-820c-262f5ae33b35" (UID: "3537cb6d-a346-4779-820c-262f5ae33b35"). InnerVolumeSpecName "kube-api-access-ps8v5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.479996 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec575cb9-949d-4880-a804-dfc7cb8c7eb9-config" (OuterVolumeSpecName: "config") pod "ec575cb9-949d-4880-a804-dfc7cb8c7eb9" (UID: "ec575cb9-949d-4880-a804-dfc7cb8c7eb9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.500753 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec575cb9-949d-4880-a804-dfc7cb8c7eb9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec575cb9-949d-4880-a804-dfc7cb8c7eb9" (UID: "ec575cb9-949d-4880-a804-dfc7cb8c7eb9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.530942 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-config" (OuterVolumeSpecName: "config") pod "3537cb6d-a346-4779-820c-262f5ae33b35" (UID: "3537cb6d-a346-4779-820c-262f5ae33b35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.531094 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3537cb6d-a346-4779-820c-262f5ae33b35" (UID: "3537cb6d-a346-4779-820c-262f5ae33b35"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.548383 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3537cb6d-a346-4779-820c-262f5ae33b35" (UID: "3537cb6d-a346-4779-820c-262f5ae33b35"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.549785 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3537cb6d-a346-4779-820c-262f5ae33b35" (UID: "3537cb6d-a346-4779-820c-262f5ae33b35"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.551541 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d862j\" (UniqueName: \"kubernetes.io/projected/ec575cb9-949d-4880-a804-dfc7cb8c7eb9-kube-api-access-d862j\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.551583 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps8v5\" (UniqueName: \"kubernetes.io/projected/3537cb6d-a346-4779-820c-262f5ae33b35-kube-api-access-ps8v5\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.551595 4991 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.551606 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec575cb9-949d-4880-a804-dfc7cb8c7eb9-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.551616 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec575cb9-949d-4880-a804-dfc7cb8c7eb9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.551629 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.551641 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.551652 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3537cb6d-a346-4779-820c-262f5ae33b35-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.896423 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" event={"ID":"3537cb6d-a346-4779-820c-262f5ae33b35","Type":"ContainerDied","Data":"61e9c9d8a189d39500e795fe2cde8595bed33c4efaff37008831c31131c37d64"} Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.896490 4991 scope.go:117] "RemoveContainer" containerID="4032990bc691747e5a559241e27500181846fc8842f1ef005515844990e7c6b0" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.896517 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.899307 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6tq6l" event={"ID":"ec575cb9-949d-4880-a804-dfc7cb8c7eb9","Type":"ContainerDied","Data":"f71062c333bc9bf6aed5038162992083e76cafc315d08c412e55702751e77f05"} Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.899340 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f71062c333bc9bf6aed5038162992083e76cafc315d08c412e55702751e77f05" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.899366 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6tq6l" Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.963712 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-8qbbc"] Sep 29 09:59:04 crc kubenswrapper[4991]: I0929 09:59:04.969369 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-8qbbc"] Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.572578 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-8qbbc" podUID="3537cb6d-a346-4779-820c-262f5ae33b35" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: i/o timeout" Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.685555 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-xq2lz"] Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.742309 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-gwbq8"] Sep 29 09:59:05 crc kubenswrapper[4991]: E0929 09:59:05.742840 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec575cb9-949d-4880-a804-dfc7cb8c7eb9" containerName="neutron-db-sync" Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.742866 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec575cb9-949d-4880-a804-dfc7cb8c7eb9" containerName="neutron-db-sync" Sep 29 09:59:05 crc kubenswrapper[4991]: E0929 09:59:05.742913 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3537cb6d-a346-4779-820c-262f5ae33b35" containerName="init" Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.742924 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="3537cb6d-a346-4779-820c-262f5ae33b35" containerName="init" Sep 29 09:59:05 crc kubenswrapper[4991]: E0929 09:59:05.742970 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3537cb6d-a346-4779-820c-262f5ae33b35" containerName="dnsmasq-dns" Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.742980 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="3537cb6d-a346-4779-820c-262f5ae33b35" containerName="dnsmasq-dns" Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.743259 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="3537cb6d-a346-4779-820c-262f5ae33b35" containerName="dnsmasq-dns" Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.743288 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec575cb9-949d-4880-a804-dfc7cb8c7eb9" containerName="neutron-db-sync" Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.744762 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8" Sep 29 09:59:05 crc kubenswrapper[4991]: E0929 09:59:05.752784 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Sep 29 09:59:05 crc kubenswrapper[4991]: E0929 09:59:05.753015 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8mbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-d5lhk_openstack(5146a80a-6cba-46a8-85b8-1fb0dd8304cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 29 09:59:05 crc kubenswrapper[4991]: E0929 09:59:05.754191 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-d5lhk" podUID="5146a80a-6cba-46a8-85b8-1fb0dd8304cd" Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.784746 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-gwbq8"] Sep 
29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.834796 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-67bcc55b76-swg9q"]
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.837828 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67bcc55b76-swg9q"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.840540 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.840940 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.841206 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.841340 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-2xv2l"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.880152 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-config\") pod \"dnsmasq-dns-5ccc5c4795-gwbq8\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.880619 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-gwbq8\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.880828 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-gwbq8\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.881010 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhzcn\" (UniqueName: \"kubernetes.io/projected/e7db6403-2f05-4d5f-a563-9afc14bbde8e-kube-api-access-nhzcn\") pod \"dnsmasq-dns-5ccc5c4795-gwbq8\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.881072 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-gwbq8\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.881151 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-gwbq8\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.914462 4991 scope.go:117] "RemoveContainer" containerID="74534548f596ca46489bf61abe8d939e48497411ced989caa776d7ac74954d33"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.956423 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67bcc55b76-swg9q"]
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.984807 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-config\") pod \"neutron-67bcc55b76-swg9q\" (UID: \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\") " pod="openstack/neutron-67bcc55b76-swg9q"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.984862 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhzcn\" (UniqueName: \"kubernetes.io/projected/e7db6403-2f05-4d5f-a563-9afc14bbde8e-kube-api-access-nhzcn\") pod \"dnsmasq-dns-5ccc5c4795-gwbq8\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.984898 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-gwbq8\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.984941 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-gwbq8\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.985000 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-combined-ca-bundle\") pod \"neutron-67bcc55b76-swg9q\" (UID: \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\") " pod="openstack/neutron-67bcc55b76-swg9q"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.985027 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-config\") pod \"dnsmasq-dns-5ccc5c4795-gwbq8\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.985063 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-ovndb-tls-certs\") pod \"neutron-67bcc55b76-swg9q\" (UID: \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\") " pod="openstack/neutron-67bcc55b76-swg9q"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.985117 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-gwbq8\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.985143 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-httpd-config\") pod \"neutron-67bcc55b76-swg9q\" (UID: \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\") " pod="openstack/neutron-67bcc55b76-swg9q"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.985239 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n2r5\" (UniqueName: \"kubernetes.io/projected/eab57318-1d9e-4b78-afe1-84d1ef4793c3-kube-api-access-2n2r5\") pod \"neutron-67bcc55b76-swg9q\" (UID: \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\") " pod="openstack/neutron-67bcc55b76-swg9q"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.985282 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-gwbq8\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.987014 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-config\") pod \"dnsmasq-dns-5ccc5c4795-gwbq8\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.987847 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-gwbq8\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.988616 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-gwbq8\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.988781 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-gwbq8\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8"
Sep 29 09:59:05 crc kubenswrapper[4991]: I0929 09:59:05.989100 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-gwbq8\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8"
Sep 29 09:59:06 crc kubenswrapper[4991]: I0929 09:59:06.061003 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhzcn\" (UniqueName: \"kubernetes.io/projected/e7db6403-2f05-4d5f-a563-9afc14bbde8e-kube-api-access-nhzcn\") pod \"dnsmasq-dns-5ccc5c4795-gwbq8\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8"
Sep 29 09:59:06 crc kubenswrapper[4991]: I0929 09:59:06.088114 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n2r5\" (UniqueName: \"kubernetes.io/projected/eab57318-1d9e-4b78-afe1-84d1ef4793c3-kube-api-access-2n2r5\") pod \"neutron-67bcc55b76-swg9q\" (UID: \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\") " pod="openstack/neutron-67bcc55b76-swg9q"
Sep 29 09:59:06 crc kubenswrapper[4991]: I0929 09:59:06.088888 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-config\") pod \"neutron-67bcc55b76-swg9q\" (UID: \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\") " pod="openstack/neutron-67bcc55b76-swg9q"
Sep 29 09:59:06 crc kubenswrapper[4991]: I0929 09:59:06.089083 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-combined-ca-bundle\") pod \"neutron-67bcc55b76-swg9q\" (UID: \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\") " pod="openstack/neutron-67bcc55b76-swg9q"
Sep 29 09:59:06 crc kubenswrapper[4991]: I0929 09:59:06.089226 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-ovndb-tls-certs\") pod \"neutron-67bcc55b76-swg9q\" (UID: \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\") " pod="openstack/neutron-67bcc55b76-swg9q"
Sep 29 09:59:06 crc kubenswrapper[4991]: I0929 09:59:06.089383 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-httpd-config\") pod \"neutron-67bcc55b76-swg9q\" (UID: \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\") " pod="openstack/neutron-67bcc55b76-swg9q"
Sep 29 09:59:06 crc kubenswrapper[4991]: I0929 09:59:06.098943 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-config\") pod \"neutron-67bcc55b76-swg9q\" (UID: \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\") " pod="openstack/neutron-67bcc55b76-swg9q"
Sep 29 09:59:06 crc kubenswrapper[4991]: E0929 09:59:06.128278 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-d5lhk" podUID="5146a80a-6cba-46a8-85b8-1fb0dd8304cd"
Sep 29 09:59:06 crc kubenswrapper[4991]: I0929 09:59:06.136257 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-combined-ca-bundle\") pod \"neutron-67bcc55b76-swg9q\" (UID: \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\") " pod="openstack/neutron-67bcc55b76-swg9q"
Sep 29 09:59:06 crc kubenswrapper[4991]: I0929 09:59:06.148393 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-ovndb-tls-certs\") pod \"neutron-67bcc55b76-swg9q\" (UID: \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\") " pod="openstack/neutron-67bcc55b76-swg9q"
Sep 29 09:59:06 crc kubenswrapper[4991]: I0929 09:59:06.149046 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-httpd-config\") pod \"neutron-67bcc55b76-swg9q\" (UID: \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\") " pod="openstack/neutron-67bcc55b76-swg9q"
Sep 29 09:59:06 crc kubenswrapper[4991]: I0929 09:59:06.162039 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n2r5\" (UniqueName: \"kubernetes.io/projected/eab57318-1d9e-4b78-afe1-84d1ef4793c3-kube-api-access-2n2r5\") pod \"neutron-67bcc55b76-swg9q\" (UID: \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\") " pod="openstack/neutron-67bcc55b76-swg9q"
Sep 29 09:59:06 crc kubenswrapper[4991]: I0929 09:59:06.436534 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8"
Sep 29 09:59:06 crc kubenswrapper[4991]: I0929 09:59:06.453329 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67bcc55b76-swg9q"
Sep 29 09:59:06 crc kubenswrapper[4991]: I0929 09:59:06.964376 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3537cb6d-a346-4779-820c-262f5ae33b35" path="/var/lib/kubelet/pods/3537cb6d-a346-4779-820c-262f5ae33b35/volumes"
Sep 29 09:59:06 crc kubenswrapper[4991]: I0929 09:59:06.966570 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jx6z5"]
Sep 29 09:59:07 crc kubenswrapper[4991]: I0929 09:59:07.101335 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-xq2lz"]
Sep 29 09:59:07 crc kubenswrapper[4991]: I0929 09:59:07.117492 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"02856571-708d-4e22-a2cb-9b411d5a12c0","Type":"ContainerStarted","Data":"e9e141d9831e2558c4b1d523da2617ef2995263cb01cc6c466cbada9feb0100d"}
Sep 29 09:59:07 crc kubenswrapper[4991]: I0929 09:59:07.132029 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jx6z5" event={"ID":"4880dbaa-8b65-4f99-929f-e9613339d1d9","Type":"ContainerStarted","Data":"5c4e9ccb95f3f2b50a4bca8ecbf2831496a232f4dd40ccb641db8ba6d985a940"}
Sep 29 09:59:07 crc kubenswrapper[4991]: I0929 09:59:07.184074 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=7.351110387 podStartE2EDuration="34.184046909s" podCreationTimestamp="2025-09-29 09:58:33 +0000 UTC" firstStartedPulling="2025-09-29 09:58:37.386245213 +0000 UTC m=+1253.242173251" lastFinishedPulling="2025-09-29 09:59:04.219181745 +0000 UTC m=+1280.075109773" observedRunningTime="2025-09-29 09:59:07.140637606 +0000 UTC m=+1282.996565664" watchObservedRunningTime="2025-09-29 09:59:07.184046909 +0000 UTC m=+1283.039974967"
Sep 29 09:59:07 crc kubenswrapper[4991]: I0929 09:59:07.200458 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Sep 29 09:59:07 crc kubenswrapper[4991]: I0929 09:59:07.566803 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78c57c9496-sh67d"]
Sep 29 09:59:07 crc kubenswrapper[4991]: I0929 09:59:07.584884 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-gwbq8"]
Sep 29 09:59:07 crc kubenswrapper[4991]: I0929 09:59:07.731091 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67bcc55b76-swg9q"]
Sep 29 09:59:07 crc kubenswrapper[4991]: I0929 09:59:07.773412 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.214030 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0aadd89a-f76a-42be-a583-6750b4b52d58","Type":"ContainerStarted","Data":"ca34df704f9f837e31c1386748c14bebc865c877a8b5e8ff0d34a0c57758a335"}
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.235398 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jx6z5" event={"ID":"4880dbaa-8b65-4f99-929f-e9613339d1d9","Type":"ContainerStarted","Data":"9854062f100819136737bca7e8e6c6df9e5789c59fa5cef1d55d0eb9d4ee6a4c"}
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.241503 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78c57c9496-sh67d" event={"ID":"4f154fb6-f887-4766-8374-5956d39b267c","Type":"ContainerStarted","Data":"e0c0da6ecb6d3124d509f871383ffb65ecb1f37aed30a3d2734975c661b1963c"}
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.250473 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" event={"ID":"e668e3d0-8126-4528-8ef5-897d81276642","Type":"ContainerStarted","Data":"6289a39f2f1162d9aa4318894b7cfbac33832fc2185594ce09b43359445cef8a"}
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.263839 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8" event={"ID":"e7db6403-2f05-4d5f-a563-9afc14bbde8e","Type":"ContainerStarted","Data":"8995a6f3d0dfb25c4d83f76dfdf4caf0d4c1e1ab4afa19b9981914d5da1b7513"}
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.264333 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jx6z5" podStartSLOduration=30.264320592 podStartE2EDuration="30.264320592s" podCreationTimestamp="2025-09-29 09:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:59:08.261751014 +0000 UTC m=+1284.117679042" watchObservedRunningTime="2025-09-29 09:59:08.264320592 +0000 UTC m=+1284.120248620"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.287206 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2aae40bb-31de-4a1b-996f-c69336e8b2af","Type":"ContainerStarted","Data":"4a5584822b7789f5d9733e886a1e9b1101a30f615d086b84eb86706438c9ba60"}
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.363481 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67bcc55b76-swg9q" event={"ID":"eab57318-1d9e-4b78-afe1-84d1ef4793c3","Type":"ContainerStarted","Data":"e4bc131ed73bf9acc3e470c0b49cef0b14aae397d1c0a22c40f1f7cd5283a70e"}
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.431380 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-797678c8fc-75zg9"]
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.436707 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.442151 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.442507 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.451097 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-797678c8fc-75zg9"]
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.477925 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cf4d4464-ee15-4786-beaf-b4fa9d320834-httpd-config\") pod \"neutron-797678c8fc-75zg9\" (UID: \"cf4d4464-ee15-4786-beaf-b4fa9d320834\") " pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.478045 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf4d4464-ee15-4786-beaf-b4fa9d320834-public-tls-certs\") pod \"neutron-797678c8fc-75zg9\" (UID: \"cf4d4464-ee15-4786-beaf-b4fa9d320834\") " pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.478092 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4d4464-ee15-4786-beaf-b4fa9d320834-combined-ca-bundle\") pod \"neutron-797678c8fc-75zg9\" (UID: \"cf4d4464-ee15-4786-beaf-b4fa9d320834\") " pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.478112 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf4d4464-ee15-4786-beaf-b4fa9d320834-config\") pod \"neutron-797678c8fc-75zg9\" (UID: \"cf4d4464-ee15-4786-beaf-b4fa9d320834\") " pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.478139 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx78w\" (UniqueName: \"kubernetes.io/projected/cf4d4464-ee15-4786-beaf-b4fa9d320834-kube-api-access-lx78w\") pod \"neutron-797678c8fc-75zg9\" (UID: \"cf4d4464-ee15-4786-beaf-b4fa9d320834\") " pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.478195 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf4d4464-ee15-4786-beaf-b4fa9d320834-ovndb-tls-certs\") pod \"neutron-797678c8fc-75zg9\" (UID: \"cf4d4464-ee15-4786-beaf-b4fa9d320834\") " pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.478311 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf4d4464-ee15-4786-beaf-b4fa9d320834-internal-tls-certs\") pod \"neutron-797678c8fc-75zg9\" (UID: \"cf4d4464-ee15-4786-beaf-b4fa9d320834\") " pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.581143 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf4d4464-ee15-4786-beaf-b4fa9d320834-ovndb-tls-certs\") pod \"neutron-797678c8fc-75zg9\" (UID: \"cf4d4464-ee15-4786-beaf-b4fa9d320834\") " pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.581245 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf4d4464-ee15-4786-beaf-b4fa9d320834-internal-tls-certs\") pod \"neutron-797678c8fc-75zg9\" (UID: \"cf4d4464-ee15-4786-beaf-b4fa9d320834\") " pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.581306 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cf4d4464-ee15-4786-beaf-b4fa9d320834-httpd-config\") pod \"neutron-797678c8fc-75zg9\" (UID: \"cf4d4464-ee15-4786-beaf-b4fa9d320834\") " pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.581353 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf4d4464-ee15-4786-beaf-b4fa9d320834-public-tls-certs\") pod \"neutron-797678c8fc-75zg9\" (UID: \"cf4d4464-ee15-4786-beaf-b4fa9d320834\") " pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.581386 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4d4464-ee15-4786-beaf-b4fa9d320834-combined-ca-bundle\") pod \"neutron-797678c8fc-75zg9\" (UID: \"cf4d4464-ee15-4786-beaf-b4fa9d320834\") " pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.581402 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf4d4464-ee15-4786-beaf-b4fa9d320834-config\") pod \"neutron-797678c8fc-75zg9\" (UID: \"cf4d4464-ee15-4786-beaf-b4fa9d320834\") " pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.581425 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx78w\" (UniqueName: \"kubernetes.io/projected/cf4d4464-ee15-4786-beaf-b4fa9d320834-kube-api-access-lx78w\") pod \"neutron-797678c8fc-75zg9\" (UID: \"cf4d4464-ee15-4786-beaf-b4fa9d320834\") " pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.609937 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4d4464-ee15-4786-beaf-b4fa9d320834-combined-ca-bundle\") pod \"neutron-797678c8fc-75zg9\" (UID: \"cf4d4464-ee15-4786-beaf-b4fa9d320834\") " pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.610116 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cf4d4464-ee15-4786-beaf-b4fa9d320834-httpd-config\") pod \"neutron-797678c8fc-75zg9\" (UID: \"cf4d4464-ee15-4786-beaf-b4fa9d320834\") " pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.610599 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf4d4464-ee15-4786-beaf-b4fa9d320834-public-tls-certs\") pod \"neutron-797678c8fc-75zg9\" (UID: \"cf4d4464-ee15-4786-beaf-b4fa9d320834\") " pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.611078 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf4d4464-ee15-4786-beaf-b4fa9d320834-internal-tls-certs\") pod \"neutron-797678c8fc-75zg9\" (UID: \"cf4d4464-ee15-4786-beaf-b4fa9d320834\") " pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.614654 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf4d4464-ee15-4786-beaf-b4fa9d320834-ovndb-tls-certs\") pod \"neutron-797678c8fc-75zg9\" (UID: \"cf4d4464-ee15-4786-beaf-b4fa9d320834\") " pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.616752 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf4d4464-ee15-4786-beaf-b4fa9d320834-config\") pod \"neutron-797678c8fc-75zg9\" (UID: \"cf4d4464-ee15-4786-beaf-b4fa9d320834\") " pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.620389 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx78w\" (UniqueName: \"kubernetes.io/projected/cf4d4464-ee15-4786-beaf-b4fa9d320834-kube-api-access-lx78w\") pod \"neutron-797678c8fc-75zg9\" (UID: \"cf4d4464-ee15-4786-beaf-b4fa9d320834\") " pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:08 crc kubenswrapper[4991]: I0929 09:59:08.802550 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:09 crc kubenswrapper[4991]: I0929 09:59:09.399891 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2aae40bb-31de-4a1b-996f-c69336e8b2af","Type":"ContainerStarted","Data":"8756ec52a0ce4985ac10615c59ec8034f7735c9d7d1a15594ab881dd4b3b1b20"}
Sep 29 09:59:09 crc kubenswrapper[4991]: I0929 09:59:09.410822 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67bcc55b76-swg9q" event={"ID":"eab57318-1d9e-4b78-afe1-84d1ef4793c3","Type":"ContainerStarted","Data":"a8d2dd4d6d7e6b9ec5b1a3646d49889e05ae2c59d6a7a85a54a3075d0e3e9986"}
Sep 29 09:59:09 crc kubenswrapper[4991]: I0929 09:59:09.410862 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67bcc55b76-swg9q" event={"ID":"eab57318-1d9e-4b78-afe1-84d1ef4793c3","Type":"ContainerStarted","Data":"b12a9e171d230ee46aeb063882764b4414a5684f9ea5e4eb91d7bdebd3ae4d71"}
Sep 29 09:59:09 crc kubenswrapper[4991]: I0929 09:59:09.411248 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-67bcc55b76-swg9q"
Sep 29 09:59:09 crc kubenswrapper[4991]: I0929 09:59:09.417296 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0aadd89a-f76a-42be-a583-6750b4b52d58","Type":"ContainerStarted","Data":"9df3b465d510b9982309ca84528c774125124f8778ac971714eec2ea50aa6867"}
Sep 29 09:59:09 crc kubenswrapper[4991]: I0929 09:59:09.420127 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"386f2f3b-cfa3-4376-a958-8905da139c79","Type":"ContainerStarted","Data":"023284e6ebf093ce5f781fa5fda61d38852f5b727420b579a7818e3b1f2bcb5e"}
Sep 29 09:59:09 crc kubenswrapper[4991]: I0929 09:59:09.424681 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78c57c9496-sh67d" event={"ID":"4f154fb6-f887-4766-8374-5956d39b267c","Type":"ContainerStarted","Data":"68513cb37284afa9b4aa874eef623365a1687b6acd51c51d7f0cd41d4124b9d8"}
Sep 29 09:59:09 crc kubenswrapper[4991]: I0929 09:59:09.424712 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78c57c9496-sh67d" event={"ID":"4f154fb6-f887-4766-8374-5956d39b267c","Type":"ContainerStarted","Data":"30faba001828b1ba9f792eea4efcca4079533e1d2380315ad8fe9e4d2617854d"}
Sep 29 09:59:09 crc kubenswrapper[4991]: I0929 09:59:09.425510 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-78c57c9496-sh67d"
Sep 29 09:59:09 crc kubenswrapper[4991]: I0929 09:59:09.425549 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-78c57c9496-sh67d"
Sep 29 09:59:09 crc kubenswrapper[4991]: I0929 09:59:09.443315 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-67bcc55b76-swg9q" podStartSLOduration=4.443294971 podStartE2EDuration="4.443294971s" podCreationTimestamp="2025-09-29 09:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:59:09.43869484 +0000 UTC m=+1285.294622868" watchObservedRunningTime="2025-09-29 09:59:09.443294971 +0000 UTC m=+1285.299222999"
Sep 29 09:59:09 crc kubenswrapper[4991]: I0929 09:59:09.455463 4991 generic.go:334] "Generic (PLEG): container finished" podID="e668e3d0-8126-4528-8ef5-897d81276642" containerID="1c226f01305a130981479d03e1b3d7ae6ae6c0e8f89e9f5fa2ed8242378a90c1" exitCode=0
Sep 29 09:59:09 crc kubenswrapper[4991]: I0929 09:59:09.455668 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" event={"ID":"e668e3d0-8126-4528-8ef5-897d81276642","Type":"ContainerDied","Data":"1c226f01305a130981479d03e1b3d7ae6ae6c0e8f89e9f5fa2ed8242378a90c1"}
Sep 29 09:59:09 crc kubenswrapper[4991]: I0929 09:59:09.470179 4991 generic.go:334] "Generic (PLEG): container finished" podID="e7db6403-2f05-4d5f-a563-9afc14bbde8e" containerID="54ca1fdb917d71fd0c5a1382d0efeb71dd34845ffd5e248afa7cbdd9b9db1ce9" exitCode=0
Sep 29 09:59:09 crc kubenswrapper[4991]: I0929 09:59:09.470722 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8" event={"ID":"e7db6403-2f05-4d5f-a563-9afc14bbde8e","Type":"ContainerDied","Data":"54ca1fdb917d71fd0c5a1382d0efeb71dd34845ffd5e248afa7cbdd9b9db1ce9"}
Sep 29 09:59:09 crc kubenswrapper[4991]: I0929 09:59:09.480770 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-78c57c9496-sh67d" podStartSLOduration=16.480751397 podStartE2EDuration="16.480751397s" podCreationTimestamp="2025-09-29 09:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:59:09.470355663 +0000 UTC m=+1285.326283701" watchObservedRunningTime="2025-09-29 09:59:09.480751397 +0000 UTC m=+1285.336679415"
Sep 29 09:59:09 crc kubenswrapper[4991]: I0929 09:59:09.585253 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-797678c8fc-75zg9"]
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.276040 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz"
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.336083 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gm6r\" (UniqueName: \"kubernetes.io/projected/e668e3d0-8126-4528-8ef5-897d81276642-kube-api-access-6gm6r\") pod \"e668e3d0-8126-4528-8ef5-897d81276642\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") "
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.343527 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e668e3d0-8126-4528-8ef5-897d81276642-kube-api-access-6gm6r" (OuterVolumeSpecName: "kube-api-access-6gm6r") pod "e668e3d0-8126-4528-8ef5-897d81276642" (UID: "e668e3d0-8126-4528-8ef5-897d81276642"). InnerVolumeSpecName "kube-api-access-6gm6r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.437255 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-config\") pod \"e668e3d0-8126-4528-8ef5-897d81276642\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") "
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.437580 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-ovsdbserver-sb\") pod \"e668e3d0-8126-4528-8ef5-897d81276642\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") "
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.437756 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-ovsdbserver-nb\") pod \"e668e3d0-8126-4528-8ef5-897d81276642\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") "
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.437825 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-dns-svc\") pod \"e668e3d0-8126-4528-8ef5-897d81276642\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") "
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.437848 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-dns-swift-storage-0\") pod \"e668e3d0-8126-4528-8ef5-897d81276642\" (UID: \"e668e3d0-8126-4528-8ef5-897d81276642\") "
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.481341 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gm6r\" (UniqueName: \"kubernetes.io/projected/e668e3d0-8126-4528-8ef5-897d81276642-kube-api-access-6gm6r\") on node \"crc\" DevicePath \"\""
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.513784 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e668e3d0-8126-4528-8ef5-897d81276642" (UID: "e668e3d0-8126-4528-8ef5-897d81276642"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.521463 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz" event={"ID":"e668e3d0-8126-4528-8ef5-897d81276642","Type":"ContainerDied","Data":"6289a39f2f1162d9aa4318894b7cfbac33832fc2185594ce09b43359445cef8a"}
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.521515 4991 scope.go:117] "RemoveContainer" containerID="1c226f01305a130981479d03e1b3d7ae6ae6c0e8f89e9f5fa2ed8242378a90c1"
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.521545 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-xq2lz"
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.522493 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e668e3d0-8126-4528-8ef5-897d81276642" (UID: "e668e3d0-8126-4528-8ef5-897d81276642"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.528544 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-config" (OuterVolumeSpecName: "config") pod "e668e3d0-8126-4528-8ef5-897d81276642" (UID: "e668e3d0-8126-4528-8ef5-897d81276642"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.532876 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8" event={"ID":"e7db6403-2f05-4d5f-a563-9afc14bbde8e","Type":"ContainerStarted","Data":"b3e321ecddd3ebfecb1243b55e74159d3ae5f86bdf564f1cb485fa91a9cefb96"}
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.532986 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8"
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.538339 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e668e3d0-8126-4528-8ef5-897d81276642" (UID: "e668e3d0-8126-4528-8ef5-897d81276642"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.539261 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e668e3d0-8126-4528-8ef5-897d81276642" (UID: "e668e3d0-8126-4528-8ef5-897d81276642"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.542683 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-797678c8fc-75zg9" event={"ID":"cf4d4464-ee15-4786-beaf-b4fa9d320834","Type":"ContainerStarted","Data":"d327a2907eaf7b36228b27ea6324c4de87f8ea80a4fa7e985dbdf37b4b7c3019"}
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.542732 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-797678c8fc-75zg9" event={"ID":"cf4d4464-ee15-4786-beaf-b4fa9d320834","Type":"ContainerStarted","Data":"650d5d3b8bf23c7c81974ae9f3831ee7a667a252fc18ecaa44f3aa4a0764508d"}
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.565170 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2aae40bb-31de-4a1b-996f-c69336e8b2af","Type":"ContainerStarted","Data":"6ad52661cdbd0bad6894a84e61a6838c6c18326754a9d71a0ec85c8234416955"}
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.565332 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2aae40bb-31de-4a1b-996f-c69336e8b2af" containerName="glance-log" containerID="cri-o://8756ec52a0ce4985ac10615c59ec8034f7735c9d7d1a15594ab881dd4b3b1b20" gracePeriod=30
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.565801 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2aae40bb-31de-4a1b-996f-c69336e8b2af" containerName="glance-httpd" containerID="cri-o://6ad52661cdbd0bad6894a84e61a6838c6c18326754a9d71a0ec85c8234416955" gracePeriod=30
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.581428 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8" podStartSLOduration=5.581408386 podStartE2EDuration="5.581408386s" podCreationTimestamp="2025-09-29 09:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:59:10.556887571 +0000 UTC m=+1286.412815599" watchObservedRunningTime="2025-09-29 09:59:10.581408386 +0000 UTC m=+1286.437336414"
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.588278 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0aadd89a-f76a-42be-a583-6750b4b52d58" containerName="glance-log" containerID="cri-o://9df3b465d510b9982309ca84528c774125124f8778ac971714eec2ea50aa6867" gracePeriod=30
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.588394 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0aadd89a-f76a-42be-a583-6750b4b52d58","Type":"ContainerStarted","Data":"4d631da1de24f840073a41b4c34b617eb54b71d6d07b3d0fae5c6d5de206799c"}
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.590007 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0aadd89a-f76a-42be-a583-6750b4b52d58" containerName="glance-httpd" containerID="cri-o://4d631da1de24f840073a41b4c34b617eb54b71d6d07b3d0fae5c6d5de206799c" gracePeriod=30
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.590405 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.590431 4991 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-dns-svc\") on node \"crc\" DevicePath \"\""
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.590439 4991 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.590448 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-config\") on node \"crc\" DevicePath \"\""
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.590455 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e668e3d0-8126-4528-8ef5-897d81276642-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.591940 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=27.591930132999998 podStartE2EDuration="27.591930133s" podCreationTimestamp="2025-09-29 09:58:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:59:10.584734674 +0000 UTC m=+1286.440662702" watchObservedRunningTime="2025-09-29 09:59:10.591930133 +0000 UTC m=+1286.447858161"
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.882907 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=27.882886491 podStartE2EDuration="27.882886491s" podCreationTimestamp="2025-09-29 09:58:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:59:10.612890445 +0000 UTC m=+1286.468818473" watchObservedRunningTime="2025-09-29 09:59:10.882886491 +0000 UTC m=+1286.738814519"
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.949227 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-xq2lz"]
Sep 29 09:59:10 crc kubenswrapper[4991]: I0929 09:59:10.951662 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-xq2lz"]
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.496607 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.630734 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aae40bb-31de-4a1b-996f-c69336e8b2af-logs\") pod \"2aae40bb-31de-4a1b-996f-c69336e8b2af\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") "
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.631134 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aae40bb-31de-4a1b-996f-c69336e8b2af-scripts\") pod \"2aae40bb-31de-4a1b-996f-c69336e8b2af\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") "
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.631219 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"2aae40bb-31de-4a1b-996f-c69336e8b2af\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") "
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.631295 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2aae40bb-31de-4a1b-996f-c69336e8b2af-httpd-run\") pod \"2aae40bb-31de-4a1b-996f-c69336e8b2af\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") "
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.631322 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aae40bb-31de-4a1b-996f-c69336e8b2af-config-data\") pod \"2aae40bb-31de-4a1b-996f-c69336e8b2af\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") "
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.631374 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aae40bb-31de-4a1b-996f-c69336e8b2af-combined-ca-bundle\") pod \"2aae40bb-31de-4a1b-996f-c69336e8b2af\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") "
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.631404 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfr5h\" (UniqueName: \"kubernetes.io/projected/2aae40bb-31de-4a1b-996f-c69336e8b2af-kube-api-access-pfr5h\") pod \"2aae40bb-31de-4a1b-996f-c69336e8b2af\" (UID: \"2aae40bb-31de-4a1b-996f-c69336e8b2af\") "
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.633430 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aae40bb-31de-4a1b-996f-c69336e8b2af-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2aae40bb-31de-4a1b-996f-c69336e8b2af" (UID: "2aae40bb-31de-4a1b-996f-c69336e8b2af"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.636718 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aae40bb-31de-4a1b-996f-c69336e8b2af-logs" (OuterVolumeSpecName: "logs") pod "2aae40bb-31de-4a1b-996f-c69336e8b2af" (UID: "2aae40bb-31de-4a1b-996f-c69336e8b2af"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.653199 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "2aae40bb-31de-4a1b-996f-c69336e8b2af" (UID: "2aae40bb-31de-4a1b-996f-c69336e8b2af"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.653206 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aae40bb-31de-4a1b-996f-c69336e8b2af-kube-api-access-pfr5h" (OuterVolumeSpecName: "kube-api-access-pfr5h") pod "2aae40bb-31de-4a1b-996f-c69336e8b2af" (UID: "2aae40bb-31de-4a1b-996f-c69336e8b2af"). InnerVolumeSpecName "kube-api-access-pfr5h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.653483 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aae40bb-31de-4a1b-996f-c69336e8b2af-scripts" (OuterVolumeSpecName: "scripts") pod "2aae40bb-31de-4a1b-996f-c69336e8b2af" (UID: "2aae40bb-31de-4a1b-996f-c69336e8b2af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.672395 4991 generic.go:334] "Generic (PLEG): container finished" podID="0aadd89a-f76a-42be-a583-6750b4b52d58" containerID="4d631da1de24f840073a41b4c34b617eb54b71d6d07b3d0fae5c6d5de206799c" exitCode=0
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.672434 4991 generic.go:334] "Generic (PLEG): container finished" podID="0aadd89a-f76a-42be-a583-6750b4b52d58" containerID="9df3b465d510b9982309ca84528c774125124f8778ac971714eec2ea50aa6867" exitCode=143
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.672488 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0aadd89a-f76a-42be-a583-6750b4b52d58","Type":"ContainerDied","Data":"4d631da1de24f840073a41b4c34b617eb54b71d6d07b3d0fae5c6d5de206799c"}
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.672519 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0aadd89a-f76a-42be-a583-6750b4b52d58","Type":"ContainerDied","Data":"9df3b465d510b9982309ca84528c774125124f8778ac971714eec2ea50aa6867"}
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.686350 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aae40bb-31de-4a1b-996f-c69336e8b2af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2aae40bb-31de-4a1b-996f-c69336e8b2af" (UID: "2aae40bb-31de-4a1b-996f-c69336e8b2af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.729038 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-797678c8fc-75zg9" event={"ID":"cf4d4464-ee15-4786-beaf-b4fa9d320834","Type":"ContainerStarted","Data":"1c2ca86319e67e91a7441068a9e7e8b5792bf2b74cdd65df2eab9f180aa0a364"}
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.729287 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-797678c8fc-75zg9"
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.740033 4991 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2aae40bb-31de-4a1b-996f-c69336e8b2af-httpd-run\") on node \"crc\" DevicePath \"\""
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.740060 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aae40bb-31de-4a1b-996f-c69336e8b2af-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.740073 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfr5h\" (UniqueName: \"kubernetes.io/projected/2aae40bb-31de-4a1b-996f-c69336e8b2af-kube-api-access-pfr5h\") on node \"crc\" DevicePath \"\""
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.740084 4991 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aae40bb-31de-4a1b-996f-c69336e8b2af-logs\") on node \"crc\" DevicePath \"\""
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.740092 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aae40bb-31de-4a1b-996f-c69336e8b2af-scripts\") on node \"crc\" DevicePath \"\""
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.740112 4991 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.742844 4991 generic.go:334] "Generic (PLEG): container finished" podID="2aae40bb-31de-4a1b-996f-c69336e8b2af" containerID="6ad52661cdbd0bad6894a84e61a6838c6c18326754a9d71a0ec85c8234416955" exitCode=143
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.742887 4991 generic.go:334] "Generic (PLEG): container finished" podID="2aae40bb-31de-4a1b-996f-c69336e8b2af" containerID="8756ec52a0ce4985ac10615c59ec8034f7735c9d7d1a15594ab881dd4b3b1b20" exitCode=143
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.743292 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.743888 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2aae40bb-31de-4a1b-996f-c69336e8b2af","Type":"ContainerDied","Data":"6ad52661cdbd0bad6894a84e61a6838c6c18326754a9d71a0ec85c8234416955"}
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.743921 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2aae40bb-31de-4a1b-996f-c69336e8b2af","Type":"ContainerDied","Data":"8756ec52a0ce4985ac10615c59ec8034f7735c9d7d1a15594ab881dd4b3b1b20"}
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.743941 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2aae40bb-31de-4a1b-996f-c69336e8b2af","Type":"ContainerDied","Data":"4a5584822b7789f5d9733e886a1e9b1101a30f615d086b84eb86706438c9ba60"}
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.743977 4991 scope.go:117] "RemoveContainer" containerID="6ad52661cdbd0bad6894a84e61a6838c6c18326754a9d71a0ec85c8234416955"
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.779045 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-797678c8fc-75zg9" podStartSLOduration=3.779019587 podStartE2EDuration="3.779019587s" podCreationTimestamp="2025-09-29 09:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:59:11.766557209 +0000 UTC m=+1287.622485257" watchObservedRunningTime="2025-09-29 09:59:11.779019587 +0000 UTC m=+1287.634947615"
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.808045 4991 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.825437 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aae40bb-31de-4a1b-996f-c69336e8b2af-config-data" (OuterVolumeSpecName: "config-data") pod "2aae40bb-31de-4a1b-996f-c69336e8b2af" (UID: "2aae40bb-31de-4a1b-996f-c69336e8b2af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.842150 4991 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\""
Sep 29 09:59:11 crc kubenswrapper[4991]: I0929 09:59:11.842188 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aae40bb-31de-4a1b-996f-c69336e8b2af-config-data\") on node \"crc\" DevicePath \"\""
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.111360 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.132070 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.157540 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Sep 29 09:59:12 crc kubenswrapper[4991]: E0929 09:59:12.158032 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aae40bb-31de-4a1b-996f-c69336e8b2af" containerName="glance-log"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.158048 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aae40bb-31de-4a1b-996f-c69336e8b2af" containerName="glance-log"
Sep 29 09:59:12 crc kubenswrapper[4991]: E0929 09:59:12.158065 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e668e3d0-8126-4528-8ef5-897d81276642" containerName="init"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.158071 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="e668e3d0-8126-4528-8ef5-897d81276642" containerName="init"
Sep 29 09:59:12 crc kubenswrapper[4991]: E0929 09:59:12.158105 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aae40bb-31de-4a1b-996f-c69336e8b2af" containerName="glance-httpd"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.158112 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aae40bb-31de-4a1b-996f-c69336e8b2af" containerName="glance-httpd"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.158310 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="e668e3d0-8126-4528-8ef5-897d81276642" containerName="init"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.158338 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aae40bb-31de-4a1b-996f-c69336e8b2af" containerName="glance-log"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.158350 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aae40bb-31de-4a1b-996f-c69336e8b2af" containerName="glance-httpd"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.159442 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.162379 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.162932 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.168506 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.255513 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.255912 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qq78\" (UniqueName: \"kubernetes.io/projected/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-kube-api-access-9qq78\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.255959 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-config-data\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.256109 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.256284 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.256392 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-logs\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.256484 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-scripts\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.256736 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.352233 4991 scope.go:117] "RemoveContainer" containerID="8756ec52a0ce4985ac10615c59ec8034f7735c9d7d1a15594ab881dd4b3b1b20"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.359575 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.359869 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-logs\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.360111 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-scripts\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.360360 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.360498 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.361050 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-logs\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.360891 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qq78\" (UniqueName: \"kubernetes.io/projected/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-kube-api-access-9qq78\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.361094 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.361121 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-config-data\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.361225 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.361553 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.365488 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.367304 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.368269 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-config-data\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.380755 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-scripts\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.382451 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qq78\" (UniqueName: \"kubernetes.io/projected/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-kube-api-access-9qq78\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.446038 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " pod="openstack/glance-default-external-api-0"
Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.476276 4991 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.512711 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.525218 4991 scope.go:117] "RemoveContainer" containerID="6ad52661cdbd0bad6894a84e61a6838c6c18326754a9d71a0ec85c8234416955" Sep 29 09:59:12 crc kubenswrapper[4991]: E0929 09:59:12.525680 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ad52661cdbd0bad6894a84e61a6838c6c18326754a9d71a0ec85c8234416955\": container with ID starting with 6ad52661cdbd0bad6894a84e61a6838c6c18326754a9d71a0ec85c8234416955 not found: ID does not exist" containerID="6ad52661cdbd0bad6894a84e61a6838c6c18326754a9d71a0ec85c8234416955" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.525711 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ad52661cdbd0bad6894a84e61a6838c6c18326754a9d71a0ec85c8234416955"} err="failed to get container status \"6ad52661cdbd0bad6894a84e61a6838c6c18326754a9d71a0ec85c8234416955\": rpc error: code = NotFound desc = could not find container \"6ad52661cdbd0bad6894a84e61a6838c6c18326754a9d71a0ec85c8234416955\": container with ID starting with 6ad52661cdbd0bad6894a84e61a6838c6c18326754a9d71a0ec85c8234416955 not found: ID does not exist" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.525731 4991 scope.go:117] "RemoveContainer" containerID="8756ec52a0ce4985ac10615c59ec8034f7735c9d7d1a15594ab881dd4b3b1b20" Sep 29 09:59:12 crc kubenswrapper[4991]: E0929 09:59:12.526109 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8756ec52a0ce4985ac10615c59ec8034f7735c9d7d1a15594ab881dd4b3b1b20\": container with ID starting with 8756ec52a0ce4985ac10615c59ec8034f7735c9d7d1a15594ab881dd4b3b1b20 not found: ID does not exist" containerID="8756ec52a0ce4985ac10615c59ec8034f7735c9d7d1a15594ab881dd4b3b1b20" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.526131 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8756ec52a0ce4985ac10615c59ec8034f7735c9d7d1a15594ab881dd4b3b1b20"} err="failed to get container status \"8756ec52a0ce4985ac10615c59ec8034f7735c9d7d1a15594ab881dd4b3b1b20\": rpc error: code = NotFound desc = could not find container \"8756ec52a0ce4985ac10615c59ec8034f7735c9d7d1a15594ab881dd4b3b1b20\": container with ID starting with 8756ec52a0ce4985ac10615c59ec8034f7735c9d7d1a15594ab881dd4b3b1b20 not found: ID does not exist" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.526145 4991 scope.go:117] "RemoveContainer" containerID="6ad52661cdbd0bad6894a84e61a6838c6c18326754a9d71a0ec85c8234416955" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.526413 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ad52661cdbd0bad6894a84e61a6838c6c18326754a9d71a0ec85c8234416955"} err="failed to get container status \"6ad52661cdbd0bad6894a84e61a6838c6c18326754a9d71a0ec85c8234416955\": rpc error: code = NotFound desc = could not find container \"6ad52661cdbd0bad6894a84e61a6838c6c18326754a9d71a0ec85c8234416955\": container with ID starting with 6ad52661cdbd0bad6894a84e61a6838c6c18326754a9d71a0ec85c8234416955 not found: ID does not exist" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 
09:59:12.526429 4991 scope.go:117] "RemoveContainer" containerID="8756ec52a0ce4985ac10615c59ec8034f7735c9d7d1a15594ab881dd4b3b1b20" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.526697 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8756ec52a0ce4985ac10615c59ec8034f7735c9d7d1a15594ab881dd4b3b1b20"} err="failed to get container status \"8756ec52a0ce4985ac10615c59ec8034f7735c9d7d1a15594ab881dd4b3b1b20\": rpc error: code = NotFound desc = could not find container \"8756ec52a0ce4985ac10615c59ec8034f7735c9d7d1a15594ab881dd4b3b1b20\": container with ID starting with 8756ec52a0ce4985ac10615c59ec8034f7735c9d7d1a15594ab881dd4b3b1b20 not found: ID does not exist" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.567278 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aadd89a-f76a-42be-a583-6750b4b52d58-logs\") pod \"0aadd89a-f76a-42be-a583-6750b4b52d58\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.567607 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aadd89a-f76a-42be-a583-6750b4b52d58-config-data\") pod \"0aadd89a-f76a-42be-a583-6750b4b52d58\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.567719 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0aadd89a-f76a-42be-a583-6750b4b52d58-logs" (OuterVolumeSpecName: "logs") pod "0aadd89a-f76a-42be-a583-6750b4b52d58" (UID: "0aadd89a-f76a-42be-a583-6750b4b52d58"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.567739 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aadd89a-f76a-42be-a583-6750b4b52d58-scripts\") pod \"0aadd89a-f76a-42be-a583-6750b4b52d58\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.567807 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"0aadd89a-f76a-42be-a583-6750b4b52d58\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.568059 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0aadd89a-f76a-42be-a583-6750b4b52d58-httpd-run\") pod \"0aadd89a-f76a-42be-a583-6750b4b52d58\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.568114 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45p5r\" (UniqueName: \"kubernetes.io/projected/0aadd89a-f76a-42be-a583-6750b4b52d58-kube-api-access-45p5r\") pod \"0aadd89a-f76a-42be-a583-6750b4b52d58\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.568158 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aadd89a-f76a-42be-a583-6750b4b52d58-combined-ca-bundle\") pod \"0aadd89a-f76a-42be-a583-6750b4b52d58\" (UID: \"0aadd89a-f76a-42be-a583-6750b4b52d58\") " Sep 29 
09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.568710 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0aadd89a-f76a-42be-a583-6750b4b52d58-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0aadd89a-f76a-42be-a583-6750b4b52d58" (UID: "0aadd89a-f76a-42be-a583-6750b4b52d58"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.576238 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "0aadd89a-f76a-42be-a583-6750b4b52d58" (UID: "0aadd89a-f76a-42be-a583-6750b4b52d58"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.576649 4991 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aadd89a-f76a-42be-a583-6750b4b52d58-logs\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.576700 4991 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.576711 4991 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0aadd89a-f76a-42be-a583-6750b4b52d58-httpd-run\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.591522 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aadd89a-f76a-42be-a583-6750b4b52d58-kube-api-access-45p5r" (OuterVolumeSpecName: "kube-api-access-45p5r") pod "0aadd89a-f76a-42be-a583-6750b4b52d58" (UID: "0aadd89a-f76a-42be-a583-6750b4b52d58"). InnerVolumeSpecName "kube-api-access-45p5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.594669 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aadd89a-f76a-42be-a583-6750b4b52d58-scripts" (OuterVolumeSpecName: "scripts") pod "0aadd89a-f76a-42be-a583-6750b4b52d58" (UID: "0aadd89a-f76a-42be-a583-6750b4b52d58"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.617107 4991 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.635078 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aadd89a-f76a-42be-a583-6750b4b52d58-config-data" (OuterVolumeSpecName: "config-data") pod "0aadd89a-f76a-42be-a583-6750b4b52d58" (UID: "0aadd89a-f76a-42be-a583-6750b4b52d58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.664492 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aadd89a-f76a-42be-a583-6750b4b52d58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0aadd89a-f76a-42be-a583-6750b4b52d58" (UID: "0aadd89a-f76a-42be-a583-6750b4b52d58"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.678893 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aadd89a-f76a-42be-a583-6750b4b52d58-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.678939 4991 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.678970 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45p5r\" (UniqueName: \"kubernetes.io/projected/0aadd89a-f76a-42be-a583-6750b4b52d58-kube-api-access-45p5r\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.678987 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aadd89a-f76a-42be-a583-6750b4b52d58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.679000 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aadd89a-f76a-42be-a583-6750b4b52d58-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.760686 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0aadd89a-f76a-42be-a583-6750b4b52d58","Type":"ContainerDied","Data":"ca34df704f9f837e31c1386748c14bebc865c877a8b5e8ff0d34a0c57758a335"} Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.760731 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.760742 4991 scope.go:117] "RemoveContainer" containerID="4d631da1de24f840073a41b4c34b617eb54b71d6d07b3d0fae5c6d5de206799c" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.833545 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.846456 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.855128 4991 scope.go:117] "RemoveContainer" containerID="9df3b465d510b9982309ca84528c774125124f8778ac971714eec2ea50aa6867" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.863367 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 29 09:59:12 crc kubenswrapper[4991]: E0929 09:59:12.864068 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aadd89a-f76a-42be-a583-6750b4b52d58" containerName="glance-httpd" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.864086 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aadd89a-f76a-42be-a583-6750b4b52d58" containerName="glance-httpd" Sep 29 09:59:12 crc kubenswrapper[4991]: E0929 09:59:12.864108 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aadd89a-f76a-42be-a583-6750b4b52d58" containerName="glance-log" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.864114 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aadd89a-f76a-42be-a583-6750b4b52d58" containerName="glance-log" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.864303 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aadd89a-f76a-42be-a583-6750b4b52d58" containerName="glance-httpd" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.864333 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aadd89a-f76a-42be-a583-6750b4b52d58" containerName="glance-log" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.876850 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.879377 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.879519 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.892478 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.971791 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aadd89a-f76a-42be-a583-6750b4b52d58" path="/var/lib/kubelet/pods/0aadd89a-f76a-42be-a583-6750b4b52d58/volumes" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.979423 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aae40bb-31de-4a1b-996f-c69336e8b2af" path="/var/lib/kubelet/pods/2aae40bb-31de-4a1b-996f-c69336e8b2af/volumes" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.984188 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e668e3d0-8126-4528-8ef5-897d81276642" path="/var/lib/kubelet/pods/e668e3d0-8126-4528-8ef5-897d81276642/volumes" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.985489 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-config-data\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.985599 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.985640 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-scripts\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.985677 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.985715 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.985765 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp7kf\" (UniqueName: 
\"kubernetes.io/projected/75e87bd6-4abf-47cb-81f1-8e691471aa97-kube-api-access-zp7kf\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.985802 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75e87bd6-4abf-47cb-81f1-8e691471aa97-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:12 crc kubenswrapper[4991]: I0929 09:59:12.986090 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75e87bd6-4abf-47cb-81f1-8e691471aa97-logs\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:13 crc kubenswrapper[4991]: I0929 09:59:13.088664 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75e87bd6-4abf-47cb-81f1-8e691471aa97-logs\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:13 crc kubenswrapper[4991]: I0929 09:59:13.088792 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-config-data\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:13 crc kubenswrapper[4991]: I0929 09:59:13.088854 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:13 crc kubenswrapper[4991]: I0929 09:59:13.088884 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-scripts\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:13 crc kubenswrapper[4991]: I0929 09:59:13.088910 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:13 crc kubenswrapper[4991]: I0929 09:59:13.088936 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:13 crc kubenswrapper[4991]: I0929 09:59:13.089372 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Sep 29 09:59:13 crc kubenswrapper[4991]: I0929 09:59:13.089496 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75e87bd6-4abf-47cb-81f1-8e691471aa97-logs\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:13 crc kubenswrapper[4991]: I0929 09:59:13.089621 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp7kf\" (UniqueName: \"kubernetes.io/projected/75e87bd6-4abf-47cb-81f1-8e691471aa97-kube-api-access-zp7kf\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:13 crc kubenswrapper[4991]: I0929 09:59:13.089713 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75e87bd6-4abf-47cb-81f1-8e691471aa97-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:13 crc kubenswrapper[4991]: I0929 09:59:13.090205 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75e87bd6-4abf-47cb-81f1-8e691471aa97-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:13 crc kubenswrapper[4991]: I0929 09:59:13.095498 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:13 crc kubenswrapper[4991]: I0929 09:59:13.096825 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:13 crc kubenswrapper[4991]: I0929 09:59:13.104972 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-config-data\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:13 crc kubenswrapper[4991]: I0929 09:59:13.108003 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-scripts\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:13 crc kubenswrapper[4991]: I0929 09:59:13.119911 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp7kf\" (UniqueName: \"kubernetes.io/projected/75e87bd6-4abf-47cb-81f1-8e691471aa97-kube-api-access-zp7kf\") pod \"glance-default-internal-api-0\" (UID: 
\"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:13 crc kubenswrapper[4991]: I0929 09:59:13.125809 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 29 09:59:13 crc kubenswrapper[4991]: I0929 09:59:13.148204 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " pod="openstack/glance-default-internal-api-0" Sep 29 09:59:13 crc kubenswrapper[4991]: I0929 09:59:13.201237 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 29 09:59:14 crc kubenswrapper[4991]: W0929 09:59:14.470553 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31b1dc43_5df1_49fb_aeeb_3f31a64f95bf.slice/crio-ad0bd582fc88427d73f9c699a3b4f7b9889b5f3cd01326c86b198c91e6e5b99d WatchSource:0}: Error finding container ad0bd582fc88427d73f9c699a3b4f7b9889b5f3cd01326c86b198c91e6e5b99d: Status 404 returned error can't find the container with id ad0bd582fc88427d73f9c699a3b4f7b9889b5f3cd01326c86b198c91e6e5b99d Sep 29 09:59:14 crc kubenswrapper[4991]: I0929 09:59:14.798144 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf","Type":"ContainerStarted","Data":"ad0bd582fc88427d73f9c699a3b4f7b9889b5f3cd01326c86b198c91e6e5b99d"} Sep 29 09:59:14 crc kubenswrapper[4991]: I0929 09:59:14.800019 4991 generic.go:334] "Generic (PLEG): container finished" podID="4880dbaa-8b65-4f99-929f-e9613339d1d9" containerID="9854062f100819136737bca7e8e6c6df9e5789c59fa5cef1d55d0eb9d4ee6a4c" exitCode=0 Sep 29 09:59:14 crc kubenswrapper[4991]: I0929 09:59:14.800071 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jx6z5" event={"ID":"4880dbaa-8b65-4f99-929f-e9613339d1d9","Type":"ContainerDied","Data":"9854062f100819136737bca7e8e6c6df9e5789c59fa5cef1d55d0eb9d4ee6a4c"} Sep 29 09:59:16 crc kubenswrapper[4991]: I0929 09:59:16.438123 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8" Sep 29 09:59:16 crc kubenswrapper[4991]: I0929 09:59:16.528331 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-9ndtq"] Sep 29 09:59:16 crc kubenswrapper[4991]: I0929 09:59:16.528531 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" podUID="73447648-89a4-40e2-8645-279058b94921" containerName="dnsmasq-dns" containerID="cri-o://e052251e6dcfdf766f57a89ab122fb953d8cada922330e1913712c805b9cd1ad" gracePeriod=10 Sep 29 09:59:16 crc kubenswrapper[4991]: I0929 09:59:16.839912 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jx6z5" Sep 29 09:59:16 crc kubenswrapper[4991]: I0929 09:59:16.884734 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jx6z5" event={"ID":"4880dbaa-8b65-4f99-929f-e9613339d1d9","Type":"ContainerDied","Data":"5c4e9ccb95f3f2b50a4bca8ecbf2831496a232f4dd40ccb641db8ba6d985a940"} Sep 29 09:59:16 crc kubenswrapper[4991]: I0929 09:59:16.884769 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c4e9ccb95f3f2b50a4bca8ecbf2831496a232f4dd40ccb641db8ba6d985a940" Sep 29 09:59:16 crc kubenswrapper[4991]: I0929 09:59:16.884808 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jx6z5" Sep 29 09:59:16 crc kubenswrapper[4991]: I0929 09:59:16.889523 4991 generic.go:334] "Generic (PLEG): container finished" podID="73447648-89a4-40e2-8645-279058b94921" containerID="e052251e6dcfdf766f57a89ab122fb953d8cada922330e1913712c805b9cd1ad" exitCode=0 Sep 29 09:59:16 crc kubenswrapper[4991]: I0929 09:59:16.889554 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" event={"ID":"73447648-89a4-40e2-8645-279058b94921","Type":"ContainerDied","Data":"e052251e6dcfdf766f57a89ab122fb953d8cada922330e1913712c805b9cd1ad"} Sep 29 09:59:16 crc kubenswrapper[4991]: I0929 09:59:16.897120 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz4dd\" (UniqueName: \"kubernetes.io/projected/4880dbaa-8b65-4f99-929f-e9613339d1d9-kube-api-access-mz4dd\") pod \"4880dbaa-8b65-4f99-929f-e9613339d1d9\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " Sep 29 09:59:16 crc kubenswrapper[4991]: I0929 09:59:16.897181 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-scripts\") pod \"4880dbaa-8b65-4f99-929f-e9613339d1d9\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " Sep 29 09:59:16 crc kubenswrapper[4991]: I0929 09:59:16.897210 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-fernet-keys\") pod \"4880dbaa-8b65-4f99-929f-e9613339d1d9\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " Sep 29 09:59:16 crc kubenswrapper[4991]: I0929 09:59:16.897300 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-combined-ca-bundle\") pod \"4880dbaa-8b65-4f99-929f-e9613339d1d9\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " Sep 29 09:59:16 crc kubenswrapper[4991]: I0929 09:59:16.897359 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-credential-keys\") pod \"4880dbaa-8b65-4f99-929f-e9613339d1d9\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " Sep 29 09:59:16 crc kubenswrapper[4991]: I0929 09:59:16.897496 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-config-data\") pod \"4880dbaa-8b65-4f99-929f-e9613339d1d9\" (UID: \"4880dbaa-8b65-4f99-929f-e9613339d1d9\") " Sep 29 09:59:16 crc kubenswrapper[4991]: I0929 09:59:16.906234 4991 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-scripts" (OuterVolumeSpecName: "scripts") pod "4880dbaa-8b65-4f99-929f-e9613339d1d9" (UID: "4880dbaa-8b65-4f99-929f-e9613339d1d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:16 crc kubenswrapper[4991]: I0929 09:59:16.907559 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4880dbaa-8b65-4f99-929f-e9613339d1d9-kube-api-access-mz4dd" (OuterVolumeSpecName: "kube-api-access-mz4dd") pod "4880dbaa-8b65-4f99-929f-e9613339d1d9" (UID: "4880dbaa-8b65-4f99-929f-e9613339d1d9"). InnerVolumeSpecName "kube-api-access-mz4dd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:59:16 crc kubenswrapper[4991]: I0929 09:59:16.910517 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4880dbaa-8b65-4f99-929f-e9613339d1d9" (UID: "4880dbaa-8b65-4f99-929f-e9613339d1d9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:16 crc kubenswrapper[4991]: I0929 09:59:16.911150 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4880dbaa-8b65-4f99-929f-e9613339d1d9" (UID: "4880dbaa-8b65-4f99-929f-e9613339d1d9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:16 crc kubenswrapper[4991]: I0929 09:59:16.962555 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4880dbaa-8b65-4f99-929f-e9613339d1d9" (UID: "4880dbaa-8b65-4f99-929f-e9613339d1d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:16 crc kubenswrapper[4991]: I0929 09:59:16.988272 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-config-data" (OuterVolumeSpecName: "config-data") pod "4880dbaa-8b65-4f99-929f-e9613339d1d9" (UID: "4880dbaa-8b65-4f99-929f-e9613339d1d9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.000324 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.000354 4991 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-fernet-keys\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.000365 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.000375 4991 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-credential-keys\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.000383 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4880dbaa-8b65-4f99-929f-e9613339d1d9-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.000391 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz4dd\" (UniqueName: \"kubernetes.io/projected/4880dbaa-8b65-4f99-929f-e9613339d1d9-kube-api-access-mz4dd\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.363726 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.373223 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.407121 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-ovsdbserver-sb\") pod \"73447648-89a4-40e2-8645-279058b94921\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.407616 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq79l\" (UniqueName: \"kubernetes.io/projected/73447648-89a4-40e2-8645-279058b94921-kube-api-access-sq79l\") pod \"73447648-89a4-40e2-8645-279058b94921\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.407833 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-ovsdbserver-nb\") pod \"73447648-89a4-40e2-8645-279058b94921\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.407941 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-dns-svc\") pod \"73447648-89a4-40e2-8645-279058b94921\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.408038 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-config\") pod \"73447648-89a4-40e2-8645-279058b94921\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.408110 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-dns-swift-storage-0\") pod \"73447648-89a4-40e2-8645-279058b94921\" (UID: \"73447648-89a4-40e2-8645-279058b94921\") " Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.415266 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73447648-89a4-40e2-8645-279058b94921-kube-api-access-sq79l" (OuterVolumeSpecName: "kube-api-access-sq79l") pod "73447648-89a4-40e2-8645-279058b94921" (UID: "73447648-89a4-40e2-8645-279058b94921"). InnerVolumeSpecName "kube-api-access-sq79l". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.512430 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "73447648-89a4-40e2-8645-279058b94921" (UID: "73447648-89a4-40e2-8645-279058b94921"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.525103 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.525151 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq79l\" (UniqueName: \"kubernetes.io/projected/73447648-89a4-40e2-8645-279058b94921-kube-api-access-sq79l\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.549713 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "73447648-89a4-40e2-8645-279058b94921" (UID: "73447648-89a4-40e2-8645-279058b94921"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.585652 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "73447648-89a4-40e2-8645-279058b94921" (UID: "73447648-89a4-40e2-8645-279058b94921"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.586543 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "73447648-89a4-40e2-8645-279058b94921" (UID: "73447648-89a4-40e2-8645-279058b94921"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.620461 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-config" (OuterVolumeSpecName: "config") pod "73447648-89a4-40e2-8645-279058b94921" (UID: "73447648-89a4-40e2-8645-279058b94921"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.628097 4991 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.628214 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.628229 4991 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.628243 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73447648-89a4-40e2-8645-279058b94921-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.909441 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf","Type":"ContainerStarted","Data":"426b586cfc07b8c9136f4451cc76e43f50158eccfbe1d6d84c9b989b2bf35d9e"} Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.911961 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"386f2f3b-cfa3-4376-a958-8905da139c79","Type":"ContainerStarted","Data":"4b40a6b965b59dfc13d333c7114fad60012f84b46aed1e9995e8620cda3a04f6"} Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.914606 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75e87bd6-4abf-47cb-81f1-8e691471aa97","Type":"ContainerStarted","Data":"fab0189d285af21120d8c57ac2070f133cd88d399fda0aa96fa4a36459fbdc16"} Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.929815 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" event={"ID":"73447648-89a4-40e2-8645-279058b94921","Type":"ContainerDied","Data":"58542e18307be88f909cfc7a3eff7870d7500a62821c5e3c0117060169458afc"} Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.930383 4991 scope.go:117] "RemoveContainer" containerID="e052251e6dcfdf766f57a89ab122fb953d8cada922330e1913712c805b9cd1ad" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.930491 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-9ndtq" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.954139 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-74c8bd59d-2nxsz"] Sep 29 09:59:17 crc kubenswrapper[4991]: E0929 09:59:17.956702 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73447648-89a4-40e2-8645-279058b94921" containerName="dnsmasq-dns" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.987282 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="73447648-89a4-40e2-8645-279058b94921" containerName="dnsmasq-dns" Sep 29 09:59:17 crc kubenswrapper[4991]: E0929 09:59:17.997641 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4880dbaa-8b65-4f99-929f-e9613339d1d9" containerName="keystone-bootstrap" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.997849 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="4880dbaa-8b65-4f99-929f-e9613339d1d9" containerName="keystone-bootstrap" Sep 29 09:59:17 crc kubenswrapper[4991]: E0929 09:59:17.997978 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73447648-89a4-40e2-8645-279058b94921" containerName="init" Sep 29 09:59:17 crc kubenswrapper[4991]: I0929 09:59:17.998050 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="73447648-89a4-40e2-8645-279058b94921" containerName="init" Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.032629 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="4880dbaa-8b65-4f99-929f-e9613339d1d9" containerName="keystone-bootstrap" Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.032701 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="73447648-89a4-40e2-8645-279058b94921" containerName="dnsmasq-dns" Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.033905 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6ddmh" event={"ID":"2ed2429f-e06f-4c9f-9c92-84203d8073c1","Type":"ContainerStarted","Data":"2be37942a036c86434a9a8a44b5e03bf789b3b99a06252d7d0bb3b6bfacf42f2"} Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.036114 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-74c8bd59d-2nxsz" Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.040264 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rf9rz" Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.040464 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.040268 4991 scope.go:117] "RemoveContainer" containerID="1fb17ca1e2480099597a9a8228cfe40424b6d0d09b1a85165c8c8338457ace69" Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.040764 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.040940 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.041174 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.041398 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.072981 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74c8bd59d-2nxsz"] Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.095187 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-9ndtq"] Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.107995 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-6ddmh" podStartSLOduration=5.935191091 podStartE2EDuration="45.107977823s" podCreationTimestamp="2025-09-29 09:58:33 +0000 UTC" firstStartedPulling="2025-09-29 09:58:37.386218652 +0000 UTC m=+1253.242146680" lastFinishedPulling="2025-09-29 09:59:16.559005384 +0000 UTC m=+1292.414933412" observedRunningTime="2025-09-29 09:59:18.067711083 +0000 UTC m=+1293.923639111" watchObservedRunningTime="2025-09-29 09:59:18.107977823 +0000 UTC m=+1293.963905851" Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.108400 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-9ndtq"] Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.142912 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff464af-ef5c-461b-aded-788ef2061aa6-combined-ca-bundle\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz" Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.142986 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2ff464af-ef5c-461b-aded-788ef2061aa6-fernet-keys\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz" Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.143037 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff464af-ef5c-461b-aded-788ef2061aa6-internal-tls-certs\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz" Sep 
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.143071 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff464af-ef5c-461b-aded-788ef2061aa6-public-tls-certs\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.143177 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ff464af-ef5c-461b-aded-788ef2061aa6-credential-keys\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.143382 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ff464af-ef5c-461b-aded-788ef2061aa6-scripts\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.143466 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtjf6\" (UniqueName: \"kubernetes.io/projected/2ff464af-ef5c-461b-aded-788ef2061aa6-kube-api-access-dtjf6\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.143491 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ff464af-ef5c-461b-aded-788ef2061aa6-config-data\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.246518 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtjf6\" (UniqueName: \"kubernetes.io/projected/2ff464af-ef5c-461b-aded-788ef2061aa6-kube-api-access-dtjf6\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.246575 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ff464af-ef5c-461b-aded-788ef2061aa6-config-data\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.246671 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff464af-ef5c-461b-aded-788ef2061aa6-combined-ca-bundle\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.246687 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2ff464af-ef5c-461b-aded-788ef2061aa6-fernet-keys\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.246735 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff464af-ef5c-461b-aded-788ef2061aa6-internal-tls-certs\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.246770 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff464af-ef5c-461b-aded-788ef2061aa6-public-tls-certs\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.246811 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ff464af-ef5c-461b-aded-788ef2061aa6-credential-keys\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.246851 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ff464af-ef5c-461b-aded-788ef2061aa6-scripts\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.255097 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2ff464af-ef5c-461b-aded-788ef2061aa6-fernet-keys\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.255739 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2ff464af-ef5c-461b-aded-788ef2061aa6-credential-keys\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.257365 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ff464af-ef5c-461b-aded-788ef2061aa6-config-data\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.257867 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ff464af-ef5c-461b-aded-788ef2061aa6-scripts\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.260133 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff464af-ef5c-461b-aded-788ef2061aa6-public-tls-certs\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.260387 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff464af-ef5c-461b-aded-788ef2061aa6-internal-tls-certs\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.260534 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff464af-ef5c-461b-aded-788ef2061aa6-combined-ca-bundle\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.267381 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtjf6\" (UniqueName: \"kubernetes.io/projected/2ff464af-ef5c-461b-aded-788ef2061aa6-kube-api-access-dtjf6\") pod \"keystone-74c8bd59d-2nxsz\" (UID: \"2ff464af-ef5c-461b-aded-788ef2061aa6\") " pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.538910 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:18 crc kubenswrapper[4991]: I0929 09:59:18.944335 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73447648-89a4-40e2-8645-279058b94921" path="/var/lib/kubelet/pods/73447648-89a4-40e2-8645-279058b94921/volumes"
Sep 29 09:59:19 crc kubenswrapper[4991]: I0929 09:59:19.035766 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf","Type":"ContainerStarted","Data":"de644f415be6e2fb056f4be7401f56c30acc7ca8a89500cac1ae3ce86a10d074"}
Sep 29 09:59:19 crc kubenswrapper[4991]: I0929 09:59:19.047803 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zhvvg" event={"ID":"429e29da-7361-452f-8a5e-f42633c6d4b9","Type":"ContainerStarted","Data":"92f96f3ca218a052b2c68f302299970a7f774ed2d13706e84964fd2a91cc508e"}
Sep 29 09:59:19 crc kubenswrapper[4991]: I0929 09:59:19.064620 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.06460096 podStartE2EDuration="7.06460096s" podCreationTimestamp="2025-09-29 09:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:59:19.063782768 +0000 UTC m=+1294.919710796" watchObservedRunningTime="2025-09-29 09:59:19.06460096 +0000 UTC m=+1294.920528988"
Sep 29 09:59:19 crc kubenswrapper[4991]: I0929 09:59:19.079483 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75e87bd6-4abf-47cb-81f1-8e691471aa97","Type":"ContainerStarted","Data":"66936dbde83cf16544935f7f9c802f5ebc85010da4e4c10a9b12986eaa3a427e"}
Sep 29 09:59:19 crc kubenswrapper[4991]: I0929 09:59:19.275103 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74c8bd59d-2nxsz"]
Sep 29 09:59:19 crc kubenswrapper[4991]: W0929 09:59:19.298145 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ff464af_ef5c_461b_aded_788ef2061aa6.slice/crio-e6f29ec1ff5f64e76cf7149fdcccd6daecb42cc1c695009d43271b265bdce16c WatchSource:0}: Error finding container e6f29ec1ff5f64e76cf7149fdcccd6daecb42cc1c695009d43271b265bdce16c: Status 404 returned error can't find the container with id e6f29ec1ff5f64e76cf7149fdcccd6daecb42cc1c695009d43271b265bdce16c
Sep 29 09:59:20 crc kubenswrapper[4991]: I0929 09:59:20.115872 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74c8bd59d-2nxsz" event={"ID":"2ff464af-ef5c-461b-aded-788ef2061aa6","Type":"ContainerStarted","Data":"578a70445a0fec31a49c167d7f2bf0b6bc63391f84cb0ec132e060b9842bed5a"}
Sep 29 09:59:20 crc kubenswrapper[4991]: I0929 09:59:20.115985 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74c8bd59d-2nxsz" event={"ID":"2ff464af-ef5c-461b-aded-788ef2061aa6","Type":"ContainerStarted","Data":"e6f29ec1ff5f64e76cf7149fdcccd6daecb42cc1c695009d43271b265bdce16c"}
Sep 29 09:59:20 crc kubenswrapper[4991]: I0929 09:59:20.116025 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-74c8bd59d-2nxsz"
Sep 29 09:59:20 crc kubenswrapper[4991]: I0929 09:59:20.120135 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75e87bd6-4abf-47cb-81f1-8e691471aa97","Type":"ContainerStarted","Data":"4c45554525b032e47ddecb76aa78409794fe183d471cbe5b75911417750365f3"}
Sep 29 09:59:20 crc kubenswrapper[4991]: I0929 09:59:20.153491 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-74c8bd59d-2nxsz" podStartSLOduration=3.153467899 podStartE2EDuration="3.153467899s" podCreationTimestamp="2025-09-29 09:59:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:59:20.131449669 +0000 UTC m=+1295.987377697" watchObservedRunningTime="2025-09-29 09:59:20.153467899 +0000 UTC m=+1296.009395927"
Sep 29 09:59:20 crc kubenswrapper[4991]: I0929 09:59:20.156219 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-zhvvg" podStartSLOduration=3.390221157 podStartE2EDuration="47.156209511s" podCreationTimestamp="2025-09-29 09:58:33 +0000 UTC" firstStartedPulling="2025-09-29 09:58:34.689528046 +0000 UTC m=+1250.545456074" lastFinishedPulling="2025-09-29 09:59:18.4555164 +0000 UTC m=+1294.311444428" observedRunningTime="2025-09-29 09:59:20.150700216 +0000 UTC m=+1296.006628254" watchObservedRunningTime="2025-09-29 09:59:20.156209511 +0000 UTC m=+1296.012137539"
Sep 29 09:59:20 crc kubenswrapper[4991]: I0929 09:59:20.179470 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.179448363 podStartE2EDuration="8.179448363s" podCreationTimestamp="2025-09-29 09:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:59:20.171490353 +0000 UTC m=+1296.027418391" watchObservedRunningTime="2025-09-29 09:59:20.179448363 +0000 UTC m=+1296.035376391"
Sep 29 09:59:22 crc kubenswrapper[4991]: I0929 09:59:22.146470 4991 generic.go:334] "Generic (PLEG): container finished" podID="2ed2429f-e06f-4c9f-9c92-84203d8073c1" containerID="2be37942a036c86434a9a8a44b5e03bf789b3b99a06252d7d0bb3b6bfacf42f2" exitCode=0
Sep 29 09:59:22 crc kubenswrapper[4991]: I0929 09:59:22.146668 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6ddmh" event={"ID":"2ed2429f-e06f-4c9f-9c92-84203d8073c1","Type":"ContainerDied","Data":"2be37942a036c86434a9a8a44b5e03bf789b3b99a06252d7d0bb3b6bfacf42f2"}
Sep 29 09:59:22 crc kubenswrapper[4991]: I0929 09:59:22.476716 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Sep 29 09:59:22 crc kubenswrapper[4991]: I0929 09:59:22.476755 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Sep 29 09:59:22 crc kubenswrapper[4991]: I0929 09:59:22.621707 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Sep 29 09:59:22 crc kubenswrapper[4991]: I0929 09:59:22.622227 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Sep 29 09:59:23 crc kubenswrapper[4991]: I0929 09:59:23.157248 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Sep 29 09:59:23 crc kubenswrapper[4991]: I0929 09:59:23.157906 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Sep 29 09:59:23 crc kubenswrapper[4991]: I0929 09:59:23.202147 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Sep 29 09:59:23 crc kubenswrapper[4991]: I0929 09:59:23.202213 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Sep 29 09:59:23 crc kubenswrapper[4991]: I0929 09:59:23.268397 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Sep 29 09:59:23 crc kubenswrapper[4991]: I0929 09:59:23.269772 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Sep 29 09:59:24 crc kubenswrapper[4991]: I0929 09:59:24.189698 4991 generic.go:334] "Generic (PLEG): container finished" podID="429e29da-7361-452f-8a5e-f42633c6d4b9" containerID="92f96f3ca218a052b2c68f302299970a7f774ed2d13706e84964fd2a91cc508e" exitCode=0
Sep 29 09:59:24 crc kubenswrapper[4991]: I0929 09:59:24.189789 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zhvvg" event={"ID":"429e29da-7361-452f-8a5e-f42633c6d4b9","Type":"ContainerDied","Data":"92f96f3ca218a052b2c68f302299970a7f774ed2d13706e84964fd2a91cc508e"}
Sep 29 09:59:24 crc kubenswrapper[4991]: I0929 09:59:24.191583 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Sep 29 09:59:24 crc kubenswrapper[4991]: I0929 09:59:24.191655 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Sep 29 09:59:24 crc kubenswrapper[4991]: I0929 09:59:24.764071 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6ddmh"
Sep 29 09:59:24 crc kubenswrapper[4991]: I0929 09:59:24.820575 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed2429f-e06f-4c9f-9c92-84203d8073c1-combined-ca-bundle\") pod \"2ed2429f-e06f-4c9f-9c92-84203d8073c1\" (UID: \"2ed2429f-e06f-4c9f-9c92-84203d8073c1\") "
Sep 29 09:59:24 crc kubenswrapper[4991]: I0929 09:59:24.821456 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldrl5\" (UniqueName: \"kubernetes.io/projected/2ed2429f-e06f-4c9f-9c92-84203d8073c1-kube-api-access-ldrl5\") pod \"2ed2429f-e06f-4c9f-9c92-84203d8073c1\" (UID: \"2ed2429f-e06f-4c9f-9c92-84203d8073c1\") "
Sep 29 09:59:24 crc kubenswrapper[4991]: I0929 09:59:24.821912 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ed2429f-e06f-4c9f-9c92-84203d8073c1-db-sync-config-data\") pod \"2ed2429f-e06f-4c9f-9c92-84203d8073c1\" (UID: \"2ed2429f-e06f-4c9f-9c92-84203d8073c1\") "
Sep 29 09:59:24 crc kubenswrapper[4991]: I0929 09:59:24.826922 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed2429f-e06f-4c9f-9c92-84203d8073c1-kube-api-access-ldrl5" (OuterVolumeSpecName: "kube-api-access-ldrl5") pod "2ed2429f-e06f-4c9f-9c92-84203d8073c1" (UID: "2ed2429f-e06f-4c9f-9c92-84203d8073c1"). InnerVolumeSpecName "kube-api-access-ldrl5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 09:59:24 crc kubenswrapper[4991]: I0929 09:59:24.829814 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed2429f-e06f-4c9f-9c92-84203d8073c1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2ed2429f-e06f-4c9f-9c92-84203d8073c1" (UID: "2ed2429f-e06f-4c9f-9c92-84203d8073c1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:59:24 crc kubenswrapper[4991]: I0929 09:59:24.865802 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed2429f-e06f-4c9f-9c92-84203d8073c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ed2429f-e06f-4c9f-9c92-84203d8073c1" (UID: "2ed2429f-e06f-4c9f-9c92-84203d8073c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 09:59:24 crc kubenswrapper[4991]: I0929 09:59:24.925275 4991 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ed2429f-e06f-4c9f-9c92-84203d8073c1-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Sep 29 09:59:24 crc kubenswrapper[4991]: I0929 09:59:24.925336 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed2429f-e06f-4c9f-9c92-84203d8073c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 29 09:59:24 crc kubenswrapper[4991]: I0929 09:59:24.925352 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldrl5\" (UniqueName: \"kubernetes.io/projected/2ed2429f-e06f-4c9f-9c92-84203d8073c1-kube-api-access-ldrl5\") on node \"crc\" DevicePath \"\""
Sep 29 09:59:25 crc kubenswrapper[4991]: I0929 09:59:25.203829 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6ddmh" event={"ID":"2ed2429f-e06f-4c9f-9c92-84203d8073c1","Type":"ContainerDied","Data":"391b6d6d3d5c0ff2bbcb64b7eb23e72acd134638a620ce76ff92ee27cbf3f8b7"}
Sep 29 09:59:25 crc kubenswrapper[4991]: I0929 09:59:25.203883 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="391b6d6d3d5c0ff2bbcb64b7eb23e72acd134638a620ce76ff92ee27cbf3f8b7"
Sep 29 09:59:25 crc kubenswrapper[4991]: I0929 09:59:25.204193 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6ddmh"
Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.032743 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6c569f5685-dqm5h"]
Sep 29 09:59:26 crc kubenswrapper[4991]: E0929 09:59:26.033374 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed2429f-e06f-4c9f-9c92-84203d8073c1" containerName="barbican-db-sync"
Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.033395 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed2429f-e06f-4c9f-9c92-84203d8073c1" containerName="barbican-db-sync"
Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.033681 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ed2429f-e06f-4c9f-9c92-84203d8073c1" containerName="barbican-db-sync"
Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.036778 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6c569f5685-dqm5h"
Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.038382 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.039300 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.039467 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-ws9np"
Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.076027 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5f48988744-sgtlp"]
Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.078081 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f48988744-sgtlp"
Need to start a new one" pod="openstack/barbican-keystone-listener-5f48988744-sgtlp" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.095095 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6c569f5685-dqm5h"] Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.103788 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.119138 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f48988744-sgtlp"] Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.156839 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4924ede-bd54-46eb-ae9f-90f018709c3c-config-data-custom\") pod \"barbican-worker-6c569f5685-dqm5h\" (UID: \"c4924ede-bd54-46eb-ae9f-90f018709c3c\") " pod="openstack/barbican-worker-6c569f5685-dqm5h" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.156943 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/244c0192-f2fc-4987-a2f5-2070d703c35c-logs\") pod \"barbican-keystone-listener-5f48988744-sgtlp\" (UID: \"244c0192-f2fc-4987-a2f5-2070d703c35c\") " pod="openstack/barbican-keystone-listener-5f48988744-sgtlp" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.157093 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4924ede-bd54-46eb-ae9f-90f018709c3c-logs\") pod \"barbican-worker-6c569f5685-dqm5h\" (UID: \"c4924ede-bd54-46eb-ae9f-90f018709c3c\") " pod="openstack/barbican-worker-6c569f5685-dqm5h" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.157146 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4924ede-bd54-46eb-ae9f-90f018709c3c-combined-ca-bundle\") pod \"barbican-worker-6c569f5685-dqm5h\" (UID: \"c4924ede-bd54-46eb-ae9f-90f018709c3c\") " pod="openstack/barbican-worker-6c569f5685-dqm5h" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.157188 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/244c0192-f2fc-4987-a2f5-2070d703c35c-combined-ca-bundle\") pod \"barbican-keystone-listener-5f48988744-sgtlp\" (UID: \"244c0192-f2fc-4987-a2f5-2070d703c35c\") " pod="openstack/barbican-keystone-listener-5f48988744-sgtlp" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.157221 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grn45\" (UniqueName: \"kubernetes.io/projected/c4924ede-bd54-46eb-ae9f-90f018709c3c-kube-api-access-grn45\") pod \"barbican-worker-6c569f5685-dqm5h\" (UID: \"c4924ede-bd54-46eb-ae9f-90f018709c3c\") " pod="openstack/barbican-worker-6c569f5685-dqm5h" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.157290 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4924ede-bd54-46eb-ae9f-90f018709c3c-config-data\") pod \"barbican-worker-6c569f5685-dqm5h\" (UID: \"c4924ede-bd54-46eb-ae9f-90f018709c3c\") " pod="openstack/barbican-worker-6c569f5685-dqm5h" Sep 
29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.157337 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqdnq\" (UniqueName: \"kubernetes.io/projected/244c0192-f2fc-4987-a2f5-2070d703c35c-kube-api-access-wqdnq\") pod \"barbican-keystone-listener-5f48988744-sgtlp\" (UID: \"244c0192-f2fc-4987-a2f5-2070d703c35c\") " pod="openstack/barbican-keystone-listener-5f48988744-sgtlp" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.157416 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/244c0192-f2fc-4987-a2f5-2070d703c35c-config-data\") pod \"barbican-keystone-listener-5f48988744-sgtlp\" (UID: \"244c0192-f2fc-4987-a2f5-2070d703c35c\") " pod="openstack/barbican-keystone-listener-5f48988744-sgtlp" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.157498 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/244c0192-f2fc-4987-a2f5-2070d703c35c-config-data-custom\") pod \"barbican-keystone-listener-5f48988744-sgtlp\" (UID: \"244c0192-f2fc-4987-a2f5-2070d703c35c\") " pod="openstack/barbican-keystone-listener-5f48988744-sgtlp" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.192482 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-b78vv"] Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.195303 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.213330 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-b78vv"] Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.260393 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-config\") pod \"dnsmasq-dns-688c87cc99-b78vv\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.260480 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/244c0192-f2fc-4987-a2f5-2070d703c35c-config-data\") pod \"barbican-keystone-listener-5f48988744-sgtlp\" (UID: \"244c0192-f2fc-4987-a2f5-2070d703c35c\") " pod="openstack/barbican-keystone-listener-5f48988744-sgtlp" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.260511 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-b78vv\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.260546 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-dns-svc\") pod \"dnsmasq-dns-688c87cc99-b78vv\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.260593 4991 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-b78vv\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.260624 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/244c0192-f2fc-4987-a2f5-2070d703c35c-config-data-custom\") pod \"barbican-keystone-listener-5f48988744-sgtlp\" (UID: \"244c0192-f2fc-4987-a2f5-2070d703c35c\") " pod="openstack/barbican-keystone-listener-5f48988744-sgtlp" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.260676 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4924ede-bd54-46eb-ae9f-90f018709c3c-config-data-custom\") pod \"barbican-worker-6c569f5685-dqm5h\" (UID: \"c4924ede-bd54-46eb-ae9f-90f018709c3c\") " pod="openstack/barbican-worker-6c569f5685-dqm5h" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.260733 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/244c0192-f2fc-4987-a2f5-2070d703c35c-logs\") pod \"barbican-keystone-listener-5f48988744-sgtlp\" (UID: \"244c0192-f2fc-4987-a2f5-2070d703c35c\") " pod="openstack/barbican-keystone-listener-5f48988744-sgtlp" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.260775 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24fv7\" (UniqueName: \"kubernetes.io/projected/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-kube-api-access-24fv7\") pod \"dnsmasq-dns-688c87cc99-b78vv\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.260806 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-b78vv\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.260845 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4924ede-bd54-46eb-ae9f-90f018709c3c-logs\") pod \"barbican-worker-6c569f5685-dqm5h\" (UID: \"c4924ede-bd54-46eb-ae9f-90f018709c3c\") " pod="openstack/barbican-worker-6c569f5685-dqm5h" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.260892 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4924ede-bd54-46eb-ae9f-90f018709c3c-combined-ca-bundle\") pod \"barbican-worker-6c569f5685-dqm5h\" (UID: \"c4924ede-bd54-46eb-ae9f-90f018709c3c\") " pod="openstack/barbican-worker-6c569f5685-dqm5h" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.260932 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/244c0192-f2fc-4987-a2f5-2070d703c35c-combined-ca-bundle\") pod \"barbican-keystone-listener-5f48988744-sgtlp\" (UID: \"244c0192-f2fc-4987-a2f5-2070d703c35c\") " 
pod="openstack/barbican-keystone-listener-5f48988744-sgtlp" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.260980 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grn45\" (UniqueName: \"kubernetes.io/projected/c4924ede-bd54-46eb-ae9f-90f018709c3c-kube-api-access-grn45\") pod \"barbican-worker-6c569f5685-dqm5h\" (UID: \"c4924ede-bd54-46eb-ae9f-90f018709c3c\") " pod="openstack/barbican-worker-6c569f5685-dqm5h" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.261032 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4924ede-bd54-46eb-ae9f-90f018709c3c-config-data\") pod \"barbican-worker-6c569f5685-dqm5h\" (UID: \"c4924ede-bd54-46eb-ae9f-90f018709c3c\") " pod="openstack/barbican-worker-6c569f5685-dqm5h" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.261076 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqdnq\" (UniqueName: \"kubernetes.io/projected/244c0192-f2fc-4987-a2f5-2070d703c35c-kube-api-access-wqdnq\") pod \"barbican-keystone-listener-5f48988744-sgtlp\" (UID: \"244c0192-f2fc-4987-a2f5-2070d703c35c\") " pod="openstack/barbican-keystone-listener-5f48988744-sgtlp" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.269842 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4924ede-bd54-46eb-ae9f-90f018709c3c-combined-ca-bundle\") pod \"barbican-worker-6c569f5685-dqm5h\" (UID: \"c4924ede-bd54-46eb-ae9f-90f018709c3c\") " pod="openstack/barbican-worker-6c569f5685-dqm5h" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.270126 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4924ede-bd54-46eb-ae9f-90f018709c3c-logs\") pod \"barbican-worker-6c569f5685-dqm5h\" (UID: \"c4924ede-bd54-46eb-ae9f-90f018709c3c\") " pod="openstack/barbican-worker-6c569f5685-dqm5h" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.270601 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/244c0192-f2fc-4987-a2f5-2070d703c35c-logs\") pod \"barbican-keystone-listener-5f48988744-sgtlp\" (UID: \"244c0192-f2fc-4987-a2f5-2070d703c35c\") " pod="openstack/barbican-keystone-listener-5f48988744-sgtlp" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.274267 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/244c0192-f2fc-4987-a2f5-2070d703c35c-config-data-custom\") pod \"barbican-keystone-listener-5f48988744-sgtlp\" (UID: \"244c0192-f2fc-4987-a2f5-2070d703c35c\") " pod="openstack/barbican-keystone-listener-5f48988744-sgtlp" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.275837 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/244c0192-f2fc-4987-a2f5-2070d703c35c-combined-ca-bundle\") pod \"barbican-keystone-listener-5f48988744-sgtlp\" (UID: \"244c0192-f2fc-4987-a2f5-2070d703c35c\") " pod="openstack/barbican-keystone-listener-5f48988744-sgtlp" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.276098 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4924ede-bd54-46eb-ae9f-90f018709c3c-config-data-custom\") pod 
\"barbican-worker-6c569f5685-dqm5h\" (UID: \"c4924ede-bd54-46eb-ae9f-90f018709c3c\") " pod="openstack/barbican-worker-6c569f5685-dqm5h" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.296276 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/244c0192-f2fc-4987-a2f5-2070d703c35c-config-data\") pod \"barbican-keystone-listener-5f48988744-sgtlp\" (UID: \"244c0192-f2fc-4987-a2f5-2070d703c35c\") " pod="openstack/barbican-keystone-listener-5f48988744-sgtlp" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.308142 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-66596f498-zwm7z"] Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.330322 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4924ede-bd54-46eb-ae9f-90f018709c3c-config-data\") pod \"barbican-worker-6c569f5685-dqm5h\" (UID: \"c4924ede-bd54-46eb-ae9f-90f018709c3c\") " pod="openstack/barbican-worker-6c569f5685-dqm5h" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.330798 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.330906 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqdnq\" (UniqueName: \"kubernetes.io/projected/244c0192-f2fc-4987-a2f5-2070d703c35c-kube-api-access-wqdnq\") pod \"barbican-keystone-listener-5f48988744-sgtlp\" (UID: \"244c0192-f2fc-4987-a2f5-2070d703c35c\") " pod="openstack/barbican-keystone-listener-5f48988744-sgtlp" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.332166 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grn45\" (UniqueName: \"kubernetes.io/projected/c4924ede-bd54-46eb-ae9f-90f018709c3c-kube-api-access-grn45\") pod \"barbican-worker-6c569f5685-dqm5h\" (UID: \"c4924ede-bd54-46eb-ae9f-90f018709c3c\") " pod="openstack/barbican-worker-6c569f5685-dqm5h" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.358103 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6c569f5685-dqm5h" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.358741 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.375252 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-config\") pod \"dnsmasq-dns-688c87cc99-b78vv\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.375318 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a13c3d37-d0d3-4465-bd18-432600952dde-logs\") pod \"barbican-api-66596f498-zwm7z\" (UID: \"a13c3d37-d0d3-4465-bd18-432600952dde\") " pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.375361 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-b78vv\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.375400 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-dns-svc\") pod \"dnsmasq-dns-688c87cc99-b78vv\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.375454 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-b78vv\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.375510 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xnhd\" (UniqueName: \"kubernetes.io/projected/a13c3d37-d0d3-4465-bd18-432600952dde-kube-api-access-5xnhd\") pod \"barbican-api-66596f498-zwm7z\" (UID: \"a13c3d37-d0d3-4465-bd18-432600952dde\") " pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.376701 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-b78vv\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.379162 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-config\") pod \"dnsmasq-dns-688c87cc99-b78vv\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.381065 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-b78vv\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.381128 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-66596f498-zwm7z"] Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.381365 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24fv7\" (UniqueName: \"kubernetes.io/projected/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-kube-api-access-24fv7\") pod \"dnsmasq-dns-688c87cc99-b78vv\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.381405 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-b78vv\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.381443 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a13c3d37-d0d3-4465-bd18-432600952dde-config-data-custom\") pod \"barbican-api-66596f498-zwm7z\" (UID: \"a13c3d37-d0d3-4465-bd18-432600952dde\") " pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.381589 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-dns-svc\") pod \"dnsmasq-dns-688c87cc99-b78vv\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.381657 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a13c3d37-d0d3-4465-bd18-432600952dde-combined-ca-bundle\") pod \"barbican-api-66596f498-zwm7z\" (UID: \"a13c3d37-d0d3-4465-bd18-432600952dde\") " pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.381793 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a13c3d37-d0d3-4465-bd18-432600952dde-config-data\") pod \"barbican-api-66596f498-zwm7z\" (UID: \"a13c3d37-d0d3-4465-bd18-432600952dde\") " pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.383207 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-b78vv\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.408957 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5f48988744-sgtlp" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.414334 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24fv7\" (UniqueName: \"kubernetes.io/projected/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-kube-api-access-24fv7\") pod \"dnsmasq-dns-688c87cc99-b78vv\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.484249 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a13c3d37-d0d3-4465-bd18-432600952dde-combined-ca-bundle\") pod \"barbican-api-66596f498-zwm7z\" (UID: \"a13c3d37-d0d3-4465-bd18-432600952dde\") " pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.484310 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a13c3d37-d0d3-4465-bd18-432600952dde-config-data\") pod \"barbican-api-66596f498-zwm7z\" (UID: \"a13c3d37-d0d3-4465-bd18-432600952dde\") " pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.484362 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a13c3d37-d0d3-4465-bd18-432600952dde-logs\") pod \"barbican-api-66596f498-zwm7z\" (UID: \"a13c3d37-d0d3-4465-bd18-432600952dde\") " pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.484443 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xnhd\" (UniqueName: \"kubernetes.io/projected/a13c3d37-d0d3-4465-bd18-432600952dde-kube-api-access-5xnhd\") pod \"barbican-api-66596f498-zwm7z\" (UID: \"a13c3d37-d0d3-4465-bd18-432600952dde\") " pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.484510 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a13c3d37-d0d3-4465-bd18-432600952dde-config-data-custom\") pod \"barbican-api-66596f498-zwm7z\" (UID: \"a13c3d37-d0d3-4465-bd18-432600952dde\") " pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.485400 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a13c3d37-d0d3-4465-bd18-432600952dde-logs\") pod \"barbican-api-66596f498-zwm7z\" (UID: \"a13c3d37-d0d3-4465-bd18-432600952dde\") " pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.499515 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a13c3d37-d0d3-4465-bd18-432600952dde-config-data-custom\") pod \"barbican-api-66596f498-zwm7z\" (UID: \"a13c3d37-d0d3-4465-bd18-432600952dde\") " pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.502740 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xnhd\" (UniqueName: \"kubernetes.io/projected/a13c3d37-d0d3-4465-bd18-432600952dde-kube-api-access-5xnhd\") pod \"barbican-api-66596f498-zwm7z\" (UID: \"a13c3d37-d0d3-4465-bd18-432600952dde\") " 
pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.509048 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a13c3d37-d0d3-4465-bd18-432600952dde-combined-ca-bundle\") pod \"barbican-api-66596f498-zwm7z\" (UID: \"a13c3d37-d0d3-4465-bd18-432600952dde\") " pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.514742 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a13c3d37-d0d3-4465-bd18-432600952dde-config-data\") pod \"barbican-api-66596f498-zwm7z\" (UID: \"a13c3d37-d0d3-4465-bd18-432600952dde\") " pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.546694 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:26 crc kubenswrapper[4991]: I0929 09:59:26.566436 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:27 crc kubenswrapper[4991]: I0929 09:59:27.561487 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-zhvvg" Sep 29 09:59:27 crc kubenswrapper[4991]: I0929 09:59:27.725171 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429e29da-7361-452f-8a5e-f42633c6d4b9-combined-ca-bundle\") pod \"429e29da-7361-452f-8a5e-f42633c6d4b9\" (UID: \"429e29da-7361-452f-8a5e-f42633c6d4b9\") " Sep 29 09:59:27 crc kubenswrapper[4991]: I0929 09:59:27.726869 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/429e29da-7361-452f-8a5e-f42633c6d4b9-config-data\") pod \"429e29da-7361-452f-8a5e-f42633c6d4b9\" (UID: \"429e29da-7361-452f-8a5e-f42633c6d4b9\") " Sep 29 09:59:27 crc kubenswrapper[4991]: I0929 09:59:27.727667 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dxdr\" (UniqueName: \"kubernetes.io/projected/429e29da-7361-452f-8a5e-f42633c6d4b9-kube-api-access-8dxdr\") pod \"429e29da-7361-452f-8a5e-f42633c6d4b9\" (UID: \"429e29da-7361-452f-8a5e-f42633c6d4b9\") " Sep 29 09:59:27 crc kubenswrapper[4991]: I0929 09:59:27.746448 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/429e29da-7361-452f-8a5e-f42633c6d4b9-kube-api-access-8dxdr" (OuterVolumeSpecName: "kube-api-access-8dxdr") pod "429e29da-7361-452f-8a5e-f42633c6d4b9" (UID: "429e29da-7361-452f-8a5e-f42633c6d4b9"). InnerVolumeSpecName "kube-api-access-8dxdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:59:27 crc kubenswrapper[4991]: I0929 09:59:27.828335 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/429e29da-7361-452f-8a5e-f42633c6d4b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "429e29da-7361-452f-8a5e-f42633c6d4b9" (UID: "429e29da-7361-452f-8a5e-f42633c6d4b9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:27 crc kubenswrapper[4991]: I0929 09:59:27.863084 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429e29da-7361-452f-8a5e-f42633c6d4b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:27 crc kubenswrapper[4991]: I0929 09:59:27.863118 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dxdr\" (UniqueName: \"kubernetes.io/projected/429e29da-7361-452f-8a5e-f42633c6d4b9-kube-api-access-8dxdr\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:27 crc kubenswrapper[4991]: I0929 09:59:27.928289 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/429e29da-7361-452f-8a5e-f42633c6d4b9-config-data" (OuterVolumeSpecName: "config-data") pod "429e29da-7361-452f-8a5e-f42633c6d4b9" (UID: "429e29da-7361-452f-8a5e-f42633c6d4b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:27 crc kubenswrapper[4991]: I0929 09:59:27.966183 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/429e29da-7361-452f-8a5e-f42633c6d4b9-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:28 crc kubenswrapper[4991]: I0929 09:59:28.280923 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zhvvg" event={"ID":"429e29da-7361-452f-8a5e-f42633c6d4b9","Type":"ContainerDied","Data":"8ffc85d36ff3893b4521efce289317c7a15613e6cc83a2a4443025b8cdcb63f0"} Sep 29 09:59:28 crc kubenswrapper[4991]: I0929 09:59:28.281257 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ffc85d36ff3893b4521efce289317c7a15613e6cc83a2a4443025b8cdcb63f0" Sep 29 09:59:28 crc kubenswrapper[4991]: I0929 09:59:28.281035 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-zhvvg" Sep 29 09:59:28 crc kubenswrapper[4991]: I0929 09:59:28.329845 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:59:28 crc kubenswrapper[4991]: I0929 09:59:28.350235 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-66596f498-zwm7z"] Sep 29 09:59:28 crc kubenswrapper[4991]: I0929 09:59:28.372862 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-78c57c9496-sh67d" Sep 29 09:59:28 crc kubenswrapper[4991]: E0929 09:59:28.385334 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="386f2f3b-cfa3-4376-a958-8905da139c79" Sep 29 09:59:28 crc kubenswrapper[4991]: I0929 09:59:28.832872 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6c569f5685-dqm5h"] Sep 29 09:59:28 crc kubenswrapper[4991]: I0929 09:59:28.849995 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f48988744-sgtlp"] Sep 29 09:59:28 crc kubenswrapper[4991]: I0929 09:59:28.979210 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-b78vv"] Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.049117 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.049486 4991 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.118712 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.118806 4991 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.128461 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.316500 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.376028 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7695fcf79b-8n9gp"] Sep 29 09:59:29 crc kubenswrapper[4991]: E0929 09:59:29.376528 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="429e29da-7361-452f-8a5e-f42633c6d4b9" containerName="heat-db-sync" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.376540 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="429e29da-7361-452f-8a5e-f42633c6d4b9" containerName="heat-db-sync" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.376754 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="429e29da-7361-452f-8a5e-f42633c6d4b9" containerName="heat-db-sync" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.377904 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.387498 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.387701 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.392978 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7695fcf79b-8n9gp"] Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.469617 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-b78vv" event={"ID":"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929","Type":"ContainerStarted","Data":"82e3725d7215bd3de7a2216ff0a71cf442eeb1bf76c6a235dc259b899547516e"} Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.532305 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f631ddcf-022d-461f-8f4b-421ffb478b6f-internal-tls-certs\") pod \"barbican-api-7695fcf79b-8n9gp\" (UID: \"f631ddcf-022d-461f-8f4b-421ffb478b6f\") " pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.532907 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f631ddcf-022d-461f-8f4b-421ffb478b6f-public-tls-certs\") pod \"barbican-api-7695fcf79b-8n9gp\" (UID: \"f631ddcf-022d-461f-8f4b-421ffb478b6f\") " pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.532972 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b25n8\" (UniqueName: \"kubernetes.io/projected/f631ddcf-022d-461f-8f4b-421ffb478b6f-kube-api-access-b25n8\") pod \"barbican-api-7695fcf79b-8n9gp\" (UID: \"f631ddcf-022d-461f-8f4b-421ffb478b6f\") " pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.533073 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f631ddcf-022d-461f-8f4b-421ffb478b6f-combined-ca-bundle\") pod \"barbican-api-7695fcf79b-8n9gp\" (UID: \"f631ddcf-022d-461f-8f4b-421ffb478b6f\") " pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.533135 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f631ddcf-022d-461f-8f4b-421ffb478b6f-logs\") pod \"barbican-api-7695fcf79b-8n9gp\" (UID: \"f631ddcf-022d-461f-8f4b-421ffb478b6f\") " pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.533240 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f631ddcf-022d-461f-8f4b-421ffb478b6f-config-data-custom\") pod \"barbican-api-7695fcf79b-8n9gp\" (UID: \"f631ddcf-022d-461f-8f4b-421ffb478b6f\") " pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.533259 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f631ddcf-022d-461f-8f4b-421ffb478b6f-config-data\") pod \"barbican-api-7695fcf79b-8n9gp\" (UID: \"f631ddcf-022d-461f-8f4b-421ffb478b6f\") " pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.554150 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"386f2f3b-cfa3-4376-a958-8905da139c79","Type":"ContainerStarted","Data":"ea11fbcca21e61a86b77427fe0d1802d2448bd7b92114facc6e15537c9b23c2b"} Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.555462 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.598988 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f48988744-sgtlp" event={"ID":"244c0192-f2fc-4987-a2f5-2070d703c35c","Type":"ContainerStarted","Data":"344c8835feb7cf6173c4cec65356caa3d64e92c7cde126e7b16e58b326c37a4f"} Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.621148 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d5lhk" event={"ID":"5146a80a-6cba-46a8-85b8-1fb0dd8304cd","Type":"ContainerStarted","Data":"d07ae8509a425d6dce6b0fc826409f04218e45185fa879870b56ea152e6e382d"} Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.636137 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f631ddcf-022d-461f-8f4b-421ffb478b6f-internal-tls-certs\") pod \"barbican-api-7695fcf79b-8n9gp\" (UID: \"f631ddcf-022d-461f-8f4b-421ffb478b6f\") " pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.636193 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f631ddcf-022d-461f-8f4b-421ffb478b6f-public-tls-certs\") pod \"barbican-api-7695fcf79b-8n9gp\" (UID: \"f631ddcf-022d-461f-8f4b-421ffb478b6f\") " pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.636219 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b25n8\" (UniqueName: \"kubernetes.io/projected/f631ddcf-022d-461f-8f4b-421ffb478b6f-kube-api-access-b25n8\") pod \"barbican-api-7695fcf79b-8n9gp\" (UID: \"f631ddcf-022d-461f-8f4b-421ffb478b6f\") " pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.636290 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f631ddcf-022d-461f-8f4b-421ffb478b6f-combined-ca-bundle\") pod \"barbican-api-7695fcf79b-8n9gp\" (UID: \"f631ddcf-022d-461f-8f4b-421ffb478b6f\") " pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.636338 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f631ddcf-022d-461f-8f4b-421ffb478b6f-logs\") pod \"barbican-api-7695fcf79b-8n9gp\" (UID: \"f631ddcf-022d-461f-8f4b-421ffb478b6f\") " pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.636416 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f631ddcf-022d-461f-8f4b-421ffb478b6f-config-data-custom\") pod 
\"barbican-api-7695fcf79b-8n9gp\" (UID: \"f631ddcf-022d-461f-8f4b-421ffb478b6f\") " pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.636444 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f631ddcf-022d-461f-8f4b-421ffb478b6f-config-data\") pod \"barbican-api-7695fcf79b-8n9gp\" (UID: \"f631ddcf-022d-461f-8f4b-421ffb478b6f\") " pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.639734 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f631ddcf-022d-461f-8f4b-421ffb478b6f-logs\") pod \"barbican-api-7695fcf79b-8n9gp\" (UID: \"f631ddcf-022d-461f-8f4b-421ffb478b6f\") " pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.642334 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c569f5685-dqm5h" event={"ID":"c4924ede-bd54-46eb-ae9f-90f018709c3c","Type":"ContainerStarted","Data":"085ff46d545c2fece1b35006d41cdb8950d71fff2bc59f89248aaed38f1b80a8"} Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.649834 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f631ddcf-022d-461f-8f4b-421ffb478b6f-combined-ca-bundle\") pod \"barbican-api-7695fcf79b-8n9gp\" (UID: \"f631ddcf-022d-461f-8f4b-421ffb478b6f\") " pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.682738 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f631ddcf-022d-461f-8f4b-421ffb478b6f-config-data-custom\") pod \"barbican-api-7695fcf79b-8n9gp\" (UID: \"f631ddcf-022d-461f-8f4b-421ffb478b6f\") " pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.696516 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f631ddcf-022d-461f-8f4b-421ffb478b6f-public-tls-certs\") pod \"barbican-api-7695fcf79b-8n9gp\" (UID: \"f631ddcf-022d-461f-8f4b-421ffb478b6f\") " pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.698113 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-d5lhk" podStartSLOduration=3.398591679 podStartE2EDuration="56.698085849s" podCreationTimestamp="2025-09-29 09:58:33 +0000 UTC" firstStartedPulling="2025-09-29 09:58:34.447960318 +0000 UTC m=+1250.303888346" lastFinishedPulling="2025-09-29 09:59:27.747454498 +0000 UTC m=+1303.603382516" observedRunningTime="2025-09-29 09:59:29.653178857 +0000 UTC m=+1305.509106875" watchObservedRunningTime="2025-09-29 09:59:29.698085849 +0000 UTC m=+1305.554013877" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.707192 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f631ddcf-022d-461f-8f4b-421ffb478b6f-internal-tls-certs\") pod \"barbican-api-7695fcf79b-8n9gp\" (UID: \"f631ddcf-022d-461f-8f4b-421ffb478b6f\") " pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.707246 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f631ddcf-022d-461f-8f4b-421ffb478b6f-config-data\") pod \"barbican-api-7695fcf79b-8n9gp\" (UID: \"f631ddcf-022d-461f-8f4b-421ffb478b6f\") " pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.708094 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66596f498-zwm7z" event={"ID":"a13c3d37-d0d3-4465-bd18-432600952dde","Type":"ContainerStarted","Data":"f9405b39bee07a2d6c6e89f9ddb1adcaba1728bcf10e7120e63262d5b0df5e44"} Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.708165 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66596f498-zwm7z" event={"ID":"a13c3d37-d0d3-4465-bd18-432600952dde","Type":"ContainerStarted","Data":"b07c212e267167a7a642d9f370dc7fc494a75e86f742de5fa805caa6d7977f4a"} Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.708181 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66596f498-zwm7z" event={"ID":"a13c3d37-d0d3-4465-bd18-432600952dde","Type":"ContainerStarted","Data":"c1dc2c86b2197a5da50f86d971be10c8b8786842d403aabb7c1d0e696d0c1286"} Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.709580 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.709621 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.725600 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b25n8\" (UniqueName: \"kubernetes.io/projected/f631ddcf-022d-461f-8f4b-421ffb478b6f-kube-api-access-b25n8\") pod \"barbican-api-7695fcf79b-8n9gp\" (UID: \"f631ddcf-022d-461f-8f4b-421ffb478b6f\") " pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.753512 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:29 crc kubenswrapper[4991]: I0929 09:59:29.759737 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-66596f498-zwm7z" podStartSLOduration=3.759710429 podStartE2EDuration="3.759710429s" podCreationTimestamp="2025-09-29 09:59:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:59:29.739914469 +0000 UTC m=+1305.595842507" watchObservedRunningTime="2025-09-29 09:59:29.759710429 +0000 UTC m=+1305.615638467" Sep 29 09:59:30 crc kubenswrapper[4991]: I0929 09:59:30.399131 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7695fcf79b-8n9gp"] Sep 29 09:59:30 crc kubenswrapper[4991]: I0929 09:59:30.754217 4991 generic.go:334] "Generic (PLEG): container finished" podID="30b6f747-71b5-46f4-a4d0-c4e4fd6ff929" containerID="0487f234d318c995223a0cda835dacfedee78964290a7ef0411746a10c6880db" exitCode=0 Sep 29 09:59:30 crc kubenswrapper[4991]: I0929 09:59:30.754932 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-b78vv" event={"ID":"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929","Type":"ContainerDied","Data":"0487f234d318c995223a0cda835dacfedee78964290a7ef0411746a10c6880db"} Sep 29 09:59:30 crc kubenswrapper[4991]: I0929 09:59:30.760666 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7695fcf79b-8n9gp" event={"ID":"f631ddcf-022d-461f-8f4b-421ffb478b6f","Type":"ContainerStarted","Data":"58ed0f1cf2e7110ae022602739c7ff0cd64afaa9fe61eef8f8af8dbc36c6aa44"} Sep 29 09:59:30 crc kubenswrapper[4991]: I0929 09:59:30.760712 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7695fcf79b-8n9gp" event={"ID":"f631ddcf-022d-461f-8f4b-421ffb478b6f","Type":"ContainerStarted","Data":"809b8327d421fb1a4548ec2f93b0b3f2edc94e3bc9a273b25146dc346bdefdb3"} Sep 29 09:59:32 crc kubenswrapper[4991]: I0929 09:59:32.800482 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-b78vv" event={"ID":"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929","Type":"ContainerStarted","Data":"753c35ff32110dc9fa0d06f41fb4ea63c15283cc2cf337a6ab30691076854718"} Sep 29 09:59:32 crc kubenswrapper[4991]: I0929 09:59:32.801860 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:32 crc kubenswrapper[4991]: I0929 09:59:32.803309 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"386f2f3b-cfa3-4376-a958-8905da139c79","Type":"ContainerStarted","Data":"203ef20dc8abb9caaf2cd44cc5f1a5b8e7eeb8f0204d95e557656eab46fc0d17"} Sep 29 09:59:32 crc kubenswrapper[4991]: I0929 09:59:32.806760 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f48988744-sgtlp" event={"ID":"244c0192-f2fc-4987-a2f5-2070d703c35c","Type":"ContainerStarted","Data":"079a90cf1db59e3d4a8336092d888aeacf66d0289d4c240777fbbb44fdfea32e"} Sep 29 09:59:32 crc kubenswrapper[4991]: I0929 09:59:32.806804 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f48988744-sgtlp" event={"ID":"244c0192-f2fc-4987-a2f5-2070d703c35c","Type":"ContainerStarted","Data":"e0fd7de221da7bfa3d868a0599428d87f99d8ca078c842ebef459a3dee3f63b8"} Sep 29 09:59:32 crc kubenswrapper[4991]: I0929 09:59:32.808432 4991 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/barbican-worker-6c569f5685-dqm5h" event={"ID":"c4924ede-bd54-46eb-ae9f-90f018709c3c","Type":"ContainerStarted","Data":"5704a44c9841a06f5315bb635c588890458e9538585d8de5185fa47f0fb912a0"} Sep 29 09:59:32 crc kubenswrapper[4991]: I0929 09:59:32.808478 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c569f5685-dqm5h" event={"ID":"c4924ede-bd54-46eb-ae9f-90f018709c3c","Type":"ContainerStarted","Data":"b0181e04e4466408250f9639fcf48e92a84825258b0b0da89454103df9584555"} Sep 29 09:59:32 crc kubenswrapper[4991]: I0929 09:59:32.811444 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7695fcf79b-8n9gp" event={"ID":"f631ddcf-022d-461f-8f4b-421ffb478b6f","Type":"ContainerStarted","Data":"0826dcf24a3c845273f4538cf213fb55c8a964118beac54c7119638a6a1e2414"} Sep 29 09:59:32 crc kubenswrapper[4991]: I0929 09:59:32.811681 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:32 crc kubenswrapper[4991]: I0929 09:59:32.826870 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688c87cc99-b78vv" podStartSLOduration=6.8268228749999995 podStartE2EDuration="6.826822875s" podCreationTimestamp="2025-09-29 09:59:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:59:32.823546249 +0000 UTC m=+1308.679474287" watchObservedRunningTime="2025-09-29 09:59:32.826822875 +0000 UTC m=+1308.682750943" Sep 29 09:59:32 crc kubenswrapper[4991]: I0929 09:59:32.856234 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7695fcf79b-8n9gp" podStartSLOduration=3.856198229 podStartE2EDuration="3.856198229s" podCreationTimestamp="2025-09-29 09:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:59:32.844437819 +0000 UTC m=+1308.700365847" watchObservedRunningTime="2025-09-29 09:59:32.856198229 +0000 UTC m=+1308.712126267" Sep 29 09:59:32 crc kubenswrapper[4991]: I0929 09:59:32.866378 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6c569f5685-dqm5h" podStartSLOduration=4.998192887 podStartE2EDuration="7.866359766s" podCreationTimestamp="2025-09-29 09:59:25 +0000 UTC" firstStartedPulling="2025-09-29 09:59:28.809759908 +0000 UTC m=+1304.665687926" lastFinishedPulling="2025-09-29 09:59:31.677926777 +0000 UTC m=+1307.533854805" observedRunningTime="2025-09-29 09:59:32.865859473 +0000 UTC m=+1308.721787501" watchObservedRunningTime="2025-09-29 09:59:32.866359766 +0000 UTC m=+1308.722287794" Sep 29 09:59:32 crc kubenswrapper[4991]: I0929 09:59:32.928476 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.661856027 podStartE2EDuration="58.92844119s" podCreationTimestamp="2025-09-29 09:58:34 +0000 UTC" firstStartedPulling="2025-09-29 09:58:37.367794777 +0000 UTC m=+1253.223722815" lastFinishedPulling="2025-09-29 09:59:31.63437995 +0000 UTC m=+1307.490307978" observedRunningTime="2025-09-29 09:59:32.923708155 +0000 UTC m=+1308.779636193" watchObservedRunningTime="2025-09-29 09:59:32.92844119 +0000 UTC m=+1308.784369218" Sep 29 09:59:32 crc kubenswrapper[4991]: I0929 09:59:32.932435 4991 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/barbican-keystone-listener-5f48988744-sgtlp" podStartSLOduration=4.138162791 podStartE2EDuration="6.932412684s" podCreationTimestamp="2025-09-29 09:59:26 +0000 UTC" firstStartedPulling="2025-09-29 09:59:28.869930942 +0000 UTC m=+1304.725858980" lastFinishedPulling="2025-09-29 09:59:31.664180845 +0000 UTC m=+1307.520108873" observedRunningTime="2025-09-29 09:59:32.898482761 +0000 UTC m=+1308.754410809" watchObservedRunningTime="2025-09-29 09:59:32.932412684 +0000 UTC m=+1308.788340712" Sep 29 09:59:33 crc kubenswrapper[4991]: I0929 09:59:33.824027 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:35 crc kubenswrapper[4991]: I0929 09:59:35.842532 4991 generic.go:334] "Generic (PLEG): container finished" podID="5146a80a-6cba-46a8-85b8-1fb0dd8304cd" containerID="d07ae8509a425d6dce6b0fc826409f04218e45185fa879870b56ea152e6e382d" exitCode=0 Sep 29 09:59:35 crc kubenswrapper[4991]: I0929 09:59:35.842622 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d5lhk" event={"ID":"5146a80a-6cba-46a8-85b8-1fb0dd8304cd","Type":"ContainerDied","Data":"d07ae8509a425d6dce6b0fc826409f04218e45185fa879870b56ea152e6e382d"} Sep 29 09:59:36 crc kubenswrapper[4991]: I0929 09:59:36.463408 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-67bcc55b76-swg9q" Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.406156 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.519094 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-config-data\") pod \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.519247 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-etc-machine-id\") pod \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.519299 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-combined-ca-bundle\") pod \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.519361 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-db-sync-config-data\") pod \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.519385 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5146a80a-6cba-46a8-85b8-1fb0dd8304cd" (UID: "5146a80a-6cba-46a8-85b8-1fb0dd8304cd"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.519437 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-scripts\") pod \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.519483 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8mbc\" (UniqueName: \"kubernetes.io/projected/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-kube-api-access-c8mbc\") pod \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\" (UID: \"5146a80a-6cba-46a8-85b8-1fb0dd8304cd\") " Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.520062 4991 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.528397 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-kube-api-access-c8mbc" (OuterVolumeSpecName: "kube-api-access-c8mbc") pod "5146a80a-6cba-46a8-85b8-1fb0dd8304cd" (UID: "5146a80a-6cba-46a8-85b8-1fb0dd8304cd"). InnerVolumeSpecName "kube-api-access-c8mbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.528500 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-scripts" (OuterVolumeSpecName: "scripts") pod "5146a80a-6cba-46a8-85b8-1fb0dd8304cd" (UID: "5146a80a-6cba-46a8-85b8-1fb0dd8304cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.529042 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5146a80a-6cba-46a8-85b8-1fb0dd8304cd" (UID: "5146a80a-6cba-46a8-85b8-1fb0dd8304cd"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.550152 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5146a80a-6cba-46a8-85b8-1fb0dd8304cd" (UID: "5146a80a-6cba-46a8-85b8-1fb0dd8304cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.579559 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-config-data" (OuterVolumeSpecName: "config-data") pod "5146a80a-6cba-46a8-85b8-1fb0dd8304cd" (UID: "5146a80a-6cba-46a8-85b8-1fb0dd8304cd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.622226 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.622265 4991 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.622277 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.622288 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8mbc\" (UniqueName: \"kubernetes.io/projected/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-kube-api-access-c8mbc\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.622303 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5146a80a-6cba-46a8-85b8-1fb0dd8304cd-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.864043 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d5lhk" event={"ID":"5146a80a-6cba-46a8-85b8-1fb0dd8304cd","Type":"ContainerDied","Data":"4f0e78382f003b5f4997c68db841205dfb83a5ee1ddd8b589af4a37c1ecff2d7"} Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.864096 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f0e78382f003b5f4997c68db841205dfb83a5ee1ddd8b589af4a37c1ecff2d7" Sep 29 09:59:37 crc kubenswrapper[4991]: I0929 09:59:37.864183 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-d5lhk" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.162064 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Sep 29 09:59:38 crc kubenswrapper[4991]: E0929 09:59:38.166603 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5146a80a-6cba-46a8-85b8-1fb0dd8304cd" containerName="cinder-db-sync" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.166633 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="5146a80a-6cba-46a8-85b8-1fb0dd8304cd" containerName="cinder-db-sync" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.166984 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="5146a80a-6cba-46a8-85b8-1fb0dd8304cd" containerName="cinder-db-sync" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.168476 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.172676 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-xh5td" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.173553 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.173580 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.173892 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.242938 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps6ps\" (UniqueName: \"kubernetes.io/projected/c398800a-2dd3-4b01-bbca-f183ace3523a-kube-api-access-ps6ps\") pod \"cinder-scheduler-0\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.243618 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.244027 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c398800a-2dd3-4b01-bbca-f183ace3523a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.250699 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-scripts\") pod \"cinder-scheduler-0\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.250979 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-config-data\") pod \"cinder-scheduler-0\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.251065 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.265695 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.334377 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-b78vv"] Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.334741 4991 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-688c87cc99-b78vv" podUID="30b6f747-71b5-46f4-a4d0-c4e4fd6ff929" containerName="dnsmasq-dns" containerID="cri-o://753c35ff32110dc9fa0d06f41fb4ea63c15283cc2cf337a6ab30691076854718" gracePeriod=10 Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.338623 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.353204 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-scripts\") pod \"cinder-scheduler-0\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.353271 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-config-data\") pod \"cinder-scheduler-0\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.353293 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.353336 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps6ps\" (UniqueName: \"kubernetes.io/projected/c398800a-2dd3-4b01-bbca-f183ace3523a-kube-api-access-ps6ps\") pod \"cinder-scheduler-0\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.358844 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.361585 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c398800a-2dd3-4b01-bbca-f183ace3523a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.361912 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c398800a-2dd3-4b01-bbca-f183ace3523a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.362318 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-config-data\") pod \"cinder-scheduler-0\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.364224 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-scripts\") pod \"cinder-scheduler-0\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.368538 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.370392 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-sw5l5"] Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.372422 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.377073 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.396021 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-sw5l5"] Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.403151 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps6ps\" (UniqueName: \"kubernetes.io/projected/c398800a-2dd3-4b01-bbca-f183ace3523a-kube-api-access-ps6ps\") pod \"cinder-scheduler-0\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.466685 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-sw5l5\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.466761 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-sw5l5\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.466869 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-sw5l5\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.467010 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77s8h\" (UniqueName: \"kubernetes.io/projected/0b587587-d964-446d-9354-d1556ce80381-kube-api-access-77s8h\") pod \"dnsmasq-dns-6bb4fc677f-sw5l5\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.467058 4991 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-sw5l5\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.467089 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-config\") pod \"dnsmasq-dns-6bb4fc677f-sw5l5\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.508045 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.508602 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.510346 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.515828 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.530257 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.568479 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/092ef327-6f29-4765-9f4a-ff9db4679277-etc-machine-id\") pod \"cinder-api-0\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.568558 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-sw5l5\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.568591 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/092ef327-6f29-4765-9f4a-ff9db4679277-logs\") pod \"cinder-api-0\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.568620 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-scripts\") pod \"cinder-api-0\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.568645 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-config-data\") pod \"cinder-api-0\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.568712 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77s8h\" (UniqueName: 
\"kubernetes.io/projected/0b587587-d964-446d-9354-d1556ce80381-kube-api-access-77s8h\") pod \"dnsmasq-dns-6bb4fc677f-sw5l5\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.568747 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-sw5l5\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.568784 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-config\") pod \"dnsmasq-dns-6bb4fc677f-sw5l5\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.568803 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-config-data-custom\") pod \"cinder-api-0\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.568827 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dpgt\" (UniqueName: \"kubernetes.io/projected/092ef327-6f29-4765-9f4a-ff9db4679277-kube-api-access-7dpgt\") pod \"cinder-api-0\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.568927 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-sw5l5\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.568995 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-sw5l5\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.569030 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.570232 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-sw5l5\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.571216 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-ovsdbserver-nb\") pod 
\"dnsmasq-dns-6bb4fc677f-sw5l5\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.572778 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-sw5l5\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.576808 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-sw5l5\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.577061 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-config\") pod \"dnsmasq-dns-6bb4fc677f-sw5l5\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.595672 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77s8h\" (UniqueName: \"kubernetes.io/projected/0b587587-d964-446d-9354-d1556ce80381-kube-api-access-77s8h\") pod \"dnsmasq-dns-6bb4fc677f-sw5l5\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.671707 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.671816 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/092ef327-6f29-4765-9f4a-ff9db4679277-etc-machine-id\") pod \"cinder-api-0\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.671847 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/092ef327-6f29-4765-9f4a-ff9db4679277-logs\") pod \"cinder-api-0\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.671887 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-scripts\") pod \"cinder-api-0\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.671906 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-config-data\") pod \"cinder-api-0\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.671987 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-config-data-custom\") pod \"cinder-api-0\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.672008 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dpgt\" (UniqueName: \"kubernetes.io/projected/092ef327-6f29-4765-9f4a-ff9db4679277-kube-api-access-7dpgt\") pod \"cinder-api-0\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.672391 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/092ef327-6f29-4765-9f4a-ff9db4679277-etc-machine-id\") pod \"cinder-api-0\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.675443 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/092ef327-6f29-4765-9f4a-ff9db4679277-logs\") pod \"cinder-api-0\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.676644 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-scripts\") pod \"cinder-api-0\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.681873 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-config-data\") pod \"cinder-api-0\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.682344 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.683634 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-config-data-custom\") pod \"cinder-api-0\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.698465 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dpgt\" (UniqueName: \"kubernetes.io/projected/092ef327-6f29-4765-9f4a-ff9db4679277-kube-api-access-7dpgt\") pod \"cinder-api-0\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.763791 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.778407 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.869307 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-797678c8fc-75zg9" Sep 29 09:59:38 crc kubenswrapper[4991]: I0929 09:59:38.939426 4991 generic.go:334] "Generic (PLEG): container finished" podID="30b6f747-71b5-46f4-a4d0-c4e4fd6ff929" containerID="753c35ff32110dc9fa0d06f41fb4ea63c15283cc2cf337a6ab30691076854718" exitCode=0 Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.066851 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-67bcc55b76-swg9q"] Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.067306 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.067414 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-b78vv" event={"ID":"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929","Type":"ContainerDied","Data":"753c35ff32110dc9fa0d06f41fb4ea63c15283cc2cf337a6ab30691076854718"} Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.068643 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-67bcc55b76-swg9q" podUID="eab57318-1d9e-4b78-afe1-84d1ef4793c3" containerName="neutron-api" containerID="cri-o://b12a9e171d230ee46aeb063882764b4414a5684f9ea5e4eb91d7bdebd3ae4d71" gracePeriod=30 Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.069244 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-67bcc55b76-swg9q" podUID="eab57318-1d9e-4b78-afe1-84d1ef4793c3" containerName="neutron-httpd" containerID="cri-o://a8d2dd4d6d7e6b9ec5b1a3646d49889e05ae2c59d6a7a85a54a3075d0e3e9986" gracePeriod=30 Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.306894 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.419753 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-ovsdbserver-sb\") pod \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.419829 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-dns-swift-storage-0\") pod \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.419878 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-ovsdbserver-nb\") pod \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.419976 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24fv7\" (UniqueName: \"kubernetes.io/projected/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-kube-api-access-24fv7\") pod \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.420180 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-dns-svc\") pod \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.420225 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-config\") pod \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\" (UID: \"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929\") " Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.477170 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-kube-api-access-24fv7" (OuterVolumeSpecName: "kube-api-access-24fv7") pod "30b6f747-71b5-46f4-a4d0-c4e4fd6ff929" (UID: "30b6f747-71b5-46f4-a4d0-c4e4fd6ff929"). InnerVolumeSpecName "kube-api-access-24fv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.526251 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24fv7\" (UniqueName: \"kubernetes.io/projected/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-kube-api-access-24fv7\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.548563 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "30b6f747-71b5-46f4-a4d0-c4e4fd6ff929" (UID: "30b6f747-71b5-46f4-a4d0-c4e4fd6ff929"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.633643 4991 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.641494 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "30b6f747-71b5-46f4-a4d0-c4e4fd6ff929" (UID: "30b6f747-71b5-46f4-a4d0-c4e4fd6ff929"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.657123 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-config" (OuterVolumeSpecName: "config") pod "30b6f747-71b5-46f4-a4d0-c4e4fd6ff929" (UID: "30b6f747-71b5-46f4-a4d0-c4e4fd6ff929"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.657808 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "30b6f747-71b5-46f4-a4d0-c4e4fd6ff929" (UID: "30b6f747-71b5-46f4-a4d0-c4e4fd6ff929"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.682040 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "30b6f747-71b5-46f4-a4d0-c4e4fd6ff929" (UID: "30b6f747-71b5-46f4-a4d0-c4e4fd6ff929"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.684586 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.737542 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.737576 4991 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.737604 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.737646 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.958087 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c398800a-2dd3-4b01-bbca-f183ace3523a","Type":"ContainerStarted","Data":"ab315d35a05a12b0bc51eb38b62efaa836af0f0e930c62f072639c25c0802865"} Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.960517 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-b78vv" event={"ID":"30b6f747-71b5-46f4-a4d0-c4e4fd6ff929","Type":"ContainerDied","Data":"82e3725d7215bd3de7a2216ff0a71cf442eeb1bf76c6a235dc259b899547516e"} Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.960552 4991 scope.go:117] "RemoveContainer" containerID="753c35ff32110dc9fa0d06f41fb4ea63c15283cc2cf337a6ab30691076854718" Sep 29 09:59:39 crc kubenswrapper[4991]: I0929 09:59:39.960683 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-b78vv" Sep 29 09:59:40 crc kubenswrapper[4991]: I0929 09:59:40.029633 4991 scope.go:117] "RemoveContainer" containerID="0487f234d318c995223a0cda835dacfedee78964290a7ef0411746a10c6880db" Sep 29 09:59:40 crc kubenswrapper[4991]: I0929 09:59:40.033531 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-b78vv"] Sep 29 09:59:40 crc kubenswrapper[4991]: I0929 09:59:40.069539 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-b78vv"] Sep 29 09:59:40 crc kubenswrapper[4991]: I0929 09:59:40.132536 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:40 crc kubenswrapper[4991]: I0929 09:59:40.212505 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Sep 29 09:59:40 crc kubenswrapper[4991]: I0929 09:59:40.255293 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-sw5l5"] Sep 29 09:59:40 crc kubenswrapper[4991]: I0929 09:59:40.975295 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30b6f747-71b5-46f4-a4d0-c4e4fd6ff929" path="/var/lib/kubelet/pods/30b6f747-71b5-46f4-a4d0-c4e4fd6ff929/volumes" Sep 29 09:59:40 crc kubenswrapper[4991]: I0929 09:59:40.997326 4991 generic.go:334] "Generic (PLEG): container finished" podID="eab57318-1d9e-4b78-afe1-84d1ef4793c3" containerID="a8d2dd4d6d7e6b9ec5b1a3646d49889e05ae2c59d6a7a85a54a3075d0e3e9986" exitCode=0 Sep 29 09:59:40 crc kubenswrapper[4991]: I0929 09:59:40.997359 4991 generic.go:334] "Generic (PLEG): container finished" podID="eab57318-1d9e-4b78-afe1-84d1ef4793c3" containerID="b12a9e171d230ee46aeb063882764b4414a5684f9ea5e4eb91d7bdebd3ae4d71" exitCode=0 Sep 29 09:59:40 crc kubenswrapper[4991]: I0929 09:59:40.997401 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67bcc55b76-swg9q" event={"ID":"eab57318-1d9e-4b78-afe1-84d1ef4793c3","Type":"ContainerDied","Data":"a8d2dd4d6d7e6b9ec5b1a3646d49889e05ae2c59d6a7a85a54a3075d0e3e9986"} Sep 29 09:59:40 crc kubenswrapper[4991]: I0929 09:59:40.997429 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67bcc55b76-swg9q" event={"ID":"eab57318-1d9e-4b78-afe1-84d1ef4793c3","Type":"ContainerDied","Data":"b12a9e171d230ee46aeb063882764b4414a5684f9ea5e4eb91d7bdebd3ae4d71"} Sep 29 09:59:41 crc kubenswrapper[4991]: I0929 09:59:41.015256 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"092ef327-6f29-4765-9f4a-ff9db4679277","Type":"ContainerStarted","Data":"1a86825b4f088bab9a4abc6fe54b85cc2316cb8605f05f70147b5e1a840f791f"} Sep 29 09:59:41 crc kubenswrapper[4991]: I0929 09:59:41.038635 4991 generic.go:334] "Generic (PLEG): container finished" podID="0b587587-d964-446d-9354-d1556ce80381" containerID="4bbcd8eebd77311eab1ef37c76c35f5b03d64fb36820869510812e42b1019ef3" exitCode=0 Sep 29 09:59:41 crc kubenswrapper[4991]: I0929 09:59:41.038686 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" event={"ID":"0b587587-d964-446d-9354-d1556ce80381","Type":"ContainerDied","Data":"4bbcd8eebd77311eab1ef37c76c35f5b03d64fb36820869510812e42b1019ef3"} Sep 29 09:59:41 crc kubenswrapper[4991]: I0929 09:59:41.038712 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" 
event={"ID":"0b587587-d964-446d-9354-d1556ce80381","Type":"ContainerStarted","Data":"ef5a822aa3b68256f3c182225701833ccda6fb5534bacad67ab20ce2fbcb67f7"} Sep 29 09:59:41 crc kubenswrapper[4991]: I0929 09:59:41.315341 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Sep 29 09:59:41 crc kubenswrapper[4991]: I0929 09:59:41.794774 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67bcc55b76-swg9q" Sep 29 09:59:41 crc kubenswrapper[4991]: I0929 09:59:41.820781 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-combined-ca-bundle\") pod \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\" (UID: \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\") " Sep 29 09:59:41 crc kubenswrapper[4991]: I0929 09:59:41.820899 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n2r5\" (UniqueName: \"kubernetes.io/projected/eab57318-1d9e-4b78-afe1-84d1ef4793c3-kube-api-access-2n2r5\") pod \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\" (UID: \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\") " Sep 29 09:59:41 crc kubenswrapper[4991]: I0929 09:59:41.821062 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-config\") pod \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\" (UID: \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\") " Sep 29 09:59:41 crc kubenswrapper[4991]: I0929 09:59:41.821101 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-httpd-config\") pod \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\" (UID: \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\") " Sep 29 09:59:41 crc kubenswrapper[4991]: I0929 09:59:41.821127 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-ovndb-tls-certs\") pod \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\" (UID: \"eab57318-1d9e-4b78-afe1-84d1ef4793c3\") " Sep 29 09:59:41 crc kubenswrapper[4991]: I0929 09:59:41.835195 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "eab57318-1d9e-4b78-afe1-84d1ef4793c3" (UID: "eab57318-1d9e-4b78-afe1-84d1ef4793c3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:41 crc kubenswrapper[4991]: I0929 09:59:41.850195 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eab57318-1d9e-4b78-afe1-84d1ef4793c3-kube-api-access-2n2r5" (OuterVolumeSpecName: "kube-api-access-2n2r5") pod "eab57318-1d9e-4b78-afe1-84d1ef4793c3" (UID: "eab57318-1d9e-4b78-afe1-84d1ef4793c3"). InnerVolumeSpecName "kube-api-access-2n2r5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:59:41 crc kubenswrapper[4991]: I0929 09:59:41.929540 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n2r5\" (UniqueName: \"kubernetes.io/projected/eab57318-1d9e-4b78-afe1-84d1ef4793c3-kube-api-access-2n2r5\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:41 crc kubenswrapper[4991]: I0929 09:59:41.929571 4991 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-httpd-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:42 crc kubenswrapper[4991]: I0929 09:59:42.152765 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-config" (OuterVolumeSpecName: "config") pod "eab57318-1d9e-4b78-afe1-84d1ef4793c3" (UID: "eab57318-1d9e-4b78-afe1-84d1ef4793c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:42 crc kubenswrapper[4991]: I0929 09:59:42.160779 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67bcc55b76-swg9q" event={"ID":"eab57318-1d9e-4b78-afe1-84d1ef4793c3","Type":"ContainerDied","Data":"e4bc131ed73bf9acc3e470c0b49cef0b14aae397d1c0a22c40f1f7cd5283a70e"} Sep 29 09:59:42 crc kubenswrapper[4991]: I0929 09:59:42.160828 4991 scope.go:117] "RemoveContainer" containerID="a8d2dd4d6d7e6b9ec5b1a3646d49889e05ae2c59d6a7a85a54a3075d0e3e9986" Sep 29 09:59:42 crc kubenswrapper[4991]: I0929 09:59:42.160976 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67bcc55b76-swg9q" Sep 29 09:59:42 crc kubenswrapper[4991]: I0929 09:59:42.237101 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eab57318-1d9e-4b78-afe1-84d1ef4793c3" (UID: "eab57318-1d9e-4b78-afe1-84d1ef4793c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:42 crc kubenswrapper[4991]: I0929 09:59:42.242370 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:42 crc kubenswrapper[4991]: I0929 09:59:42.242398 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:42 crc kubenswrapper[4991]: I0929 09:59:42.258774 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" event={"ID":"0b587587-d964-446d-9354-d1556ce80381","Type":"ContainerStarted","Data":"5dd6bcac4175475de3c1a3ffdeca1e534e905ec85e16ffec7592d36b8dee0549"} Sep 29 09:59:42 crc kubenswrapper[4991]: I0929 09:59:42.262092 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:42 crc kubenswrapper[4991]: I0929 09:59:42.268580 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "eab57318-1d9e-4b78-afe1-84d1ef4793c3" (UID: "eab57318-1d9e-4b78-afe1-84d1ef4793c3"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:42 crc kubenswrapper[4991]: I0929 09:59:42.313883 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" podStartSLOduration=4.313866631 podStartE2EDuration="4.313866631s" podCreationTimestamp="2025-09-29 09:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:59:42.308483209 +0000 UTC m=+1318.164411237" watchObservedRunningTime="2025-09-29 09:59:42.313866631 +0000 UTC m=+1318.169794659" Sep 29 09:59:42 crc kubenswrapper[4991]: I0929 09:59:42.333917 4991 scope.go:117] "RemoveContainer" containerID="b12a9e171d230ee46aeb063882764b4414a5684f9ea5e4eb91d7bdebd3ae4d71" Sep 29 09:59:42 crc kubenswrapper[4991]: I0929 09:59:42.358240 4991 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eab57318-1d9e-4b78-afe1-84d1ef4793c3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:42 crc kubenswrapper[4991]: I0929 09:59:42.557013 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-67bcc55b76-swg9q"] Sep 29 09:59:42 crc kubenswrapper[4991]: I0929 09:59:42.598359 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-67bcc55b76-swg9q"] Sep 29 09:59:42 crc kubenswrapper[4991]: I0929 09:59:42.754836 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:42 crc kubenswrapper[4991]: I0929 09:59:42.948877 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eab57318-1d9e-4b78-afe1-84d1ef4793c3" path="/var/lib/kubelet/pods/eab57318-1d9e-4b78-afe1-84d1ef4793c3/volumes" Sep 29 09:59:43 crc kubenswrapper[4991]: I0929 09:59:43.181362 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7695fcf79b-8n9gp" Sep 29 09:59:43 crc kubenswrapper[4991]: I0929 09:59:43.290656 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-66596f498-zwm7z"] Sep 29 09:59:43 crc kubenswrapper[4991]: I0929 09:59:43.301177 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-66596f498-zwm7z" podUID="a13c3d37-d0d3-4465-bd18-432600952dde" containerName="barbican-api-log" containerID="cri-o://b07c212e267167a7a642d9f370dc7fc494a75e86f742de5fa805caa6d7977f4a" gracePeriod=30 Sep 29 09:59:43 crc kubenswrapper[4991]: I0929 09:59:43.301683 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-66596f498-zwm7z" podUID="a13c3d37-d0d3-4465-bd18-432600952dde" containerName="barbican-api" containerID="cri-o://f9405b39bee07a2d6c6e89f9ddb1adcaba1728bcf10e7120e63262d5b0df5e44" gracePeriod=30 Sep 29 09:59:43 crc kubenswrapper[4991]: I0929 09:59:43.319566 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-66596f498-zwm7z" podUID="a13c3d37-d0d3-4465-bd18-432600952dde" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.199:9311/healthcheck\": EOF" Sep 29 09:59:43 crc kubenswrapper[4991]: I0929 09:59:43.345883 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"092ef327-6f29-4765-9f4a-ff9db4679277","Type":"ContainerStarted","Data":"5adbadc93b53425b4d28c14a05ede6f8efb5105c0e5c3b8169d55e554ceb0c25"} Sep 29 09:59:43 crc kubenswrapper[4991]: 
I0929 09:59:43.346114 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="092ef327-6f29-4765-9f4a-ff9db4679277" containerName="cinder-api-log" containerID="cri-o://5adbadc93b53425b4d28c14a05ede6f8efb5105c0e5c3b8169d55e554ceb0c25" gracePeriod=30 Sep 29 09:59:43 crc kubenswrapper[4991]: I0929 09:59:43.346399 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Sep 29 09:59:43 crc kubenswrapper[4991]: I0929 09:59:43.346895 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="092ef327-6f29-4765-9f4a-ff9db4679277" containerName="cinder-api" containerID="cri-o://5e716fb88431b8386e9e3dd7ce797ac57cd68e424690fb458e46de1bb09a8573" gracePeriod=30 Sep 29 09:59:43 crc kubenswrapper[4991]: I0929 09:59:43.356938 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c398800a-2dd3-4b01-bbca-f183ace3523a","Type":"ContainerStarted","Data":"7ae92c36df4be013c334fd95af763497db9b946e7a0633f897745bedc06352ea"} Sep 29 09:59:43 crc kubenswrapper[4991]: I0929 09:59:43.372936 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.372914735 podStartE2EDuration="5.372914735s" podCreationTimestamp="2025-09-29 09:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:59:43.365646843 +0000 UTC m=+1319.221574881" watchObservedRunningTime="2025-09-29 09:59:43.372914735 +0000 UTC m=+1319.228842763" Sep 29 09:59:44 crc kubenswrapper[4991]: I0929 09:59:44.379770 4991 generic.go:334] "Generic (PLEG): container finished" podID="092ef327-6f29-4765-9f4a-ff9db4679277" containerID="5adbadc93b53425b4d28c14a05ede6f8efb5105c0e5c3b8169d55e554ceb0c25" exitCode=143 Sep 29 09:59:44 crc kubenswrapper[4991]: I0929 09:59:44.379844 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"092ef327-6f29-4765-9f4a-ff9db4679277","Type":"ContainerStarted","Data":"5e716fb88431b8386e9e3dd7ce797ac57cd68e424690fb458e46de1bb09a8573"} Sep 29 09:59:44 crc kubenswrapper[4991]: I0929 09:59:44.380485 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"092ef327-6f29-4765-9f4a-ff9db4679277","Type":"ContainerDied","Data":"5adbadc93b53425b4d28c14a05ede6f8efb5105c0e5c3b8169d55e554ceb0c25"} Sep 29 09:59:44 crc kubenswrapper[4991]: I0929 09:59:44.383284 4991 generic.go:334] "Generic (PLEG): container finished" podID="a13c3d37-d0d3-4465-bd18-432600952dde" containerID="b07c212e267167a7a642d9f370dc7fc494a75e86f742de5fa805caa6d7977f4a" exitCode=143 Sep 29 09:59:44 crc kubenswrapper[4991]: I0929 09:59:44.383352 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66596f498-zwm7z" event={"ID":"a13c3d37-d0d3-4465-bd18-432600952dde","Type":"ContainerDied","Data":"b07c212e267167a7a642d9f370dc7fc494a75e86f742de5fa805caa6d7977f4a"} Sep 29 09:59:44 crc kubenswrapper[4991]: I0929 09:59:44.388780 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c398800a-2dd3-4b01-bbca-f183ace3523a","Type":"ContainerStarted","Data":"8610ac01442916baa6e2827ccb1f7799666759c26ba3ad9e9f6ee2b4c5059f04"} Sep 29 09:59:44 crc kubenswrapper[4991]: I0929 09:59:44.414151 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.019340558 podStartE2EDuration="6.414126958s" podCreationTimestamp="2025-09-29 09:59:38 +0000 UTC" firstStartedPulling="2025-09-29 09:59:39.68651672 +0000 UTC m=+1315.542444748" lastFinishedPulling="2025-09-29 09:59:41.08130312 +0000 UTC m=+1316.937231148" observedRunningTime="2025-09-29 09:59:44.407408501 +0000 UTC m=+1320.263336529" watchObservedRunningTime="2025-09-29 09:59:44.414126958 +0000 UTC m=+1320.270054996"
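The pod_startup_latency_tracker.go:104 entry above reports two numbers for cinder-scheduler-0: podStartE2EDuration, the time from podCreationTimestamp to the kubelet observing the pod running, and podStartSLOduration, which appears to exclude the image-pull window (for cinder-api-0, where the pull timestamps are the zero time, the two values are identical). The arithmetic checks out; a quick verification:

```go
// Recomputing the cinder-scheduler-0 startup numbers from the entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	created := time.Date(2025, 9, 29, 9, 59, 38, 0, time.UTC)           // podCreationTimestamp
	observed := time.Date(2025, 9, 29, 9, 59, 44, 414126958, time.UTC)  // watchObservedRunningTime
	firstPull := time.Date(2025, 9, 29, 9, 59, 39, 686516720, time.UTC) // firstStartedPulling
	lastPull := time.Date(2025, 9, 29, 9, 59, 41, 81303120, time.UTC)   // lastFinishedPulling

	e2e := observed.Sub(created)     // 6.414126958s = podStartE2EDuration
	pull := lastPull.Sub(firstPull)  // 1.3947864s spent pulling the image
	fmt.Println(e2e, pull, e2e-pull) // e2e-pull = 5.019340558s = podStartSLOduration
}
```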
pod="openstack/cinder-scheduler-0" podStartSLOduration=5.019340558 podStartE2EDuration="6.414126958s" podCreationTimestamp="2025-09-29 09:59:38 +0000 UTC" firstStartedPulling="2025-09-29 09:59:39.68651672 +0000 UTC m=+1315.542444748" lastFinishedPulling="2025-09-29 09:59:41.08130312 +0000 UTC m=+1316.937231148" observedRunningTime="2025-09-29 09:59:44.407408501 +0000 UTC m=+1320.263336529" watchObservedRunningTime="2025-09-29 09:59:44.414126958 +0000 UTC m=+1320.270054996" Sep 29 09:59:47 crc kubenswrapper[4991]: I0929 09:59:47.750062 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-66596f498-zwm7z" podUID="a13c3d37-d0d3-4465-bd18-432600952dde" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.199:9311/healthcheck\": read tcp 10.217.0.2:55538->10.217.0.199:9311: read: connection reset by peer" Sep 29 09:59:47 crc kubenswrapper[4991]: I0929 09:59:47.750881 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-66596f498-zwm7z" podUID="a13c3d37-d0d3-4465-bd18-432600952dde" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.199:9311/healthcheck\": read tcp 10.217.0.2:55548->10.217.0.199:9311: read: connection reset by peer" Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.441552 4991 generic.go:334] "Generic (PLEG): container finished" podID="a13c3d37-d0d3-4465-bd18-432600952dde" containerID="f9405b39bee07a2d6c6e89f9ddb1adcaba1728bcf10e7120e63262d5b0df5e44" exitCode=0 Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.441751 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66596f498-zwm7z" event={"ID":"a13c3d37-d0d3-4465-bd18-432600952dde","Type":"ContainerDied","Data":"f9405b39bee07a2d6c6e89f9ddb1adcaba1728bcf10e7120e63262d5b0df5e44"} Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.509687 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.673230 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.751741 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a13c3d37-d0d3-4465-bd18-432600952dde-config-data-custom\") pod \"a13c3d37-d0d3-4465-bd18-432600952dde\" (UID: \"a13c3d37-d0d3-4465-bd18-432600952dde\") " Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.751825 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a13c3d37-d0d3-4465-bd18-432600952dde-combined-ca-bundle\") pod \"a13c3d37-d0d3-4465-bd18-432600952dde\" (UID: \"a13c3d37-d0d3-4465-bd18-432600952dde\") " Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.751855 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a13c3d37-d0d3-4465-bd18-432600952dde-config-data\") pod \"a13c3d37-d0d3-4465-bd18-432600952dde\" (UID: \"a13c3d37-d0d3-4465-bd18-432600952dde\") " Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.751917 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xnhd\" (UniqueName: \"kubernetes.io/projected/a13c3d37-d0d3-4465-bd18-432600952dde-kube-api-access-5xnhd\") pod \"a13c3d37-d0d3-4465-bd18-432600952dde\" (UID: \"a13c3d37-d0d3-4465-bd18-432600952dde\") " Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.752363 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a13c3d37-d0d3-4465-bd18-432600952dde-logs\") pod \"a13c3d37-d0d3-4465-bd18-432600952dde\" (UID: \"a13c3d37-d0d3-4465-bd18-432600952dde\") " Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.753503 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a13c3d37-d0d3-4465-bd18-432600952dde-logs" (OuterVolumeSpecName: "logs") pod "a13c3d37-d0d3-4465-bd18-432600952dde" (UID: "a13c3d37-d0d3-4465-bd18-432600952dde"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.762993 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a13c3d37-d0d3-4465-bd18-432600952dde-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a13c3d37-d0d3-4465-bd18-432600952dde" (UID: "a13c3d37-d0d3-4465-bd18-432600952dde"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.763186 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a13c3d37-d0d3-4465-bd18-432600952dde-kube-api-access-5xnhd" (OuterVolumeSpecName: "kube-api-access-5xnhd") pod "a13c3d37-d0d3-4465-bd18-432600952dde" (UID: "a13c3d37-d0d3-4465-bd18-432600952dde"). InnerVolumeSpecName "kube-api-access-5xnhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.766455 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.769662 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.827663 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a13c3d37-d0d3-4465-bd18-432600952dde-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a13c3d37-d0d3-4465-bd18-432600952dde" (UID: "a13c3d37-d0d3-4465-bd18-432600952dde"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.842719 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-gwbq8"] Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.844342 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8" podUID="e7db6403-2f05-4d5f-a563-9afc14bbde8e" containerName="dnsmasq-dns" containerID="cri-o://b3e321ecddd3ebfecb1243b55e74159d3ae5f86bdf564f1cb485fa91a9cefb96" gracePeriod=10 Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.862226 4991 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a13c3d37-d0d3-4465-bd18-432600952dde-config-data-custom\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.862266 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a13c3d37-d0d3-4465-bd18-432600952dde-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.862286 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xnhd\" (UniqueName: \"kubernetes.io/projected/a13c3d37-d0d3-4465-bd18-432600952dde-kube-api-access-5xnhd\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.862296 4991 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a13c3d37-d0d3-4465-bd18-432600952dde-logs\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.936338 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a13c3d37-d0d3-4465-bd18-432600952dde-config-data" (OuterVolumeSpecName: "config-data") pod "a13c3d37-d0d3-4465-bd18-432600952dde" (UID: "a13c3d37-d0d3-4465-bd18-432600952dde"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:48 crc kubenswrapper[4991]: I0929 09:59:48.964629 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a13c3d37-d0d3-4465-bd18-432600952dde-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.474443 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66596f498-zwm7z" event={"ID":"a13c3d37-d0d3-4465-bd18-432600952dde","Type":"ContainerDied","Data":"c1dc2c86b2197a5da50f86d971be10c8b8786842d403aabb7c1d0e696d0c1286"} Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.480028 4991 scope.go:117] "RemoveContainer" containerID="f9405b39bee07a2d6c6e89f9ddb1adcaba1728bcf10e7120e63262d5b0df5e44" Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.474592 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-66596f498-zwm7z" Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.483375 4991 generic.go:334] "Generic (PLEG): container finished" podID="e7db6403-2f05-4d5f-a563-9afc14bbde8e" containerID="b3e321ecddd3ebfecb1243b55e74159d3ae5f86bdf564f1cb485fa91a9cefb96" exitCode=0 Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.483504 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8" event={"ID":"e7db6403-2f05-4d5f-a563-9afc14bbde8e","Type":"ContainerDied","Data":"b3e321ecddd3ebfecb1243b55e74159d3ae5f86bdf564f1cb485fa91a9cefb96"} Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.546130 4991 scope.go:117] "RemoveContainer" containerID="b07c212e267167a7a642d9f370dc7fc494a75e86f742de5fa805caa6d7977f4a" Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.573576 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.601117 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-66596f498-zwm7z"] Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.621112 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-66596f498-zwm7z"] Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.725103 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8" Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.805659 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-ovsdbserver-nb\") pod \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.805774 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-ovsdbserver-sb\") pod \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.805840 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-dns-swift-storage-0\") pod \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.805899 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-dns-svc\") pod \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.805945 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-config\") pod \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.806060 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhzcn\" (UniqueName: \"kubernetes.io/projected/e7db6403-2f05-4d5f-a563-9afc14bbde8e-kube-api-access-nhzcn\") pod \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\" (UID: \"e7db6403-2f05-4d5f-a563-9afc14bbde8e\") " Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.821295 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7db6403-2f05-4d5f-a563-9afc14bbde8e-kube-api-access-nhzcn" (OuterVolumeSpecName: "kube-api-access-nhzcn") pod "e7db6403-2f05-4d5f-a563-9afc14bbde8e" (UID: "e7db6403-2f05-4d5f-a563-9afc14bbde8e"). InnerVolumeSpecName "kube-api-access-nhzcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.869769 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e7db6403-2f05-4d5f-a563-9afc14bbde8e" (UID: "e7db6403-2f05-4d5f-a563-9afc14bbde8e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.892983 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e7db6403-2f05-4d5f-a563-9afc14bbde8e" (UID: "e7db6403-2f05-4d5f-a563-9afc14bbde8e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.894783 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-config" (OuterVolumeSpecName: "config") pod "e7db6403-2f05-4d5f-a563-9afc14bbde8e" (UID: "e7db6403-2f05-4d5f-a563-9afc14bbde8e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.901478 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e7db6403-2f05-4d5f-a563-9afc14bbde8e" (UID: "e7db6403-2f05-4d5f-a563-9afc14bbde8e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.910300 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhzcn\" (UniqueName: \"kubernetes.io/projected/e7db6403-2f05-4d5f-a563-9afc14bbde8e-kube-api-access-nhzcn\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.910331 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.910339 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.910348 4991 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.910357 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-config\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:49 crc kubenswrapper[4991]: I0929 09:59:49.919456 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e7db6403-2f05-4d5f-a563-9afc14bbde8e" (UID: "e7db6403-2f05-4d5f-a563-9afc14bbde8e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 09:59:50 crc kubenswrapper[4991]: I0929 09:59:50.012421 4991 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7db6403-2f05-4d5f-a563-9afc14bbde8e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:50 crc kubenswrapper[4991]: I0929 09:59:50.484686 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-74c8bd59d-2nxsz" Sep 29 09:59:50 crc kubenswrapper[4991]: I0929 09:59:50.516582 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c398800a-2dd3-4b01-bbca-f183ace3523a" containerName="cinder-scheduler" containerID="cri-o://7ae92c36df4be013c334fd95af763497db9b946e7a0633f897745bedc06352ea" gracePeriod=30 Sep 29 09:59:50 crc kubenswrapper[4991]: I0929 09:59:50.516980 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8" Sep 29 09:59:50 crc kubenswrapper[4991]: I0929 09:59:50.517043 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-gwbq8" event={"ID":"e7db6403-2f05-4d5f-a563-9afc14bbde8e","Type":"ContainerDied","Data":"8995a6f3d0dfb25c4d83f76dfdf4caf0d4c1e1ab4afa19b9981914d5da1b7513"} Sep 29 09:59:50 crc kubenswrapper[4991]: I0929 09:59:50.517078 4991 scope.go:117] "RemoveContainer" containerID="b3e321ecddd3ebfecb1243b55e74159d3ae5f86bdf564f1cb485fa91a9cefb96" Sep 29 09:59:50 crc kubenswrapper[4991]: I0929 09:59:50.517470 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c398800a-2dd3-4b01-bbca-f183ace3523a" containerName="probe" containerID="cri-o://8610ac01442916baa6e2827ccb1f7799666759c26ba3ad9e9f6ee2b4c5059f04" gracePeriod=30 Sep 29 09:59:50 crc kubenswrapper[4991]: I0929 09:59:50.550357 4991 scope.go:117] "RemoveContainer" containerID="54ca1fdb917d71fd0c5a1382d0efeb71dd34845ffd5e248afa7cbdd9b9db1ce9" Sep 29 09:59:50 crc kubenswrapper[4991]: I0929 09:59:50.578619 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-gwbq8"] Sep 29 09:59:50 crc kubenswrapper[4991]: I0929 09:59:50.612258 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-gwbq8"] Sep 29 09:59:50 crc kubenswrapper[4991]: I0929 09:59:50.966085 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a13c3d37-d0d3-4465-bd18-432600952dde" path="/var/lib/kubelet/pods/a13c3d37-d0d3-4465-bd18-432600952dde/volumes" Sep 29 09:59:50 crc kubenswrapper[4991]: I0929 09:59:50.966859 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7db6403-2f05-4d5f-a563-9afc14bbde8e" path="/var/lib/kubelet/pods/e7db6403-2f05-4d5f-a563-9afc14bbde8e/volumes" Sep 29 09:59:51 crc kubenswrapper[4991]: I0929 09:59:51.110937 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Sep 29 09:59:51 crc kubenswrapper[4991]: I0929 09:59:51.542174 4991 generic.go:334] "Generic (PLEG): container finished" podID="c398800a-2dd3-4b01-bbca-f183ace3523a" containerID="8610ac01442916baa6e2827ccb1f7799666759c26ba3ad9e9f6ee2b4c5059f04" exitCode=0 Sep 29 09:59:51 crc kubenswrapper[4991]: I0929 09:59:51.542475 4991 generic.go:334] "Generic (PLEG): container finished" podID="c398800a-2dd3-4b01-bbca-f183ace3523a" 
containerID="7ae92c36df4be013c334fd95af763497db9b946e7a0633f897745bedc06352ea" exitCode=0 Sep 29 09:59:51 crc kubenswrapper[4991]: I0929 09:59:51.542247 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c398800a-2dd3-4b01-bbca-f183ace3523a","Type":"ContainerDied","Data":"8610ac01442916baa6e2827ccb1f7799666759c26ba3ad9e9f6ee2b4c5059f04"} Sep 29 09:59:51 crc kubenswrapper[4991]: I0929 09:59:51.542540 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c398800a-2dd3-4b01-bbca-f183ace3523a","Type":"ContainerDied","Data":"7ae92c36df4be013c334fd95af763497db9b946e7a0633f897745bedc06352ea"} Sep 29 09:59:51 crc kubenswrapper[4991]: I0929 09:59:51.829638 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Sep 29 09:59:51 crc kubenswrapper[4991]: I0929 09:59:51.966295 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-combined-ca-bundle\") pod \"c398800a-2dd3-4b01-bbca-f183ace3523a\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " Sep 29 09:59:51 crc kubenswrapper[4991]: I0929 09:59:51.966371 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-scripts\") pod \"c398800a-2dd3-4b01-bbca-f183ace3523a\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " Sep 29 09:59:51 crc kubenswrapper[4991]: I0929 09:59:51.966457 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c398800a-2dd3-4b01-bbca-f183ace3523a-etc-machine-id\") pod \"c398800a-2dd3-4b01-bbca-f183ace3523a\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " Sep 29 09:59:51 crc kubenswrapper[4991]: I0929 09:59:51.966564 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c398800a-2dd3-4b01-bbca-f183ace3523a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c398800a-2dd3-4b01-bbca-f183ace3523a" (UID: "c398800a-2dd3-4b01-bbca-f183ace3523a"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 09:59:51 crc kubenswrapper[4991]: I0929 09:59:51.966587 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-config-data\") pod \"c398800a-2dd3-4b01-bbca-f183ace3523a\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " Sep 29 09:59:51 crc kubenswrapper[4991]: I0929 09:59:51.966608 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps6ps\" (UniqueName: \"kubernetes.io/projected/c398800a-2dd3-4b01-bbca-f183ace3523a-kube-api-access-ps6ps\") pod \"c398800a-2dd3-4b01-bbca-f183ace3523a\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " Sep 29 09:59:51 crc kubenswrapper[4991]: I0929 09:59:51.967242 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-config-data-custom\") pod \"c398800a-2dd3-4b01-bbca-f183ace3523a\" (UID: \"c398800a-2dd3-4b01-bbca-f183ace3523a\") " Sep 29 09:59:51 crc kubenswrapper[4991]: I0929 09:59:51.967723 4991 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c398800a-2dd3-4b01-bbca-f183ace3523a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:51 crc kubenswrapper[4991]: I0929 09:59:51.972730 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c398800a-2dd3-4b01-bbca-f183ace3523a" (UID: "c398800a-2dd3-4b01-bbca-f183ace3523a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:51 crc kubenswrapper[4991]: I0929 09:59:51.972755 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c398800a-2dd3-4b01-bbca-f183ace3523a-kube-api-access-ps6ps" (OuterVolumeSpecName: "kube-api-access-ps6ps") pod "c398800a-2dd3-4b01-bbca-f183ace3523a" (UID: "c398800a-2dd3-4b01-bbca-f183ace3523a"). InnerVolumeSpecName "kube-api-access-ps6ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 09:59:51 crc kubenswrapper[4991]: I0929 09:59:51.982082 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-scripts" (OuterVolumeSpecName: "scripts") pod "c398800a-2dd3-4b01-bbca-f183ace3523a" (UID: "c398800a-2dd3-4b01-bbca-f183ace3523a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.024034 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Sep 29 09:59:52 crc kubenswrapper[4991]: E0929 09:59:52.025122 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eab57318-1d9e-4b78-afe1-84d1ef4793c3" containerName="neutron-httpd" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.025179 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="eab57318-1d9e-4b78-afe1-84d1ef4793c3" containerName="neutron-httpd" Sep 29 09:59:52 crc kubenswrapper[4991]: E0929 09:59:52.042749 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eab57318-1d9e-4b78-afe1-84d1ef4793c3" containerName="neutron-api" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.042793 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="eab57318-1d9e-4b78-afe1-84d1ef4793c3" containerName="neutron-api" Sep 29 09:59:52 crc kubenswrapper[4991]: E0929 09:59:52.042815 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30b6f747-71b5-46f4-a4d0-c4e4fd6ff929" containerName="init" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.042823 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="30b6f747-71b5-46f4-a4d0-c4e4fd6ff929" containerName="init" Sep 29 09:59:52 crc kubenswrapper[4991]: E0929 09:59:52.042841 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c398800a-2dd3-4b01-bbca-f183ace3523a" containerName="probe" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.030109 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c398800a-2dd3-4b01-bbca-f183ace3523a" (UID: "c398800a-2dd3-4b01-bbca-f183ace3523a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.042850 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c398800a-2dd3-4b01-bbca-f183ace3523a" containerName="probe" Sep 29 09:59:52 crc kubenswrapper[4991]: E0929 09:59:52.042904 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c398800a-2dd3-4b01-bbca-f183ace3523a" containerName="cinder-scheduler" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.042913 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c398800a-2dd3-4b01-bbca-f183ace3523a" containerName="cinder-scheduler" Sep 29 09:59:52 crc kubenswrapper[4991]: E0929 09:59:52.042924 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a13c3d37-d0d3-4465-bd18-432600952dde" containerName="barbican-api-log" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.042930 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="a13c3d37-d0d3-4465-bd18-432600952dde" containerName="barbican-api-log" Sep 29 09:59:52 crc kubenswrapper[4991]: E0929 09:59:52.043058 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30b6f747-71b5-46f4-a4d0-c4e4fd6ff929" containerName="dnsmasq-dns" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.043065 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="30b6f747-71b5-46f4-a4d0-c4e4fd6ff929" containerName="dnsmasq-dns" Sep 29 09:59:52 crc kubenswrapper[4991]: E0929 09:59:52.043076 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a13c3d37-d0d3-4465-bd18-432600952dde" containerName="barbican-api" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.043082 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="a13c3d37-d0d3-4465-bd18-432600952dde" containerName="barbican-api" Sep 29 09:59:52 crc kubenswrapper[4991]: E0929 09:59:52.043094 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7db6403-2f05-4d5f-a563-9afc14bbde8e" containerName="dnsmasq-dns" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.043100 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7db6403-2f05-4d5f-a563-9afc14bbde8e" containerName="dnsmasq-dns" Sep 29 09:59:52 crc kubenswrapper[4991]: E0929 09:59:52.043120 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7db6403-2f05-4d5f-a563-9afc14bbde8e" containerName="init" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.043127 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7db6403-2f05-4d5f-a563-9afc14bbde8e" containerName="init" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.043464 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="c398800a-2dd3-4b01-bbca-f183ace3523a" containerName="cinder-scheduler" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.043483 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="eab57318-1d9e-4b78-afe1-84d1ef4793c3" containerName="neutron-api" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.043493 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="eab57318-1d9e-4b78-afe1-84d1ef4793c3" containerName="neutron-httpd" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.043505 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="a13c3d37-d0d3-4465-bd18-432600952dde" containerName="barbican-api" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.043519 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="a13c3d37-d0d3-4465-bd18-432600952dde" 
containerName="barbican-api-log" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.043526 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="c398800a-2dd3-4b01-bbca-f183ace3523a" containerName="probe" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.043542 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="30b6f747-71b5-46f4-a4d0-c4e4fd6ff929" containerName="dnsmasq-dns" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.043559 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7db6403-2f05-4d5f-a563-9afc14bbde8e" containerName="dnsmasq-dns" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.044204 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.044288 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.050425 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-5jmcp" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.051183 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.051370 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.087405 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.087438 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.087452 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps6ps\" (UniqueName: \"kubernetes.io/projected/c398800a-2dd3-4b01-bbca-f183ace3523a-kube-api-access-ps6ps\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.087464 4991 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-config-data-custom\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.104548 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-config-data" (OuterVolumeSpecName: "config-data") pod "c398800a-2dd3-4b01-bbca-f183ace3523a" (UID: "c398800a-2dd3-4b01-bbca-f183ace3523a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.189325 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wgwd\" (UniqueName: \"kubernetes.io/projected/fa48bcb8-1683-4a76-b721-59149bc4e240-kube-api-access-6wgwd\") pod \"openstackclient\" (UID: \"fa48bcb8-1683-4a76-b721-59149bc4e240\") " pod="openstack/openstackclient" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.189381 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa48bcb8-1683-4a76-b721-59149bc4e240-openstack-config\") pod \"openstackclient\" (UID: \"fa48bcb8-1683-4a76-b721-59149bc4e240\") " pod="openstack/openstackclient" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.189410 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa48bcb8-1683-4a76-b721-59149bc4e240-openstack-config-secret\") pod \"openstackclient\" (UID: \"fa48bcb8-1683-4a76-b721-59149bc4e240\") " pod="openstack/openstackclient" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.189682 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa48bcb8-1683-4a76-b721-59149bc4e240-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fa48bcb8-1683-4a76-b721-59149bc4e240\") " pod="openstack/openstackclient" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.189854 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c398800a-2dd3-4b01-bbca-f183ace3523a-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.291410 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wgwd\" (UniqueName: \"kubernetes.io/projected/fa48bcb8-1683-4a76-b721-59149bc4e240-kube-api-access-6wgwd\") pod \"openstackclient\" (UID: \"fa48bcb8-1683-4a76-b721-59149bc4e240\") " pod="openstack/openstackclient" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.291508 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa48bcb8-1683-4a76-b721-59149bc4e240-openstack-config\") pod \"openstackclient\" (UID: \"fa48bcb8-1683-4a76-b721-59149bc4e240\") " pod="openstack/openstackclient" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.291545 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa48bcb8-1683-4a76-b721-59149bc4e240-openstack-config-secret\") pod \"openstackclient\" (UID: \"fa48bcb8-1683-4a76-b721-59149bc4e240\") " pod="openstack/openstackclient" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.291646 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa48bcb8-1683-4a76-b721-59149bc4e240-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fa48bcb8-1683-4a76-b721-59149bc4e240\") " pod="openstack/openstackclient" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.292733 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/fa48bcb8-1683-4a76-b721-59149bc4e240-openstack-config\") pod \"openstackclient\" (UID: \"fa48bcb8-1683-4a76-b721-59149bc4e240\") " pod="openstack/openstackclient" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.296040 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa48bcb8-1683-4a76-b721-59149bc4e240-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fa48bcb8-1683-4a76-b721-59149bc4e240\") " pod="openstack/openstackclient" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.296094 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa48bcb8-1683-4a76-b721-59149bc4e240-openstack-config-secret\") pod \"openstackclient\" (UID: \"fa48bcb8-1683-4a76-b721-59149bc4e240\") " pod="openstack/openstackclient" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.319052 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wgwd\" (UniqueName: \"kubernetes.io/projected/fa48bcb8-1683-4a76-b721-59149bc4e240-kube-api-access-6wgwd\") pod \"openstackclient\" (UID: \"fa48bcb8-1683-4a76-b721-59149bc4e240\") " pod="openstack/openstackclient" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.491754 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.560013 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c398800a-2dd3-4b01-bbca-f183ace3523a","Type":"ContainerDied","Data":"ab315d35a05a12b0bc51eb38b62efaa836af0f0e930c62f072639c25c0802865"} Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.560062 4991 scope.go:117] "RemoveContainer" containerID="8610ac01442916baa6e2827ccb1f7799666759c26ba3ad9e9f6ee2b4c5059f04" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.560168 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.628259 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.652128 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.680256 4991 scope.go:117] "RemoveContainer" containerID="7ae92c36df4be013c334fd95af763497db9b946e7a0633f897745bedc06352ea" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.694535 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.698266 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.702566 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.725601 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.845845 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e52c4ee5-d723-4579-8ed4-b49d783c3f9f-scripts\") pod \"cinder-scheduler-0\" (UID: \"e52c4ee5-d723-4579-8ed4-b49d783c3f9f\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.846294 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xrfl\" (UniqueName: \"kubernetes.io/projected/e52c4ee5-d723-4579-8ed4-b49d783c3f9f-kube-api-access-2xrfl\") pod \"cinder-scheduler-0\" (UID: \"e52c4ee5-d723-4579-8ed4-b49d783c3f9f\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.846389 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e52c4ee5-d723-4579-8ed4-b49d783c3f9f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e52c4ee5-d723-4579-8ed4-b49d783c3f9f\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.846513 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e52c4ee5-d723-4579-8ed4-b49d783c3f9f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e52c4ee5-d723-4579-8ed4-b49d783c3f9f\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.846579 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52c4ee5-d723-4579-8ed4-b49d783c3f9f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e52c4ee5-d723-4579-8ed4-b49d783c3f9f\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.846620 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52c4ee5-d723-4579-8ed4-b49d783c3f9f-config-data\") pod \"cinder-scheduler-0\" (UID: \"e52c4ee5-d723-4579-8ed4-b49d783c3f9f\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.946774 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c398800a-2dd3-4b01-bbca-f183ace3523a" path="/var/lib/kubelet/pods/c398800a-2dd3-4b01-bbca-f183ace3523a/volumes" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.950607 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e52c4ee5-d723-4579-8ed4-b49d783c3f9f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e52c4ee5-d723-4579-8ed4-b49d783c3f9f\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.950759 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e52c4ee5-d723-4579-8ed4-b49d783c3f9f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e52c4ee5-d723-4579-8ed4-b49d783c3f9f\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.950840 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52c4ee5-d723-4579-8ed4-b49d783c3f9f-config-data\") pod \"cinder-scheduler-0\" (UID: \"e52c4ee5-d723-4579-8ed4-b49d783c3f9f\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.950922 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e52c4ee5-d723-4579-8ed4-b49d783c3f9f-scripts\") pod \"cinder-scheduler-0\" (UID: \"e52c4ee5-d723-4579-8ed4-b49d783c3f9f\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.950957 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xrfl\" (UniqueName: \"kubernetes.io/projected/e52c4ee5-d723-4579-8ed4-b49d783c3f9f-kube-api-access-2xrfl\") pod \"cinder-scheduler-0\" (UID: \"e52c4ee5-d723-4579-8ed4-b49d783c3f9f\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.951098 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e52c4ee5-d723-4579-8ed4-b49d783c3f9f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e52c4ee5-d723-4579-8ed4-b49d783c3f9f\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.951225 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e52c4ee5-d723-4579-8ed4-b49d783c3f9f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e52c4ee5-d723-4579-8ed4-b49d783c3f9f\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.958571 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e52c4ee5-d723-4579-8ed4-b49d783c3f9f-scripts\") pod \"cinder-scheduler-0\" (UID: \"e52c4ee5-d723-4579-8ed4-b49d783c3f9f\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.958696 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52c4ee5-d723-4579-8ed4-b49d783c3f9f-config-data\") pod \"cinder-scheduler-0\" (UID: \"e52c4ee5-d723-4579-8ed4-b49d783c3f9f\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.959471 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e52c4ee5-d723-4579-8ed4-b49d783c3f9f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e52c4ee5-d723-4579-8ed4-b49d783c3f9f\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.961600 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52c4ee5-d723-4579-8ed4-b49d783c3f9f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e52c4ee5-d723-4579-8ed4-b49d783c3f9f\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:52 crc kubenswrapper[4991]: I0929 09:59:52.976082 4991 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2xrfl\" (UniqueName: \"kubernetes.io/projected/e52c4ee5-d723-4579-8ed4-b49d783c3f9f-kube-api-access-2xrfl\") pod \"cinder-scheduler-0\" (UID: \"e52c4ee5-d723-4579-8ed4-b49d783c3f9f\") " pod="openstack/cinder-scheduler-0" Sep 29 09:59:53 crc kubenswrapper[4991]: I0929 09:59:53.032693 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Sep 29 09:59:53 crc kubenswrapper[4991]: I0929 09:59:53.083787 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Sep 29 09:59:53 crc kubenswrapper[4991]: W0929 09:59:53.537152 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode52c4ee5_d723_4579_8ed4_b49d783c3f9f.slice/crio-c60187f4b6e0aac92ccc7deed632c2eb113204f6abc927b24524133feeaf51de WatchSource:0}: Error finding container c60187f4b6e0aac92ccc7deed632c2eb113204f6abc927b24524133feeaf51de: Status 404 returned error can't find the container with id c60187f4b6e0aac92ccc7deed632c2eb113204f6abc927b24524133feeaf51de Sep 29 09:59:53 crc kubenswrapper[4991]: I0929 09:59:53.548685 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 29 09:59:53 crc kubenswrapper[4991]: I0929 09:59:53.575143 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e52c4ee5-d723-4579-8ed4-b49d783c3f9f","Type":"ContainerStarted","Data":"c60187f4b6e0aac92ccc7deed632c2eb113204f6abc927b24524133feeaf51de"} Sep 29 09:59:53 crc kubenswrapper[4991]: I0929 09:59:53.579159 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"fa48bcb8-1683-4a76-b721-59149bc4e240","Type":"ContainerStarted","Data":"612119f97538e441727f6945d0e3c0622827d55b5adc8b2fe1fd694ec365a21c"} Sep 29 09:59:54 crc kubenswrapper[4991]: I0929 09:59:54.598678 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e52c4ee5-d723-4579-8ed4-b49d783c3f9f","Type":"ContainerStarted","Data":"c7ffe201d6f08ad580f3e3e2e1d3f2b708b5688fc394c960ee0c9d283902898e"} Sep 29 09:59:55 crc kubenswrapper[4991]: I0929 09:59:55.629552 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e52c4ee5-d723-4579-8ed4-b49d783c3f9f","Type":"ContainerStarted","Data":"1b1bd6aaeef7c68225709451700f0946b7f54c442ee3c8d6f8d6bbead5c45090"} Sep 29 09:59:55 crc kubenswrapper[4991]: I0929 09:59:55.665350 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.665331616 podStartE2EDuration="3.665331616s" podCreationTimestamp="2025-09-29 09:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:59:55.660294863 +0000 UTC m=+1331.516222901" watchObservedRunningTime="2025-09-29 09:59:55.665331616 +0000 UTC m=+1331.521259644" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.334384 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-67c9dd4f47-ndrxn"] Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.336695 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.341673 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.342318 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.343065 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.371184 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-67c9dd4f47-ndrxn"] Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.442749 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce4f4bfc-55ef-4360-905e-df84e5d932b2-run-httpd\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.442916 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce4f4bfc-55ef-4360-905e-df84e5d932b2-internal-tls-certs\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.443068 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce4f4bfc-55ef-4360-905e-df84e5d932b2-combined-ca-bundle\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.443195 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce4f4bfc-55ef-4360-905e-df84e5d932b2-public-tls-certs\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.443285 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvbq7\" (UniqueName: \"kubernetes.io/projected/ce4f4bfc-55ef-4360-905e-df84e5d932b2-kube-api-access-kvbq7\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.443347 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce4f4bfc-55ef-4360-905e-df84e5d932b2-log-httpd\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.443388 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce4f4bfc-55ef-4360-905e-df84e5d932b2-config-data\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " 
pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.443449 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ce4f4bfc-55ef-4360-905e-df84e5d932b2-etc-swift\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.548160 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce4f4bfc-55ef-4360-905e-df84e5d932b2-combined-ca-bundle\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.548236 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce4f4bfc-55ef-4360-905e-df84e5d932b2-public-tls-certs\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.548281 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvbq7\" (UniqueName: \"kubernetes.io/projected/ce4f4bfc-55ef-4360-905e-df84e5d932b2-kube-api-access-kvbq7\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.548310 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce4f4bfc-55ef-4360-905e-df84e5d932b2-log-httpd\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.548330 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce4f4bfc-55ef-4360-905e-df84e5d932b2-config-data\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.548354 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ce4f4bfc-55ef-4360-905e-df84e5d932b2-etc-swift\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.548412 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce4f4bfc-55ef-4360-905e-df84e5d932b2-run-httpd\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.548458 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce4f4bfc-55ef-4360-905e-df84e5d932b2-internal-tls-certs\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 
09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.551372 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce4f4bfc-55ef-4360-905e-df84e5d932b2-log-httpd\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.552677 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce4f4bfc-55ef-4360-905e-df84e5d932b2-run-httpd\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.556737 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce4f4bfc-55ef-4360-905e-df84e5d932b2-public-tls-certs\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.560596 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce4f4bfc-55ef-4360-905e-df84e5d932b2-internal-tls-certs\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.566371 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ce4f4bfc-55ef-4360-905e-df84e5d932b2-etc-swift\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.571574 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce4f4bfc-55ef-4360-905e-df84e5d932b2-combined-ca-bundle\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.576746 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce4f4bfc-55ef-4360-905e-df84e5d932b2-config-data\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.586719 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvbq7\" (UniqueName: \"kubernetes.io/projected/ce4f4bfc-55ef-4360-905e-df84e5d932b2-kube-api-access-kvbq7\") pod \"swift-proxy-67c9dd4f47-ndrxn\" (UID: \"ce4f4bfc-55ef-4360-905e-df84e5d932b2\") " pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:56 crc kubenswrapper[4991]: I0929 09:59:56.656073 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:57 crc kubenswrapper[4991]: I0929 09:59:57.211328 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 29 09:59:57 crc kubenswrapper[4991]: I0929 09:59:57.212001 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="386f2f3b-cfa3-4376-a958-8905da139c79" containerName="ceilometer-notification-agent" containerID="cri-o://023284e6ebf093ce5f781fa5fda61d38852f5b727420b579a7818e3b1f2bcb5e" gracePeriod=30 Sep 29 09:59:57 crc kubenswrapper[4991]: I0929 09:59:57.212275 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="386f2f3b-cfa3-4376-a958-8905da139c79" containerName="proxy-httpd" containerID="cri-o://ea11fbcca21e61a86b77427fe0d1802d2448bd7b92114facc6e15537c9b23c2b" gracePeriod=30 Sep 29 09:59:57 crc kubenswrapper[4991]: I0929 09:59:57.212349 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="386f2f3b-cfa3-4376-a958-8905da139c79" containerName="sg-core" containerID="cri-o://4b40a6b965b59dfc13d333c7114fad60012f84b46aed1e9995e8620cda3a04f6" gracePeriod=30 Sep 29 09:59:57 crc kubenswrapper[4991]: I0929 09:59:57.212398 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="386f2f3b-cfa3-4376-a958-8905da139c79" containerName="ceilometer-central-agent" containerID="cri-o://203ef20dc8abb9caaf2cd44cc5f1a5b8e7eeb8f0204d95e557656eab46fc0d17" gracePeriod=30 Sep 29 09:59:57 crc kubenswrapper[4991]: I0929 09:59:57.308557 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-67c9dd4f47-ndrxn"] Sep 29 09:59:57 crc kubenswrapper[4991]: I0929 09:59:57.321785 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="386f2f3b-cfa3-4376-a958-8905da139c79" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.184:3000/\": read tcp 10.217.0.2:58110->10.217.0.184:3000: read: connection reset by peer" Sep 29 09:59:57 crc kubenswrapper[4991]: I0929 09:59:57.667565 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67c9dd4f47-ndrxn" event={"ID":"ce4f4bfc-55ef-4360-905e-df84e5d932b2","Type":"ContainerStarted","Data":"43e3c25298dfd74b6c89d32f0950208fbddcbc0e5d54aa3bf08858b26b698bdb"} Sep 29 09:59:57 crc kubenswrapper[4991]: I0929 09:59:57.667926 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67c9dd4f47-ndrxn" event={"ID":"ce4f4bfc-55ef-4360-905e-df84e5d932b2","Type":"ContainerStarted","Data":"f568de6f184ed4aa6e34ac98252f20ee6cfd30ef999b1191c6e5b8cddadc34ca"} Sep 29 09:59:57 crc kubenswrapper[4991]: I0929 09:59:57.681524 4991 generic.go:334] "Generic (PLEG): container finished" podID="386f2f3b-cfa3-4376-a958-8905da139c79" containerID="203ef20dc8abb9caaf2cd44cc5f1a5b8e7eeb8f0204d95e557656eab46fc0d17" exitCode=0 Sep 29 09:59:57 crc kubenswrapper[4991]: I0929 09:59:57.681564 4991 generic.go:334] "Generic (PLEG): container finished" podID="386f2f3b-cfa3-4376-a958-8905da139c79" containerID="ea11fbcca21e61a86b77427fe0d1802d2448bd7b92114facc6e15537c9b23c2b" exitCode=0 Sep 29 09:59:57 crc kubenswrapper[4991]: I0929 09:59:57.681574 4991 generic.go:334] "Generic (PLEG): container finished" podID="386f2f3b-cfa3-4376-a958-8905da139c79" containerID="4b40a6b965b59dfc13d333c7114fad60012f84b46aed1e9995e8620cda3a04f6" exitCode=2 Sep 29 
09:59:57 crc kubenswrapper[4991]: I0929 09:59:57.681594 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"386f2f3b-cfa3-4376-a958-8905da139c79","Type":"ContainerDied","Data":"203ef20dc8abb9caaf2cd44cc5f1a5b8e7eeb8f0204d95e557656eab46fc0d17"} Sep 29 09:59:57 crc kubenswrapper[4991]: I0929 09:59:57.681623 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"386f2f3b-cfa3-4376-a958-8905da139c79","Type":"ContainerDied","Data":"ea11fbcca21e61a86b77427fe0d1802d2448bd7b92114facc6e15537c9b23c2b"} Sep 29 09:59:57 crc kubenswrapper[4991]: I0929 09:59:57.681637 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"386f2f3b-cfa3-4376-a958-8905da139c79","Type":"ContainerDied","Data":"4b40a6b965b59dfc13d333c7114fad60012f84b46aed1e9995e8620cda3a04f6"} Sep 29 09:59:58 crc kubenswrapper[4991]: I0929 09:59:58.033585 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Sep 29 09:59:58 crc kubenswrapper[4991]: I0929 09:59:58.696897 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67c9dd4f47-ndrxn" event={"ID":"ce4f4bfc-55ef-4360-905e-df84e5d932b2","Type":"ContainerStarted","Data":"65090254df623bd50bcf5d2fa8792cc5267c9edfd0f460a726e3952527af1b26"} Sep 29 09:59:58 crc kubenswrapper[4991]: I0929 09:59:58.697422 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:58 crc kubenswrapper[4991]: I0929 09:59:58.697436 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 09:59:58 crc kubenswrapper[4991]: I0929 09:59:58.730609 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-67c9dd4f47-ndrxn" podStartSLOduration=2.730581752 podStartE2EDuration="2.730581752s" podCreationTimestamp="2025-09-29 09:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 09:59:58.717892268 +0000 UTC m=+1334.573820306" watchObservedRunningTime="2025-09-29 09:59:58.730581752 +0000 UTC m=+1334.586509780" Sep 29 09:59:59 crc kubenswrapper[4991]: I0929 09:59:59.715722 4991 generic.go:334] "Generic (PLEG): container finished" podID="386f2f3b-cfa3-4376-a958-8905da139c79" containerID="023284e6ebf093ce5f781fa5fda61d38852f5b727420b579a7818e3b1f2bcb5e" exitCode=0 Sep 29 09:59:59 crc kubenswrapper[4991]: I0929 09:59:59.715808 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"386f2f3b-cfa3-4376-a958-8905da139c79","Type":"ContainerDied","Data":"023284e6ebf093ce5f781fa5fda61d38852f5b727420b579a7818e3b1f2bcb5e"} Sep 29 10:00:00 crc kubenswrapper[4991]: I0929 10:00:00.153122 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8"] Sep 29 10:00:00 crc kubenswrapper[4991]: I0929 10:00:00.155036 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8" Sep 29 10:00:00 crc kubenswrapper[4991]: I0929 10:00:00.186994 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 29 10:00:00 crc kubenswrapper[4991]: I0929 10:00:00.187275 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 29 10:00:00 crc kubenswrapper[4991]: I0929 10:00:00.202853 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8"] Sep 29 10:00:00 crc kubenswrapper[4991]: I0929 10:00:00.347342 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d041a6c-578e-4634-bf56-333953aebbc5-config-volume\") pod \"collect-profiles-29319000-pdns8\" (UID: \"2d041a6c-578e-4634-bf56-333953aebbc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8" Sep 29 10:00:00 crc kubenswrapper[4991]: I0929 10:00:00.347656 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kggq5\" (UniqueName: \"kubernetes.io/projected/2d041a6c-578e-4634-bf56-333953aebbc5-kube-api-access-kggq5\") pod \"collect-profiles-29319000-pdns8\" (UID: \"2d041a6c-578e-4634-bf56-333953aebbc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8" Sep 29 10:00:00 crc kubenswrapper[4991]: I0929 10:00:00.348039 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d041a6c-578e-4634-bf56-333953aebbc5-secret-volume\") pod \"collect-profiles-29319000-pdns8\" (UID: \"2d041a6c-578e-4634-bf56-333953aebbc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8" Sep 29 10:00:00 crc kubenswrapper[4991]: I0929 10:00:00.450303 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d041a6c-578e-4634-bf56-333953aebbc5-secret-volume\") pod \"collect-profiles-29319000-pdns8\" (UID: \"2d041a6c-578e-4634-bf56-333953aebbc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8" Sep 29 10:00:00 crc kubenswrapper[4991]: I0929 10:00:00.451797 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d041a6c-578e-4634-bf56-333953aebbc5-config-volume\") pod \"collect-profiles-29319000-pdns8\" (UID: \"2d041a6c-578e-4634-bf56-333953aebbc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8" Sep 29 10:00:00 crc kubenswrapper[4991]: I0929 10:00:00.452014 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kggq5\" (UniqueName: \"kubernetes.io/projected/2d041a6c-578e-4634-bf56-333953aebbc5-kube-api-access-kggq5\") pod \"collect-profiles-29319000-pdns8\" (UID: \"2d041a6c-578e-4634-bf56-333953aebbc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8" Sep 29 10:00:00 crc kubenswrapper[4991]: I0929 10:00:00.452881 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d041a6c-578e-4634-bf56-333953aebbc5-config-volume\") pod 
\"collect-profiles-29319000-pdns8\" (UID: \"2d041a6c-578e-4634-bf56-333953aebbc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8" Sep 29 10:00:00 crc kubenswrapper[4991]: I0929 10:00:00.459234 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d041a6c-578e-4634-bf56-333953aebbc5-secret-volume\") pod \"collect-profiles-29319000-pdns8\" (UID: \"2d041a6c-578e-4634-bf56-333953aebbc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8" Sep 29 10:00:00 crc kubenswrapper[4991]: I0929 10:00:00.485908 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kggq5\" (UniqueName: \"kubernetes.io/projected/2d041a6c-578e-4634-bf56-333953aebbc5-kube-api-access-kggq5\") pod \"collect-profiles-29319000-pdns8\" (UID: \"2d041a6c-578e-4634-bf56-333953aebbc5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8" Sep 29 10:00:00 crc kubenswrapper[4991]: I0929 10:00:00.514381 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8" Sep 29 10:00:03 crc kubenswrapper[4991]: I0929 10:00:03.339623 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Sep 29 10:00:04 crc kubenswrapper[4991]: I0929 10:00:04.773594 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="386f2f3b-cfa3-4376-a958-8905da139c79" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.184:3000/\": dial tcp 10.217.0.184:3000: connect: connection refused" Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.220619 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8"] Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.661568 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.666503 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-67c9dd4f47-ndrxn" Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.756578 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.797867 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8" event={"ID":"2d041a6c-578e-4634-bf56-333953aebbc5","Type":"ContainerStarted","Data":"d9b09772efb6aa800d2a9ac5b28448c080855a6ff22eaf7db6b9654fb967ba18"} Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.803643 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.803622 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"386f2f3b-cfa3-4376-a958-8905da139c79","Type":"ContainerDied","Data":"417e7a67e311554ad17777199d830735372aec7a3473514bcd94c77fc389dfb9"} Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.803790 4991 scope.go:117] "RemoveContainer" containerID="203ef20dc8abb9caaf2cd44cc5f1a5b8e7eeb8f0204d95e557656eab46fc0d17" Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.846208 4991 scope.go:117] "RemoveContainer" containerID="ea11fbcca21e61a86b77427fe0d1802d2448bd7b92114facc6e15537c9b23c2b" Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.908719 4991 scope.go:117] "RemoveContainer" containerID="4b40a6b965b59dfc13d333c7114fad60012f84b46aed1e9995e8620cda3a04f6" Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.924362 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h9k2\" (UniqueName: \"kubernetes.io/projected/386f2f3b-cfa3-4376-a958-8905da139c79-kube-api-access-8h9k2\") pod \"386f2f3b-cfa3-4376-a958-8905da139c79\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.924404 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-scripts\") pod \"386f2f3b-cfa3-4376-a958-8905da139c79\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.924537 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-config-data\") pod \"386f2f3b-cfa3-4376-a958-8905da139c79\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.924615 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-combined-ca-bundle\") pod \"386f2f3b-cfa3-4376-a958-8905da139c79\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.924708 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-sg-core-conf-yaml\") pod \"386f2f3b-cfa3-4376-a958-8905da139c79\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.924777 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/386f2f3b-cfa3-4376-a958-8905da139c79-run-httpd\") pod \"386f2f3b-cfa3-4376-a958-8905da139c79\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.924872 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/386f2f3b-cfa3-4376-a958-8905da139c79-log-httpd\") pod \"386f2f3b-cfa3-4376-a958-8905da139c79\" (UID: \"386f2f3b-cfa3-4376-a958-8905da139c79\") " Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.926000 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/386f2f3b-cfa3-4376-a958-8905da139c79-run-httpd" 
(OuterVolumeSpecName: "run-httpd") pod "386f2f3b-cfa3-4376-a958-8905da139c79" (UID: "386f2f3b-cfa3-4376-a958-8905da139c79"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.926132 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/386f2f3b-cfa3-4376-a958-8905da139c79-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "386f2f3b-cfa3-4376-a958-8905da139c79" (UID: "386f2f3b-cfa3-4376-a958-8905da139c79"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.930752 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/386f2f3b-cfa3-4376-a958-8905da139c79-kube-api-access-8h9k2" (OuterVolumeSpecName: "kube-api-access-8h9k2") pod "386f2f3b-cfa3-4376-a958-8905da139c79" (UID: "386f2f3b-cfa3-4376-a958-8905da139c79"). InnerVolumeSpecName "kube-api-access-8h9k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.931294 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-scripts" (OuterVolumeSpecName: "scripts") pod "386f2f3b-cfa3-4376-a958-8905da139c79" (UID: "386f2f3b-cfa3-4376-a958-8905da139c79"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.954941 4991 scope.go:117] "RemoveContainer" containerID="023284e6ebf093ce5f781fa5fda61d38852f5b727420b579a7818e3b1f2bcb5e" Sep 29 10:00:06 crc kubenswrapper[4991]: I0929 10:00:06.961265 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "386f2f3b-cfa3-4376-a958-8905da139c79" (UID: "386f2f3b-cfa3-4376-a958-8905da139c79"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.029203 4991 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/386f2f3b-cfa3-4376-a958-8905da139c79-run-httpd\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.029443 4991 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/386f2f3b-cfa3-4376-a958-8905da139c79-log-httpd\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.029873 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h9k2\" (UniqueName: \"kubernetes.io/projected/386f2f3b-cfa3-4376-a958-8905da139c79-kube-api-access-8h9k2\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.029960 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.030053 4991 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.047980 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "386f2f3b-cfa3-4376-a958-8905da139c79" (UID: "386f2f3b-cfa3-4376-a958-8905da139c79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.063617 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-config-data" (OuterVolumeSpecName: "config-data") pod "386f2f3b-cfa3-4376-a958-8905da139c79" (UID: "386f2f3b-cfa3-4376-a958-8905da139c79"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.132474 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.132517 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/386f2f3b-cfa3-4376-a958-8905da139c79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.230164 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.243607 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.253672 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:00:07 crc kubenswrapper[4991]: E0929 10:00:07.254144 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="386f2f3b-cfa3-4376-a958-8905da139c79" containerName="ceilometer-notification-agent" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.254161 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="386f2f3b-cfa3-4376-a958-8905da139c79" containerName="ceilometer-notification-agent" Sep 29 10:00:07 crc kubenswrapper[4991]: E0929 10:00:07.254198 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="386f2f3b-cfa3-4376-a958-8905da139c79" containerName="ceilometer-central-agent" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.254206 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="386f2f3b-cfa3-4376-a958-8905da139c79" containerName="ceilometer-central-agent" Sep 29 10:00:07 crc kubenswrapper[4991]: E0929 10:00:07.254223 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="386f2f3b-cfa3-4376-a958-8905da139c79" containerName="sg-core" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.254229 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="386f2f3b-cfa3-4376-a958-8905da139c79" containerName="sg-core" Sep 29 10:00:07 crc kubenswrapper[4991]: E0929 10:00:07.254248 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="386f2f3b-cfa3-4376-a958-8905da139c79" containerName="proxy-httpd" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.254253 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="386f2f3b-cfa3-4376-a958-8905da139c79" containerName="proxy-httpd" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.254483 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="386f2f3b-cfa3-4376-a958-8905da139c79" containerName="proxy-httpd" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.254552 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="386f2f3b-cfa3-4376-a958-8905da139c79" containerName="ceilometer-notification-agent" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.254565 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="386f2f3b-cfa3-4376-a958-8905da139c79" containerName="ceilometer-central-agent" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.254583 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="386f2f3b-cfa3-4376-a958-8905da139c79" containerName="sg-core" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.259979 4991 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.264875 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.265124 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.270996 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.337413 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-log-httpd\") pod \"ceilometer-0\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") " pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.337780 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") " pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.337872 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-config-data\") pod \"ceilometer-0\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") " pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.337963 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkk2h\" (UniqueName: \"kubernetes.io/projected/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-kube-api-access-fkk2h\") pod \"ceilometer-0\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") " pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.338112 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") " pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.338180 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-run-httpd\") pod \"ceilometer-0\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") " pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.338315 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-scripts\") pod \"ceilometer-0\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") " pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.440043 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-scripts\") pod \"ceilometer-0\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") " pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 
10:00:07.440243 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-log-httpd\") pod \"ceilometer-0\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") " pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.440846 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-log-httpd\") pod \"ceilometer-0\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") " pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.440996 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") " pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.441449 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-config-data\") pod \"ceilometer-0\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") " pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.441494 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkk2h\" (UniqueName: \"kubernetes.io/projected/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-kube-api-access-fkk2h\") pod \"ceilometer-0\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") " pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.441615 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") " pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.441662 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-run-httpd\") pod \"ceilometer-0\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") " pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.442186 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-run-httpd\") pod \"ceilometer-0\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") " pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.445098 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-scripts\") pod \"ceilometer-0\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") " pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.445120 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") " pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.445123 4991 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") " pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.445906 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-config-data\") pod \"ceilometer-0\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") " pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.460446 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkk2h\" (UniqueName: \"kubernetes.io/projected/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-kube-api-access-fkk2h\") pod \"ceilometer-0\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") " pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.611015 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.825721 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8" event={"ID":"2d041a6c-578e-4634-bf56-333953aebbc5","Type":"ContainerStarted","Data":"8a4acf166fee6ceea35801943b721e2ced81ac04b9c22bb7c85642faf0150855"} Sep 29 10:00:07 crc kubenswrapper[4991]: I0929 10:00:07.846271 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8" podStartSLOduration=7.846255273 podStartE2EDuration="7.846255273s" podCreationTimestamp="2025-09-29 10:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:00:07.84311007 +0000 UTC m=+1343.699038098" watchObservedRunningTime="2025-09-29 10:00:07.846255273 +0000 UTC m=+1343.702183301" Sep 29 10:00:08 crc kubenswrapper[4991]: I0929 10:00:08.288281 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:00:08 crc kubenswrapper[4991]: I0929 10:00:08.850622 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56","Type":"ContainerStarted","Data":"e5093213a399e1c127a0685a82b3005f25e87e0ebeb1eaf035713b1b6b013a37"} Sep 29 10:00:08 crc kubenswrapper[4991]: I0929 10:00:08.854543 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"fa48bcb8-1683-4a76-b721-59149bc4e240","Type":"ContainerStarted","Data":"9fd67c7c6ea192848279dd8af5239c8bdcc50d7b6d1cef5ed17ab1f55e9c1148"} Sep 29 10:00:08 crc kubenswrapper[4991]: I0929 10:00:08.856988 4991 generic.go:334] "Generic (PLEG): container finished" podID="2d041a6c-578e-4634-bf56-333953aebbc5" containerID="8a4acf166fee6ceea35801943b721e2ced81ac04b9c22bb7c85642faf0150855" exitCode=0 Sep 29 10:00:08 crc kubenswrapper[4991]: I0929 10:00:08.857037 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8" event={"ID":"2d041a6c-578e-4634-bf56-333953aebbc5","Type":"ContainerDied","Data":"8a4acf166fee6ceea35801943b721e2ced81ac04b9c22bb7c85642faf0150855"} Sep 29 10:00:08 crc kubenswrapper[4991]: I0929 10:00:08.878650 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/openstackclient" podStartSLOduration=3.27035299 podStartE2EDuration="17.878630154s" podCreationTimestamp="2025-09-29 09:59:51 +0000 UTC" firstStartedPulling="2025-09-29 09:59:53.102312678 +0000 UTC m=+1328.958240706" lastFinishedPulling="2025-09-29 10:00:07.710589842 +0000 UTC m=+1343.566517870" observedRunningTime="2025-09-29 10:00:08.872805711 +0000 UTC m=+1344.728733749" watchObservedRunningTime="2025-09-29 10:00:08.878630154 +0000 UTC m=+1344.734558182" Sep 29 10:00:08 crc kubenswrapper[4991]: I0929 10:00:08.966760 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="386f2f3b-cfa3-4376-a958-8905da139c79" path="/var/lib/kubelet/pods/386f2f3b-cfa3-4376-a958-8905da139c79/volumes" Sep 29 10:00:09 crc kubenswrapper[4991]: I0929 10:00:09.008599 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:00:09 crc kubenswrapper[4991]: I0929 10:00:09.869037 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56","Type":"ContainerStarted","Data":"2a438aca7d0c8353e09b072bbbdeb7111bd7ff614fb5c1b1311f5b311eb99837"} Sep 29 10:00:10 crc kubenswrapper[4991]: I0929 10:00:10.610479 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8" Sep 29 10:00:10 crc kubenswrapper[4991]: I0929 10:00:10.719067 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d041a6c-578e-4634-bf56-333953aebbc5-secret-volume\") pod \"2d041a6c-578e-4634-bf56-333953aebbc5\" (UID: \"2d041a6c-578e-4634-bf56-333953aebbc5\") " Sep 29 10:00:10 crc kubenswrapper[4991]: I0929 10:00:10.719369 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d041a6c-578e-4634-bf56-333953aebbc5-config-volume\") pod \"2d041a6c-578e-4634-bf56-333953aebbc5\" (UID: \"2d041a6c-578e-4634-bf56-333953aebbc5\") " Sep 29 10:00:10 crc kubenswrapper[4991]: I0929 10:00:10.719410 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kggq5\" (UniqueName: \"kubernetes.io/projected/2d041a6c-578e-4634-bf56-333953aebbc5-kube-api-access-kggq5\") pod \"2d041a6c-578e-4634-bf56-333953aebbc5\" (UID: \"2d041a6c-578e-4634-bf56-333953aebbc5\") " Sep 29 10:00:10 crc kubenswrapper[4991]: I0929 10:00:10.720001 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d041a6c-578e-4634-bf56-333953aebbc5-config-volume" (OuterVolumeSpecName: "config-volume") pod "2d041a6c-578e-4634-bf56-333953aebbc5" (UID: "2d041a6c-578e-4634-bf56-333953aebbc5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:00:10 crc kubenswrapper[4991]: I0929 10:00:10.720391 4991 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d041a6c-578e-4634-bf56-333953aebbc5-config-volume\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:10 crc kubenswrapper[4991]: I0929 10:00:10.737743 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d041a6c-578e-4634-bf56-333953aebbc5-kube-api-access-kggq5" (OuterVolumeSpecName: "kube-api-access-kggq5") pod "2d041a6c-578e-4634-bf56-333953aebbc5" (UID: "2d041a6c-578e-4634-bf56-333953aebbc5"). 
InnerVolumeSpecName "kube-api-access-kggq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:00:10 crc kubenswrapper[4991]: I0929 10:00:10.737853 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d041a6c-578e-4634-bf56-333953aebbc5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2d041a6c-578e-4634-bf56-333953aebbc5" (UID: "2d041a6c-578e-4634-bf56-333953aebbc5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:10 crc kubenswrapper[4991]: I0929 10:00:10.823144 4991 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d041a6c-578e-4634-bf56-333953aebbc5-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:10 crc kubenswrapper[4991]: I0929 10:00:10.823177 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kggq5\" (UniqueName: \"kubernetes.io/projected/2d041a6c-578e-4634-bf56-333953aebbc5-kube-api-access-kggq5\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:10 crc kubenswrapper[4991]: I0929 10:00:10.884450 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8" event={"ID":"2d041a6c-578e-4634-bf56-333953aebbc5","Type":"ContainerDied","Data":"d9b09772efb6aa800d2a9ac5b28448c080855a6ff22eaf7db6b9654fb967ba18"} Sep 29 10:00:10 crc kubenswrapper[4991]: I0929 10:00:10.884486 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9b09772efb6aa800d2a9ac5b28448c080855a6ff22eaf7db6b9654fb967ba18" Sep 29 10:00:10 crc kubenswrapper[4991]: I0929 10:00:10.884548 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8" Sep 29 10:00:11 crc kubenswrapper[4991]: I0929 10:00:11.771609 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 29 10:00:11 crc kubenswrapper[4991]: I0929 10:00:11.772469 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="31b1dc43-5df1-49fb-aeeb-3f31a64f95bf" containerName="glance-log" containerID="cri-o://426b586cfc07b8c9136f4451cc76e43f50158eccfbe1d6d84c9b989b2bf35d9e" gracePeriod=30 Sep 29 10:00:11 crc kubenswrapper[4991]: I0929 10:00:11.772579 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="31b1dc43-5df1-49fb-aeeb-3f31a64f95bf" containerName="glance-httpd" containerID="cri-o://de644f415be6e2fb056f4be7401f56c30acc7ca8a89500cac1ae3ce86a10d074" gracePeriod=30 Sep 29 10:00:11 crc kubenswrapper[4991]: I0929 10:00:11.896396 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56","Type":"ContainerStarted","Data":"238a97edf0d76eb919e5cad62b00f9fb2986252a265cc8d35242aec37907e036"} Sep 29 10:00:12 crc kubenswrapper[4991]: I0929 10:00:12.910035 4991 generic.go:334] "Generic (PLEG): container finished" podID="31b1dc43-5df1-49fb-aeeb-3f31a64f95bf" containerID="426b586cfc07b8c9136f4451cc76e43f50158eccfbe1d6d84c9b989b2bf35d9e" exitCode=143 Sep 29 10:00:12 crc kubenswrapper[4991]: I0929 10:00:12.910142 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf","Type":"ContainerDied","Data":"426b586cfc07b8c9136f4451cc76e43f50158eccfbe1d6d84c9b989b2bf35d9e"} Sep 29 10:00:13 crc kubenswrapper[4991]: I0929 10:00:13.779663 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="092ef327-6f29-4765-9f4a-ff9db4679277" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.203:8776/healthcheck\": dial tcp 10.217.0.203:8776: connect: connection refused" Sep 29 10:00:13 crc kubenswrapper[4991]: I0929 10:00:13.922444 4991 generic.go:334] "Generic (PLEG): container finished" podID="092ef327-6f29-4765-9f4a-ff9db4679277" containerID="5e716fb88431b8386e9e3dd7ce797ac57cd68e424690fb458e46de1bb09a8573" exitCode=137 Sep 29 10:00:13 crc kubenswrapper[4991]: I0929 10:00:13.922483 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"092ef327-6f29-4765-9f4a-ff9db4679277","Type":"ContainerDied","Data":"5e716fb88431b8386e9e3dd7ce797ac57cd68e424690fb458e46de1bb09a8573"} Sep 29 10:00:14 crc kubenswrapper[4991]: I0929 10:00:14.026025 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 29 10:00:14 crc kubenswrapper[4991]: I0929 10:00:14.026594 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="75e87bd6-4abf-47cb-81f1-8e691471aa97" containerName="glance-log" containerID="cri-o://66936dbde83cf16544935f7f9c802f5ebc85010da4e4c10a9b12986eaa3a427e" gracePeriod=30 Sep 29 10:00:14 crc kubenswrapper[4991]: I0929 10:00:14.028343 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="75e87bd6-4abf-47cb-81f1-8e691471aa97" containerName="glance-httpd" containerID="cri-o://4c45554525b032e47ddecb76aa78409794fe183d471cbe5b75911417750365f3" gracePeriod=30 Sep 29 10:00:14 crc kubenswrapper[4991]: E0929 10:00:14.059934 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod092ef327_6f29_4765_9f4a_ff9db4679277.slice/crio-conmon-5e716fb88431b8386e9e3dd7ce797ac57cd68e424690fb458e46de1bb09a8573.scope\": RecentStats: unable to find data in memory cache]" Sep 29 10:00:14 crc kubenswrapper[4991]: I0929 10:00:14.816627 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Sep 29 10:00:14 crc kubenswrapper[4991]: I0929 10:00:14.940442 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Sep 29 10:00:14 crc kubenswrapper[4991]: I0929 10:00:14.947432 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"092ef327-6f29-4765-9f4a-ff9db4679277","Type":"ContainerDied","Data":"1a86825b4f088bab9a4abc6fe54b85cc2316cb8605f05f70147b5e1a840f791f"} Sep 29 10:00:14 crc kubenswrapper[4991]: I0929 10:00:14.947485 4991 scope.go:117] "RemoveContainer" containerID="5e716fb88431b8386e9e3dd7ce797ac57cd68e424690fb458e46de1bb09a8573" Sep 29 10:00:14 crc kubenswrapper[4991]: I0929 10:00:14.952434 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56","Type":"ContainerStarted","Data":"0af1d03d25c6e180c1dce2d2f8bf8b8aa37fbeb37048141a589381bc8230687c"} Sep 29 10:00:14 crc kubenswrapper[4991]: I0929 10:00:14.964785 4991 generic.go:334] "Generic (PLEG): container finished" podID="75e87bd6-4abf-47cb-81f1-8e691471aa97" containerID="66936dbde83cf16544935f7f9c802f5ebc85010da4e4c10a9b12986eaa3a427e" exitCode=143 Sep 29 10:00:14 crc kubenswrapper[4991]: I0929 10:00:14.965038 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75e87bd6-4abf-47cb-81f1-8e691471aa97","Type":"ContainerDied","Data":"66936dbde83cf16544935f7f9c802f5ebc85010da4e4c10a9b12986eaa3a427e"} Sep 29 10:00:14 crc kubenswrapper[4991]: I0929 10:00:14.984316 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-scripts\") pod \"092ef327-6f29-4765-9f4a-ff9db4679277\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " Sep 29 10:00:14 crc kubenswrapper[4991]: I0929 10:00:14.984365 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dpgt\" (UniqueName: \"kubernetes.io/projected/092ef327-6f29-4765-9f4a-ff9db4679277-kube-api-access-7dpgt\") pod \"092ef327-6f29-4765-9f4a-ff9db4679277\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " Sep 29 10:00:14 crc kubenswrapper[4991]: I0929 10:00:14.984444 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/092ef327-6f29-4765-9f4a-ff9db4679277-logs\") pod \"092ef327-6f29-4765-9f4a-ff9db4679277\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " Sep 29 10:00:14 crc kubenswrapper[4991]: I0929 10:00:14.984468 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-config-data-custom\") pod \"092ef327-6f29-4765-9f4a-ff9db4679277\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " Sep 29 10:00:14 crc kubenswrapper[4991]: I0929 10:00:14.984531 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-combined-ca-bundle\") pod \"092ef327-6f29-4765-9f4a-ff9db4679277\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " Sep 29 10:00:14 crc kubenswrapper[4991]: I0929 10:00:14.984630 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/092ef327-6f29-4765-9f4a-ff9db4679277-etc-machine-id\") pod \"092ef327-6f29-4765-9f4a-ff9db4679277\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " Sep 29 10:00:14 crc kubenswrapper[4991]: 
I0929 10:00:14.984745 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-config-data\") pod \"092ef327-6f29-4765-9f4a-ff9db4679277\" (UID: \"092ef327-6f29-4765-9f4a-ff9db4679277\") " Sep 29 10:00:14 crc kubenswrapper[4991]: I0929 10:00:14.985554 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/092ef327-6f29-4765-9f4a-ff9db4679277-logs" (OuterVolumeSpecName: "logs") pod "092ef327-6f29-4765-9f4a-ff9db4679277" (UID: "092ef327-6f29-4765-9f4a-ff9db4679277"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:00:14 crc kubenswrapper[4991]: I0929 10:00:14.985726 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/092ef327-6f29-4765-9f4a-ff9db4679277-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "092ef327-6f29-4765-9f4a-ff9db4679277" (UID: "092ef327-6f29-4765-9f4a-ff9db4679277"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.019230 4991 scope.go:117] "RemoveContainer" containerID="5adbadc93b53425b4d28c14a05ede6f8efb5105c0e5c3b8169d55e554ceb0c25" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.027510 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-scripts" (OuterVolumeSpecName: "scripts") pod "092ef327-6f29-4765-9f4a-ff9db4679277" (UID: "092ef327-6f29-4765-9f4a-ff9db4679277"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.027608 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/092ef327-6f29-4765-9f4a-ff9db4679277-kube-api-access-7dpgt" (OuterVolumeSpecName: "kube-api-access-7dpgt") pod "092ef327-6f29-4765-9f4a-ff9db4679277" (UID: "092ef327-6f29-4765-9f4a-ff9db4679277"). InnerVolumeSpecName "kube-api-access-7dpgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.031648 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "092ef327-6f29-4765-9f4a-ff9db4679277" (UID: "092ef327-6f29-4765-9f4a-ff9db4679277"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.088563 4991 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/092ef327-6f29-4765-9f4a-ff9db4679277-etc-machine-id\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.088599 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.088609 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dpgt\" (UniqueName: \"kubernetes.io/projected/092ef327-6f29-4765-9f4a-ff9db4679277-kube-api-access-7dpgt\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.088620 4991 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/092ef327-6f29-4765-9f4a-ff9db4679277-logs\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.088632 4991 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-config-data-custom\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.119179 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "092ef327-6f29-4765-9f4a-ff9db4679277" (UID: "092ef327-6f29-4765-9f4a-ff9db4679277"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.155604 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-config-data" (OuterVolumeSpecName: "config-data") pod "092ef327-6f29-4765-9f4a-ff9db4679277" (UID: "092ef327-6f29-4765-9f4a-ff9db4679277"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.190958 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.191000 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092ef327-6f29-4765-9f4a-ff9db4679277-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.228186 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="31b1dc43-5df1-49fb-aeeb-3f31a64f95bf" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.193:9292/healthcheck\": read tcp 10.217.0.2:39108->10.217.0.193:9292: read: connection reset by peer" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.228781 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="31b1dc43-5df1-49fb-aeeb-3f31a64f95bf" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.193:9292/healthcheck\": read tcp 10.217.0.2:39106->10.217.0.193:9292: read: connection reset by peer" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.335048 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.351511 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.369502 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Sep 29 10:00:15 crc kubenswrapper[4991]: E0929 10:00:15.369985 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d041a6c-578e-4634-bf56-333953aebbc5" containerName="collect-profiles" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.370004 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d041a6c-578e-4634-bf56-333953aebbc5" containerName="collect-profiles" Sep 29 10:00:15 crc kubenswrapper[4991]: E0929 10:00:15.370054 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="092ef327-6f29-4765-9f4a-ff9db4679277" containerName="cinder-api-log" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.370063 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="092ef327-6f29-4765-9f4a-ff9db4679277" containerName="cinder-api-log" Sep 29 10:00:15 crc kubenswrapper[4991]: E0929 10:00:15.370075 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="092ef327-6f29-4765-9f4a-ff9db4679277" containerName="cinder-api" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.370082 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="092ef327-6f29-4765-9f4a-ff9db4679277" containerName="cinder-api" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.370267 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="092ef327-6f29-4765-9f4a-ff9db4679277" containerName="cinder-api-log" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.370285 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d041a6c-578e-4634-bf56-333953aebbc5" containerName="collect-profiles" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.370296 4991 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="092ef327-6f29-4765-9f4a-ff9db4679277" containerName="cinder-api" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.371351 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.374013 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.374196 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.376341 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.389593 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.498488 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e010d53d-2afc-49b5-ad9a-09054a8855a1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.498800 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e010d53d-2afc-49b5-ad9a-09054a8855a1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.498914 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e010d53d-2afc-49b5-ad9a-09054a8855a1-logs\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.498941 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e010d53d-2afc-49b5-ad9a-09054a8855a1-scripts\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.499451 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e010d53d-2afc-49b5-ad9a-09054a8855a1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.499708 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e010d53d-2afc-49b5-ad9a-09054a8855a1-config-data-custom\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.499839 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4pt7\" (UniqueName: \"kubernetes.io/projected/e010d53d-2afc-49b5-ad9a-09054a8855a1-kube-api-access-q4pt7\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc 
kubenswrapper[4991]: I0929 10:00:15.499907 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e010d53d-2afc-49b5-ad9a-09054a8855a1-config-data\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.500129 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e010d53d-2afc-49b5-ad9a-09054a8855a1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.601633 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e010d53d-2afc-49b5-ad9a-09054a8855a1-logs\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.601705 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e010d53d-2afc-49b5-ad9a-09054a8855a1-scripts\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.601746 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e010d53d-2afc-49b5-ad9a-09054a8855a1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.601804 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e010d53d-2afc-49b5-ad9a-09054a8855a1-config-data-custom\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.601851 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4pt7\" (UniqueName: \"kubernetes.io/projected/e010d53d-2afc-49b5-ad9a-09054a8855a1-kube-api-access-q4pt7\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.601889 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e010d53d-2afc-49b5-ad9a-09054a8855a1-config-data\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.601990 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e010d53d-2afc-49b5-ad9a-09054a8855a1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.601995 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e010d53d-2afc-49b5-ad9a-09054a8855a1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 
10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.602062 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e010d53d-2afc-49b5-ad9a-09054a8855a1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.602113 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e010d53d-2afc-49b5-ad9a-09054a8855a1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.602353 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e010d53d-2afc-49b5-ad9a-09054a8855a1-logs\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.623027 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e010d53d-2afc-49b5-ad9a-09054a8855a1-config-data\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.627469 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e010d53d-2afc-49b5-ad9a-09054a8855a1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.640755 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e010d53d-2afc-49b5-ad9a-09054a8855a1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.658413 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4pt7\" (UniqueName: \"kubernetes.io/projected/e010d53d-2afc-49b5-ad9a-09054a8855a1-kube-api-access-q4pt7\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.668687 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e010d53d-2afc-49b5-ad9a-09054a8855a1-config-data-custom\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.669723 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e010d53d-2afc-49b5-ad9a-09054a8855a1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 10:00:15.680380 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e010d53d-2afc-49b5-ad9a-09054a8855a1-scripts\") pod \"cinder-api-0\" (UID: \"e010d53d-2afc-49b5-ad9a-09054a8855a1\") " pod="openstack/cinder-api-0" Sep 29 10:00:15 crc kubenswrapper[4991]: I0929 
10:00:15.692792 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.036931 4991 generic.go:334] "Generic (PLEG): container finished" podID="31b1dc43-5df1-49fb-aeeb-3f31a64f95bf" containerID="de644f415be6e2fb056f4be7401f56c30acc7ca8a89500cac1ae3ce86a10d074" exitCode=0 Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.037012 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf","Type":"ContainerDied","Data":"de644f415be6e2fb056f4be7401f56c30acc7ca8a89500cac1ae3ce86a10d074"} Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.200030 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.322040 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qq78\" (UniqueName: \"kubernetes.io/projected/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-kube-api-access-9qq78\") pod \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.322102 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-scripts\") pod \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.322159 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-httpd-run\") pod \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.322197 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-config-data\") pod \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.322218 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-combined-ca-bundle\") pod \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.322290 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.322378 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-logs\") pod \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.322421 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-public-tls-certs\") pod 
\"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\" (UID: \"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf\") " Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.325508 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-logs" (OuterVolumeSpecName: "logs") pod "31b1dc43-5df1-49fb-aeeb-3f31a64f95bf" (UID: "31b1dc43-5df1-49fb-aeeb-3f31a64f95bf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.326014 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "31b1dc43-5df1-49fb-aeeb-3f31a64f95bf" (UID: "31b1dc43-5df1-49fb-aeeb-3f31a64f95bf"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.331369 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-scripts" (OuterVolumeSpecName: "scripts") pod "31b1dc43-5df1-49fb-aeeb-3f31a64f95bf" (UID: "31b1dc43-5df1-49fb-aeeb-3f31a64f95bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.331507 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "31b1dc43-5df1-49fb-aeeb-3f31a64f95bf" (UID: "31b1dc43-5df1-49fb-aeeb-3f31a64f95bf"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.335965 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-kube-api-access-9qq78" (OuterVolumeSpecName: "kube-api-access-9qq78") pod "31b1dc43-5df1-49fb-aeeb-3f31a64f95bf" (UID: "31b1dc43-5df1-49fb-aeeb-3f31a64f95bf"). InnerVolumeSpecName "kube-api-access-9qq78". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.358698 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31b1dc43-5df1-49fb-aeeb-3f31a64f95bf" (UID: "31b1dc43-5df1-49fb-aeeb-3f31a64f95bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.424103 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-config-data" (OuterVolumeSpecName: "config-data") pod "31b1dc43-5df1-49fb-aeeb-3f31a64f95bf" (UID: "31b1dc43-5df1-49fb-aeeb-3f31a64f95bf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.425443 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qq78\" (UniqueName: \"kubernetes.io/projected/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-kube-api-access-9qq78\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.425460 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.425470 4991 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-httpd-run\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.425478 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.425486 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.425505 4991 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.425514 4991 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-logs\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.436849 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "31b1dc43-5df1-49fb-aeeb-3f31a64f95bf" (UID: "31b1dc43-5df1-49fb-aeeb-3f31a64f95bf"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.456164 4991 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.460173 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.492751 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6cf6b855c9-q9s4l"] Sep 29 10:00:16 crc kubenswrapper[4991]: E0929 10:00:16.493433 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31b1dc43-5df1-49fb-aeeb-3f31a64f95bf" containerName="glance-httpd" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.493458 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="31b1dc43-5df1-49fb-aeeb-3f31a64f95bf" containerName="glance-httpd" Sep 29 10:00:16 crc kubenswrapper[4991]: E0929 10:00:16.493527 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31b1dc43-5df1-49fb-aeeb-3f31a64f95bf" containerName="glance-log" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.493537 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="31b1dc43-5df1-49fb-aeeb-3f31a64f95bf" containerName="glance-log" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.493792 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="31b1dc43-5df1-49fb-aeeb-3f31a64f95bf" containerName="glance-log" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.493840 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="31b1dc43-5df1-49fb-aeeb-3f31a64f95bf" containerName="glance-httpd" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.494885 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6cf6b855c9-q9s4l" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.497775 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.497978 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.498102 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-dzzc5" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.512120 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6cf6b855c9-q9s4l"] Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.529387 4991 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.529426 4991 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf-public-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.618010 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7dd8c47754-fht6g"] Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.619538 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7dd8c47754-fht6g" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.625070 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.631407 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0901bda-8477-431d-b7a4-6c78208f5f13-config-data-custom\") pod \"heat-engine-6cf6b855c9-q9s4l\" (UID: \"e0901bda-8477-431d-b7a4-6c78208f5f13\") " pod="openstack/heat-engine-6cf6b855c9-q9s4l" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.631473 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz44d\" (UniqueName: \"kubernetes.io/projected/e0901bda-8477-431d-b7a4-6c78208f5f13-kube-api-access-sz44d\") pod \"heat-engine-6cf6b855c9-q9s4l\" (UID: \"e0901bda-8477-431d-b7a4-6c78208f5f13\") " pod="openstack/heat-engine-6cf6b855c9-q9s4l" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.633914 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0901bda-8477-431d-b7a4-6c78208f5f13-combined-ca-bundle\") pod \"heat-engine-6cf6b855c9-q9s4l\" (UID: \"e0901bda-8477-431d-b7a4-6c78208f5f13\") " pod="openstack/heat-engine-6cf6b855c9-q9s4l" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.633991 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0901bda-8477-431d-b7a4-6c78208f5f13-config-data\") pod \"heat-engine-6cf6b855c9-q9s4l\" (UID: \"e0901bda-8477-431d-b7a4-6c78208f5f13\") " pod="openstack/heat-engine-6cf6b855c9-q9s4l" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.634903 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7dd8c47754-fht6g"] Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.657386 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-ssx5j"] Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.667795 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.691797 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-ssx5j"] Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.740489 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-ssx5j\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.740834 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l4hg\" (UniqueName: \"kubernetes.io/projected/1426bfad-c3dc-4e90-b43b-7720410b6fbf-kube-api-access-6l4hg\") pod \"heat-cfnapi-7dd8c47754-fht6g\" (UID: \"1426bfad-c3dc-4e90-b43b-7720410b6fbf\") " pod="openstack/heat-cfnapi-7dd8c47754-fht6g" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.740861 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1426bfad-c3dc-4e90-b43b-7720410b6fbf-combined-ca-bundle\") pod \"heat-cfnapi-7dd8c47754-fht6g\" (UID: \"1426bfad-c3dc-4e90-b43b-7720410b6fbf\") " pod="openstack/heat-cfnapi-7dd8c47754-fht6g" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.740930 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0901bda-8477-431d-b7a4-6c78208f5f13-config-data-custom\") pod \"heat-engine-6cf6b855c9-q9s4l\" (UID: \"e0901bda-8477-431d-b7a4-6c78208f5f13\") " pod="openstack/heat-engine-6cf6b855c9-q9s4l" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.741020 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz44d\" (UniqueName: \"kubernetes.io/projected/e0901bda-8477-431d-b7a4-6c78208f5f13-kube-api-access-sz44d\") pod \"heat-engine-6cf6b855c9-q9s4l\" (UID: \"e0901bda-8477-431d-b7a4-6c78208f5f13\") " pod="openstack/heat-engine-6cf6b855c9-q9s4l" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.741051 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdswz\" (UniqueName: \"kubernetes.io/projected/650279b3-4add-4845-b440-bde55c40efb7-kube-api-access-xdswz\") pod \"dnsmasq-dns-7d978555f9-ssx5j\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.741076 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1426bfad-c3dc-4e90-b43b-7720410b6fbf-config-data\") pod \"heat-cfnapi-7dd8c47754-fht6g\" (UID: \"1426bfad-c3dc-4e90-b43b-7720410b6fbf\") " pod="openstack/heat-cfnapi-7dd8c47754-fht6g" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.741190 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0901bda-8477-431d-b7a4-6c78208f5f13-combined-ca-bundle\") pod \"heat-engine-6cf6b855c9-q9s4l\" (UID: \"e0901bda-8477-431d-b7a4-6c78208f5f13\") " pod="openstack/heat-engine-6cf6b855c9-q9s4l" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 
10:00:16.741223 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-ssx5j\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.741272 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0901bda-8477-431d-b7a4-6c78208f5f13-config-data\") pod \"heat-engine-6cf6b855c9-q9s4l\" (UID: \"e0901bda-8477-431d-b7a4-6c78208f5f13\") " pod="openstack/heat-engine-6cf6b855c9-q9s4l" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.741435 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1426bfad-c3dc-4e90-b43b-7720410b6fbf-config-data-custom\") pod \"heat-cfnapi-7dd8c47754-fht6g\" (UID: \"1426bfad-c3dc-4e90-b43b-7720410b6fbf\") " pod="openstack/heat-cfnapi-7dd8c47754-fht6g" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.741484 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-config\") pod \"dnsmasq-dns-7d978555f9-ssx5j\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.741520 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-dns-svc\") pod \"dnsmasq-dns-7d978555f9-ssx5j\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.741542 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-ssx5j\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.747538 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0901bda-8477-431d-b7a4-6c78208f5f13-combined-ca-bundle\") pod \"heat-engine-6cf6b855c9-q9s4l\" (UID: \"e0901bda-8477-431d-b7a4-6c78208f5f13\") " pod="openstack/heat-engine-6cf6b855c9-q9s4l" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.747637 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0901bda-8477-431d-b7a4-6c78208f5f13-config-data\") pod \"heat-engine-6cf6b855c9-q9s4l\" (UID: \"e0901bda-8477-431d-b7a4-6c78208f5f13\") " pod="openstack/heat-engine-6cf6b855c9-q9s4l" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.750403 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0901bda-8477-431d-b7a4-6c78208f5f13-config-data-custom\") pod \"heat-engine-6cf6b855c9-q9s4l\" (UID: \"e0901bda-8477-431d-b7a4-6c78208f5f13\") " pod="openstack/heat-engine-6cf6b855c9-q9s4l" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.798204 
4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz44d\" (UniqueName: \"kubernetes.io/projected/e0901bda-8477-431d-b7a4-6c78208f5f13-kube-api-access-sz44d\") pod \"heat-engine-6cf6b855c9-q9s4l\" (UID: \"e0901bda-8477-431d-b7a4-6c78208f5f13\") " pod="openstack/heat-engine-6cf6b855c9-q9s4l" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.799594 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-85d44b87d9-sb6h8"] Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.801289 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-85d44b87d9-sb6h8" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.807616 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.823435 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-85d44b87d9-sb6h8"] Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.843432 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-ssx5j\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.843483 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1426bfad-c3dc-4e90-b43b-7720410b6fbf-combined-ca-bundle\") pod \"heat-cfnapi-7dd8c47754-fht6g\" (UID: \"1426bfad-c3dc-4e90-b43b-7720410b6fbf\") " pod="openstack/heat-cfnapi-7dd8c47754-fht6g" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.843500 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l4hg\" (UniqueName: \"kubernetes.io/projected/1426bfad-c3dc-4e90-b43b-7720410b6fbf-kube-api-access-6l4hg\") pod \"heat-cfnapi-7dd8c47754-fht6g\" (UID: \"1426bfad-c3dc-4e90-b43b-7720410b6fbf\") " pod="openstack/heat-cfnapi-7dd8c47754-fht6g" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.843587 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdswz\" (UniqueName: \"kubernetes.io/projected/650279b3-4add-4845-b440-bde55c40efb7-kube-api-access-xdswz\") pod \"dnsmasq-dns-7d978555f9-ssx5j\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.843626 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1426bfad-c3dc-4e90-b43b-7720410b6fbf-config-data\") pod \"heat-cfnapi-7dd8c47754-fht6g\" (UID: \"1426bfad-c3dc-4e90-b43b-7720410b6fbf\") " pod="openstack/heat-cfnapi-7dd8c47754-fht6g" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.843712 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-ssx5j\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.843753 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/1426bfad-c3dc-4e90-b43b-7720410b6fbf-config-data-custom\") pod \"heat-cfnapi-7dd8c47754-fht6g\" (UID: \"1426bfad-c3dc-4e90-b43b-7720410b6fbf\") " pod="openstack/heat-cfnapi-7dd8c47754-fht6g" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.843781 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-config\") pod \"dnsmasq-dns-7d978555f9-ssx5j\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.843806 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-dns-svc\") pod \"dnsmasq-dns-7d978555f9-ssx5j\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.843821 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-ssx5j\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.845051 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-ssx5j\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.846041 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-ssx5j\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.846621 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-ssx5j\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.847926 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-config\") pod \"dnsmasq-dns-7d978555f9-ssx5j\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.850204 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-dns-svc\") pod \"dnsmasq-dns-7d978555f9-ssx5j\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.860075 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1426bfad-c3dc-4e90-b43b-7720410b6fbf-config-data\") pod \"heat-cfnapi-7dd8c47754-fht6g\" (UID: 
\"1426bfad-c3dc-4e90-b43b-7720410b6fbf\") " pod="openstack/heat-cfnapi-7dd8c47754-fht6g" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.861740 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1426bfad-c3dc-4e90-b43b-7720410b6fbf-config-data-custom\") pod \"heat-cfnapi-7dd8c47754-fht6g\" (UID: \"1426bfad-c3dc-4e90-b43b-7720410b6fbf\") " pod="openstack/heat-cfnapi-7dd8c47754-fht6g" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.869134 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1426bfad-c3dc-4e90-b43b-7720410b6fbf-combined-ca-bundle\") pod \"heat-cfnapi-7dd8c47754-fht6g\" (UID: \"1426bfad-c3dc-4e90-b43b-7720410b6fbf\") " pod="openstack/heat-cfnapi-7dd8c47754-fht6g" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.869603 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l4hg\" (UniqueName: \"kubernetes.io/projected/1426bfad-c3dc-4e90-b43b-7720410b6fbf-kube-api-access-6l4hg\") pod \"heat-cfnapi-7dd8c47754-fht6g\" (UID: \"1426bfad-c3dc-4e90-b43b-7720410b6fbf\") " pod="openstack/heat-cfnapi-7dd8c47754-fht6g" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.875177 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdswz\" (UniqueName: \"kubernetes.io/projected/650279b3-4add-4845-b440-bde55c40efb7-kube-api-access-xdswz\") pod \"dnsmasq-dns-7d978555f9-ssx5j\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.882730 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6cf6b855c9-q9s4l" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.946522 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-config-data-custom\") pod \"heat-api-85d44b87d9-sb6h8\" (UID: \"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5\") " pod="openstack/heat-api-85d44b87d9-sb6h8" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.946750 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbrvx\" (UniqueName: \"kubernetes.io/projected/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-kube-api-access-nbrvx\") pod \"heat-api-85d44b87d9-sb6h8\" (UID: \"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5\") " pod="openstack/heat-api-85d44b87d9-sb6h8" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.946984 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-config-data\") pod \"heat-api-85d44b87d9-sb6h8\" (UID: \"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5\") " pod="openstack/heat-api-85d44b87d9-sb6h8" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.947055 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-combined-ca-bundle\") pod \"heat-api-85d44b87d9-sb6h8\" (UID: \"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5\") " pod="openstack/heat-api-85d44b87d9-sb6h8" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.949092 4991 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7dd8c47754-fht6g" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.995081 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:00:16 crc kubenswrapper[4991]: I0929 10:00:16.996986 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="092ef327-6f29-4765-9f4a-ff9db4679277" path="/var/lib/kubelet/pods/092ef327-6f29-4765-9f4a-ff9db4679277/volumes" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.049527 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-config-data\") pod \"heat-api-85d44b87d9-sb6h8\" (UID: \"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5\") " pod="openstack/heat-api-85d44b87d9-sb6h8" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.049602 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-combined-ca-bundle\") pod \"heat-api-85d44b87d9-sb6h8\" (UID: \"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5\") " pod="openstack/heat-api-85d44b87d9-sb6h8" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.049686 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-config-data-custom\") pod \"heat-api-85d44b87d9-sb6h8\" (UID: \"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5\") " pod="openstack/heat-api-85d44b87d9-sb6h8" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.049812 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbrvx\" (UniqueName: \"kubernetes.io/projected/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-kube-api-access-nbrvx\") pod \"heat-api-85d44b87d9-sb6h8\" (UID: \"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5\") " pod="openstack/heat-api-85d44b87d9-sb6h8" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.056332 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-combined-ca-bundle\") pod \"heat-api-85d44b87d9-sb6h8\" (UID: \"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5\") " pod="openstack/heat-api-85d44b87d9-sb6h8" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.057363 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-config-data-custom\") pod \"heat-api-85d44b87d9-sb6h8\" (UID: \"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5\") " pod="openstack/heat-api-85d44b87d9-sb6h8" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.057534 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-config-data\") pod \"heat-api-85d44b87d9-sb6h8\" (UID: \"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5\") " pod="openstack/heat-api-85d44b87d9-sb6h8" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.065377 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e010d53d-2afc-49b5-ad9a-09054a8855a1","Type":"ContainerStarted","Data":"8da3fc196a457411ebc7c1d53826aebdfc4687dc26fa9d0622afc91bfa3aae22"} Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 
10:00:17.076233 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"31b1dc43-5df1-49fb-aeeb-3f31a64f95bf","Type":"ContainerDied","Data":"ad0bd582fc88427d73f9c699a3b4f7b9889b5f3cd01326c86b198c91e6e5b99d"} Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.076285 4991 scope.go:117] "RemoveContainer" containerID="de644f415be6e2fb056f4be7401f56c30acc7ca8a89500cac1ae3ce86a10d074" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.076328 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.091722 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbrvx\" (UniqueName: \"kubernetes.io/projected/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-kube-api-access-nbrvx\") pod \"heat-api-85d44b87d9-sb6h8\" (UID: \"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5\") " pod="openstack/heat-api-85d44b87d9-sb6h8" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.125131 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.148526 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.200075 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.222704 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.222861 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.230786 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.231919 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.282090 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5dbbf03-2962-4e7c-9e79-516a9222cabc-scripts\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.282186 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5dbbf03-2962-4e7c-9e79-516a9222cabc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.282336 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sgdc\" (UniqueName: \"kubernetes.io/projected/f5dbbf03-2962-4e7c-9e79-516a9222cabc-kube-api-access-2sgdc\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.282376 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.282456 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5dbbf03-2962-4e7c-9e79-516a9222cabc-config-data\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.282585 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5dbbf03-2962-4e7c-9e79-516a9222cabc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.282642 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5dbbf03-2962-4e7c-9e79-516a9222cabc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.282674 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5dbbf03-2962-4e7c-9e79-516a9222cabc-logs\") pod \"glance-default-external-api-0\" 
(UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.316332 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-85d44b87d9-sb6h8" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.379407 4991 scope.go:117] "RemoveContainer" containerID="426b586cfc07b8c9136f4451cc76e43f50158eccfbe1d6d84c9b989b2bf35d9e" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.390426 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5dbbf03-2962-4e7c-9e79-516a9222cabc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.390545 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.390573 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sgdc\" (UniqueName: \"kubernetes.io/projected/f5dbbf03-2962-4e7c-9e79-516a9222cabc-kube-api-access-2sgdc\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.390620 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5dbbf03-2962-4e7c-9e79-516a9222cabc-config-data\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.390689 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5dbbf03-2962-4e7c-9e79-516a9222cabc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.390720 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5dbbf03-2962-4e7c-9e79-516a9222cabc-logs\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.390734 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5dbbf03-2962-4e7c-9e79-516a9222cabc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.390835 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5dbbf03-2962-4e7c-9e79-516a9222cabc-scripts\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc 
kubenswrapper[4991]: I0929 10:00:17.398405 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.400641 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5dbbf03-2962-4e7c-9e79-516a9222cabc-scripts\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.400975 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5dbbf03-2962-4e7c-9e79-516a9222cabc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.401013 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5dbbf03-2962-4e7c-9e79-516a9222cabc-config-data\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.401194 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5dbbf03-2962-4e7c-9e79-516a9222cabc-logs\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.424622 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5dbbf03-2962-4e7c-9e79-516a9222cabc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.427021 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5dbbf03-2962-4e7c-9e79-516a9222cabc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.449889 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sgdc\" (UniqueName: \"kubernetes.io/projected/f5dbbf03-2962-4e7c-9e79-516a9222cabc-kube-api-access-2sgdc\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.511201 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"f5dbbf03-2962-4e7c-9e79-516a9222cabc\") " pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.611331 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 29 10:00:17 crc kubenswrapper[4991]: I0929 10:00:17.783101 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6cf6b855c9-q9s4l"] Sep 29 10:00:18 crc kubenswrapper[4991]: I0929 10:00:18.083267 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7dd8c47754-fht6g"] Sep 29 10:00:18 crc kubenswrapper[4991]: I0929 10:00:18.102937 4991 generic.go:334] "Generic (PLEG): container finished" podID="75e87bd6-4abf-47cb-81f1-8e691471aa97" containerID="4c45554525b032e47ddecb76aa78409794fe183d471cbe5b75911417750365f3" exitCode=0 Sep 29 10:00:18 crc kubenswrapper[4991]: I0929 10:00:18.103049 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75e87bd6-4abf-47cb-81f1-8e691471aa97","Type":"ContainerDied","Data":"4c45554525b032e47ddecb76aa78409794fe183d471cbe5b75911417750365f3"} Sep 29 10:00:18 crc kubenswrapper[4991]: I0929 10:00:18.104843 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6cf6b855c9-q9s4l" event={"ID":"e0901bda-8477-431d-b7a4-6c78208f5f13","Type":"ContainerStarted","Data":"88858ae002ca8c4305b30b90105b1dcb27c0c9473b3cfa3c2eb3db27e1197a1b"} Sep 29 10:00:18 crc kubenswrapper[4991]: I0929 10:00:18.107048 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56","Type":"ContainerStarted","Data":"1ea7c1bc13a12ae88ed91f94d1305e9805b0050378945c84caf52dd76e0bf73e"} Sep 29 10:00:18 crc kubenswrapper[4991]: I0929 10:00:18.107207 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" containerName="ceilometer-central-agent" containerID="cri-o://2a438aca7d0c8353e09b072bbbdeb7111bd7ff614fb5c1b1311f5b311eb99837" gracePeriod=30 Sep 29 10:00:18 crc kubenswrapper[4991]: I0929 10:00:18.107278 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" containerName="sg-core" containerID="cri-o://0af1d03d25c6e180c1dce2d2f8bf8b8aa37fbeb37048141a589381bc8230687c" gracePeriod=30 Sep 29 10:00:18 crc kubenswrapper[4991]: I0929 10:00:18.107248 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" containerName="proxy-httpd" containerID="cri-o://1ea7c1bc13a12ae88ed91f94d1305e9805b0050378945c84caf52dd76e0bf73e" gracePeriod=30 Sep 29 10:00:18 crc kubenswrapper[4991]: I0929 10:00:18.107345 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" containerName="ceilometer-notification-agent" containerID="cri-o://238a97edf0d76eb919e5cad62b00f9fb2986252a265cc8d35242aec37907e036" gracePeriod=30 Sep 29 10:00:18 crc kubenswrapper[4991]: I0929 10:00:18.107612 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Sep 29 10:00:18 crc kubenswrapper[4991]: W0929 10:00:18.118159 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1426bfad_c3dc_4e90_b43b_7720410b6fbf.slice/crio-7855ad387de8fbc3e1c30f53ce86a31d6554ab500dcc2ef6aa83c01895845d36 WatchSource:0}: Error finding container 
7855ad387de8fbc3e1c30f53ce86a31d6554ab500dcc2ef6aa83c01895845d36: Status 404 returned error can't find the container with id 7855ad387de8fbc3e1c30f53ce86a31d6554ab500dcc2ef6aa83c01895845d36 Sep 29 10:00:18 crc kubenswrapper[4991]: I0929 10:00:18.143491 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.4919826179999998 podStartE2EDuration="11.143470061s" podCreationTimestamp="2025-09-29 10:00:07 +0000 UTC" firstStartedPulling="2025-09-29 10:00:08.29716696 +0000 UTC m=+1344.153094988" lastFinishedPulling="2025-09-29 10:00:16.948654403 +0000 UTC m=+1352.804582431" observedRunningTime="2025-09-29 10:00:18.139785074 +0000 UTC m=+1353.995713102" watchObservedRunningTime="2025-09-29 10:00:18.143470061 +0000 UTC m=+1353.999398089" Sep 29 10:00:18 crc kubenswrapper[4991]: I0929 10:00:18.768413 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-85d44b87d9-sb6h8"] Sep 29 10:00:18 crc kubenswrapper[4991]: I0929 10:00:18.796495 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-ssx5j"] Sep 29 10:00:18 crc kubenswrapper[4991]: W0929 10:00:18.880008 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5d6f6e0_b63a_4a01_a7ce_cc746cea63e5.slice/crio-5d11a813caa84e69db8b24be6b3a36dd7fe116b6d5550be7d9436014576f2ce9 WatchSource:0}: Error finding container 5d11a813caa84e69db8b24be6b3a36dd7fe116b6d5550be7d9436014576f2ce9: Status 404 returned error can't find the container with id 5d11a813caa84e69db8b24be6b3a36dd7fe116b6d5550be7d9436014576f2ce9 Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:18.993923 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31b1dc43-5df1-49fb-aeeb-3f31a64f95bf" path="/var/lib/kubelet/pods/31b1dc43-5df1-49fb-aeeb-3f31a64f95bf/volumes" Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.017696 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.290194 4991 generic.go:334] "Generic (PLEG): container finished" podID="74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" containerID="1ea7c1bc13a12ae88ed91f94d1305e9805b0050378945c84caf52dd76e0bf73e" exitCode=0 Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.290239 4991 generic.go:334] "Generic (PLEG): container finished" podID="74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" containerID="0af1d03d25c6e180c1dce2d2f8bf8b8aa37fbeb37048141a589381bc8230687c" exitCode=2 Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.290252 4991 generic.go:334] "Generic (PLEG): container finished" podID="74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" containerID="238a97edf0d76eb919e5cad62b00f9fb2986252a265cc8d35242aec37907e036" exitCode=0 Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.290334 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56","Type":"ContainerDied","Data":"1ea7c1bc13a12ae88ed91f94d1305e9805b0050378945c84caf52dd76e0bf73e"} Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.290366 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56","Type":"ContainerDied","Data":"0af1d03d25c6e180c1dce2d2f8bf8b8aa37fbeb37048141a589381bc8230687c"} Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.290378 4991 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56","Type":"ContainerDied","Data":"238a97edf0d76eb919e5cad62b00f9fb2986252a265cc8d35242aec37907e036"} Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.340469 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" event={"ID":"650279b3-4add-4845-b440-bde55c40efb7","Type":"ContainerStarted","Data":"354a0ca2f087e75ea43e149926067806e8402b62197ec2c5f1865adbd7119092"} Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.355777 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-85d44b87d9-sb6h8" event={"ID":"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5","Type":"ContainerStarted","Data":"5d11a813caa84e69db8b24be6b3a36dd7fe116b6d5550be7d9436014576f2ce9"} Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.374004 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e010d53d-2afc-49b5-ad9a-09054a8855a1","Type":"ContainerStarted","Data":"e85faf40eddc675f3c8b40c41574ab6286fc1f79644970f0b0b41ff708d81bf9"} Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.376717 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.378448 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7dd8c47754-fht6g" event={"ID":"1426bfad-c3dc-4e90-b43b-7720410b6fbf","Type":"ContainerStarted","Data":"7855ad387de8fbc3e1c30f53ce86a31d6554ab500dcc2ef6aa83c01895845d36"} Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.385472 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5dbbf03-2962-4e7c-9e79-516a9222cabc","Type":"ContainerStarted","Data":"174cfc42fd0c9ef7fa32cd05fab0f5d1d2459a18cacffb2f0c5420d136912354"} Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.391245 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6cf6b855c9-q9s4l" event={"ID":"e0901bda-8477-431d-b7a4-6c78208f5f13","Type":"ContainerStarted","Data":"e6419595f72972f9cb95f36d1762e9a8ef83199b5f0b5e1d2340dfa2124af0a6"} Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.392764 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6cf6b855c9-q9s4l" Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.453899 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6cf6b855c9-q9s4l" podStartSLOduration=3.453881201 podStartE2EDuration="3.453881201s" podCreationTimestamp="2025-09-29 10:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:00:19.450254185 +0000 UTC m=+1355.306182213" watchObservedRunningTime="2025-09-29 10:00:19.453881201 +0000 UTC m=+1355.309809229" Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.455733 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-combined-ca-bundle\") pod \"75e87bd6-4abf-47cb-81f1-8e691471aa97\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.456111 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/75e87bd6-4abf-47cb-81f1-8e691471aa97-httpd-run\") pod \"75e87bd6-4abf-47cb-81f1-8e691471aa97\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.456156 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-scripts\") pod \"75e87bd6-4abf-47cb-81f1-8e691471aa97\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.456203 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"75e87bd6-4abf-47cb-81f1-8e691471aa97\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.456246 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-config-data\") pod \"75e87bd6-4abf-47cb-81f1-8e691471aa97\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.456297 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp7kf\" (UniqueName: \"kubernetes.io/projected/75e87bd6-4abf-47cb-81f1-8e691471aa97-kube-api-access-zp7kf\") pod \"75e87bd6-4abf-47cb-81f1-8e691471aa97\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.456327 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75e87bd6-4abf-47cb-81f1-8e691471aa97-logs\") pod \"75e87bd6-4abf-47cb-81f1-8e691471aa97\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.456365 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-internal-tls-certs\") pod \"75e87bd6-4abf-47cb-81f1-8e691471aa97\" (UID: \"75e87bd6-4abf-47cb-81f1-8e691471aa97\") " Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.478727 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75e87bd6-4abf-47cb-81f1-8e691471aa97-logs" (OuterVolumeSpecName: "logs") pod "75e87bd6-4abf-47cb-81f1-8e691471aa97" (UID: "75e87bd6-4abf-47cb-81f1-8e691471aa97"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.479006 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75e87bd6-4abf-47cb-81f1-8e691471aa97-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "75e87bd6-4abf-47cb-81f1-8e691471aa97" (UID: "75e87bd6-4abf-47cb-81f1-8e691471aa97"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.486258 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "75e87bd6-4abf-47cb-81f1-8e691471aa97" (UID: "75e87bd6-4abf-47cb-81f1-8e691471aa97"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.489156 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75e87bd6-4abf-47cb-81f1-8e691471aa97-kube-api-access-zp7kf" (OuterVolumeSpecName: "kube-api-access-zp7kf") pod "75e87bd6-4abf-47cb-81f1-8e691471aa97" (UID: "75e87bd6-4abf-47cb-81f1-8e691471aa97"). InnerVolumeSpecName "kube-api-access-zp7kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.497216 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-scripts" (OuterVolumeSpecName: "scripts") pod "75e87bd6-4abf-47cb-81f1-8e691471aa97" (UID: "75e87bd6-4abf-47cb-81f1-8e691471aa97"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.561588 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp7kf\" (UniqueName: \"kubernetes.io/projected/75e87bd6-4abf-47cb-81f1-8e691471aa97-kube-api-access-zp7kf\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.561620 4991 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75e87bd6-4abf-47cb-81f1-8e691471aa97-logs\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.561631 4991 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75e87bd6-4abf-47cb-81f1-8e691471aa97-httpd-run\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.561641 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.561663 4991 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.616614 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-config-data" (OuterVolumeSpecName: "config-data") pod "75e87bd6-4abf-47cb-81f1-8e691471aa97" (UID: "75e87bd6-4abf-47cb-81f1-8e691471aa97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.645562 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "75e87bd6-4abf-47cb-81f1-8e691471aa97" (UID: "75e87bd6-4abf-47cb-81f1-8e691471aa97"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.651155 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75e87bd6-4abf-47cb-81f1-8e691471aa97" (UID: "75e87bd6-4abf-47cb-81f1-8e691471aa97"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.662487 4991 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.663468 4991 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.663495 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.663506 4991 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:19 crc kubenswrapper[4991]: I0929 10:00:19.663516 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75e87bd6-4abf-47cb-81f1-8e691471aa97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.451773 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"75e87bd6-4abf-47cb-81f1-8e691471aa97","Type":"ContainerDied","Data":"fab0189d285af21120d8c57ac2070f133cd88d399fda0aa96fa4a36459fbdc16"} Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.452035 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.461672 4991 scope.go:117] "RemoveContainer" containerID="4c45554525b032e47ddecb76aa78409794fe183d471cbe5b75911417750365f3" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.483054 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e010d53d-2afc-49b5-ad9a-09054a8855a1","Type":"ContainerStarted","Data":"db326e0b5fa89378ac823cc8137a792d1c26335bd0c4c4c7aa3b8ce3fee7fc4b"} Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.483099 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.490778 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5dbbf03-2962-4e7c-9e79-516a9222cabc","Type":"ContainerStarted","Data":"e5f850854c5fedb5ad65489fb89353f44b7546dd5fbae0d53442434b51f8955d"} Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.492473 4991 generic.go:334] "Generic (PLEG): container finished" podID="650279b3-4add-4845-b440-bde55c40efb7" containerID="9955714aeb643fff4485e8cef89bcdcd6fe095b0e49eaa737387657db4a5ba7c" exitCode=0 Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.492520 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" event={"ID":"650279b3-4add-4845-b440-bde55c40efb7","Type":"ContainerDied","Data":"9955714aeb643fff4485e8cef89bcdcd6fe095b0e49eaa737387657db4a5ba7c"} Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.533543 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" 
podStartSLOduration=5.533525056 podStartE2EDuration="5.533525056s" podCreationTimestamp="2025-09-29 10:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:00:20.520172454 +0000 UTC m=+1356.376100482" watchObservedRunningTime="2025-09-29 10:00:20.533525056 +0000 UTC m=+1356.389453084" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.586121 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.614035 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.624971 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 29 10:00:20 crc kubenswrapper[4991]: E0929 10:00:20.625578 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75e87bd6-4abf-47cb-81f1-8e691471aa97" containerName="glance-log" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.625600 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="75e87bd6-4abf-47cb-81f1-8e691471aa97" containerName="glance-log" Sep 29 10:00:20 crc kubenswrapper[4991]: E0929 10:00:20.625630 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75e87bd6-4abf-47cb-81f1-8e691471aa97" containerName="glance-httpd" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.625638 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="75e87bd6-4abf-47cb-81f1-8e691471aa97" containerName="glance-httpd" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.625892 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="75e87bd6-4abf-47cb-81f1-8e691471aa97" containerName="glance-log" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.625909 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="75e87bd6-4abf-47cb-81f1-8e691471aa97" containerName="glance-httpd" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.627452 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.632484 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.632908 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.635380 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.698084 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjv8n\" (UniqueName: \"kubernetes.io/projected/2c1d0fd8-5e0c-4dde-b023-b288e5fc6318-kube-api-access-qjv8n\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.698147 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c1d0fd8-5e0c-4dde-b023-b288e5fc6318-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.698283 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2c1d0fd8-5e0c-4dde-b023-b288e5fc6318-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.698347 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c1d0fd8-5e0c-4dde-b023-b288e5fc6318-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.698423 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c1d0fd8-5e0c-4dde-b023-b288e5fc6318-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.698446 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.698508 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c1d0fd8-5e0c-4dde-b023-b288e5fc6318-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.698582 4991 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c1d0fd8-5e0c-4dde-b023-b288e5fc6318-logs\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.800357 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjv8n\" (UniqueName: \"kubernetes.io/projected/2c1d0fd8-5e0c-4dde-b023-b288e5fc6318-kube-api-access-qjv8n\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.800462 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c1d0fd8-5e0c-4dde-b023-b288e5fc6318-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.800531 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2c1d0fd8-5e0c-4dde-b023-b288e5fc6318-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.800571 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c1d0fd8-5e0c-4dde-b023-b288e5fc6318-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.800625 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c1d0fd8-5e0c-4dde-b023-b288e5fc6318-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.800647 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.800685 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c1d0fd8-5e0c-4dde-b023-b288e5fc6318-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.800742 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c1d0fd8-5e0c-4dde-b023-b288e5fc6318-logs\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.801463 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2c1d0fd8-5e0c-4dde-b023-b288e5fc6318-logs\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.802459 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.802481 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2c1d0fd8-5e0c-4dde-b023-b288e5fc6318-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.811556 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c1d0fd8-5e0c-4dde-b023-b288e5fc6318-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.824570 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c1d0fd8-5e0c-4dde-b023-b288e5fc6318-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.835118 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c1d0fd8-5e0c-4dde-b023-b288e5fc6318-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.835127 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c1d0fd8-5e0c-4dde-b023-b288e5fc6318-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.847785 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjv8n\" (UniqueName: \"kubernetes.io/projected/2c1d0fd8-5e0c-4dde-b023-b288e5fc6318-kube-api-access-qjv8n\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.867677 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318\") " pod="openstack/glance-default-internal-api-0" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 10:00:20.941835 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75e87bd6-4abf-47cb-81f1-8e691471aa97" path="/var/lib/kubelet/pods/75e87bd6-4abf-47cb-81f1-8e691471aa97/volumes" Sep 29 10:00:20 crc kubenswrapper[4991]: I0929 
10:00:20.985658 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Sep 29 10:00:21 crc kubenswrapper[4991]: I0929 10:00:21.508429 4991 generic.go:334] "Generic (PLEG): container finished" podID="74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" containerID="2a438aca7d0c8353e09b072bbbdeb7111bd7ff614fb5c1b1311f5b311eb99837" exitCode=0
Sep 29 10:00:21 crc kubenswrapper[4991]: I0929 10:00:21.509388 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56","Type":"ContainerDied","Data":"2a438aca7d0c8353e09b072bbbdeb7111bd7ff614fb5c1b1311f5b311eb99837"}
Sep 29 10:00:22 crc kubenswrapper[4991]: I0929 10:00:22.279706 4991 scope.go:117] "RemoveContainer" containerID="66936dbde83cf16544935f7f9c802f5ebc85010da4e4c10a9b12986eaa3a427e"
Sep 29 10:00:22 crc kubenswrapper[4991]: I0929 10:00:22.730500 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6f5fb95b6d-nzndm"]
Sep 29 10:00:22 crc kubenswrapper[4991]: I0929 10:00:22.733311 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6f5fb95b6d-nzndm"
Sep 29 10:00:22 crc kubenswrapper[4991]: I0929 10:00:22.774683 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6f5fb95b6d-nzndm"]
Sep 29 10:00:22 crc kubenswrapper[4991]: I0929 10:00:22.799668 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-66948fffc6-z52jl"]
Sep 29 10:00:22 crc kubenswrapper[4991]: I0929 10:00:22.801307 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66948fffc6-z52jl"
Sep 29 10:00:22 crc kubenswrapper[4991]: I0929 10:00:22.838293 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-66948fffc6-z52jl"]
Sep 29 10:00:22 crc kubenswrapper[4991]: I0929 10:00:22.847729 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9552748a-88cb-46f0-8fe3-545d50c9204c-config-data\") pod \"heat-engine-6f5fb95b6d-nzndm\" (UID: \"9552748a-88cb-46f0-8fe3-545d50c9204c\") " pod="openstack/heat-engine-6f5fb95b6d-nzndm"
Sep 29 10:00:22 crc kubenswrapper[4991]: I0929 10:00:22.847842 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9552748a-88cb-46f0-8fe3-545d50c9204c-combined-ca-bundle\") pod \"heat-engine-6f5fb95b6d-nzndm\" (UID: \"9552748a-88cb-46f0-8fe3-545d50c9204c\") " pod="openstack/heat-engine-6f5fb95b6d-nzndm"
Sep 29 10:00:22 crc kubenswrapper[4991]: I0929 10:00:22.847909 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-566jz\" (UniqueName: \"kubernetes.io/projected/9552748a-88cb-46f0-8fe3-545d50c9204c-kube-api-access-566jz\") pod \"heat-engine-6f5fb95b6d-nzndm\" (UID: \"9552748a-88cb-46f0-8fe3-545d50c9204c\") " pod="openstack/heat-engine-6f5fb95b6d-nzndm"
Sep 29 10:00:22 crc kubenswrapper[4991]: I0929 10:00:22.848003 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9552748a-88cb-46f0-8fe3-545d50c9204c-config-data-custom\") pod \"heat-engine-6f5fb95b6d-nzndm\" (UID: \"9552748a-88cb-46f0-8fe3-545d50c9204c\") " pod="openstack/heat-engine-6f5fb95b6d-nzndm"
Sep 29 10:00:22 crc kubenswrapper[4991]: I0929 10:00:22.857075 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-f95b7cbf8-2z94s"]
Sep 29 10:00:22 crc kubenswrapper[4991]: I0929 10:00:22.858718 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-f95b7cbf8-2z94s"
Sep 29 10:00:22 crc kubenswrapper[4991]: I0929 10:00:22.880886 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-f95b7cbf8-2z94s"]
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.002598 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063f0d1e-24d4-4421-a966-da4be852e431-combined-ca-bundle\") pod \"heat-api-f95b7cbf8-2z94s\" (UID: \"063f0d1e-24d4-4421-a966-da4be852e431\") " pod="openstack/heat-api-f95b7cbf8-2z94s"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.002845 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10260c4-1d6b-4b35-a1f2-40e0b034a659-combined-ca-bundle\") pod \"heat-cfnapi-66948fffc6-z52jl\" (UID: \"b10260c4-1d6b-4b35-a1f2-40e0b034a659\") " pod="openstack/heat-cfnapi-66948fffc6-z52jl"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.003170 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-566jz\" (UniqueName: \"kubernetes.io/projected/9552748a-88cb-46f0-8fe3-545d50c9204c-kube-api-access-566jz\") pod \"heat-engine-6f5fb95b6d-nzndm\" (UID: \"9552748a-88cb-46f0-8fe3-545d50c9204c\") " pod="openstack/heat-engine-6f5fb95b6d-nzndm"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.003314 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063f0d1e-24d4-4421-a966-da4be852e431-config-data\") pod \"heat-api-f95b7cbf8-2z94s\" (UID: \"063f0d1e-24d4-4421-a966-da4be852e431\") " pod="openstack/heat-api-f95b7cbf8-2z94s"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.003362 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7dq7\" (UniqueName: \"kubernetes.io/projected/b10260c4-1d6b-4b35-a1f2-40e0b034a659-kube-api-access-m7dq7\") pod \"heat-cfnapi-66948fffc6-z52jl\" (UID: \"b10260c4-1d6b-4b35-a1f2-40e0b034a659\") " pod="openstack/heat-cfnapi-66948fffc6-z52jl"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.003428 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b10260c4-1d6b-4b35-a1f2-40e0b034a659-config-data-custom\") pod \"heat-cfnapi-66948fffc6-z52jl\" (UID: \"b10260c4-1d6b-4b35-a1f2-40e0b034a659\") " pod="openstack/heat-cfnapi-66948fffc6-z52jl"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.003488 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/063f0d1e-24d4-4421-a966-da4be852e431-config-data-custom\") pod \"heat-api-f95b7cbf8-2z94s\" (UID: \"063f0d1e-24d4-4421-a966-da4be852e431\") " pod="openstack/heat-api-f95b7cbf8-2z94s"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.003637 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9552748a-88cb-46f0-8fe3-545d50c9204c-config-data-custom\") pod \"heat-engine-6f5fb95b6d-nzndm\" (UID: \"9552748a-88cb-46f0-8fe3-545d50c9204c\") " pod="openstack/heat-engine-6f5fb95b6d-nzndm"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.003696 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9552748a-88cb-46f0-8fe3-545d50c9204c-config-data\") pod \"heat-engine-6f5fb95b6d-nzndm\" (UID: \"9552748a-88cb-46f0-8fe3-545d50c9204c\") " pod="openstack/heat-engine-6f5fb95b6d-nzndm"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.006108 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10260c4-1d6b-4b35-a1f2-40e0b034a659-config-data\") pod \"heat-cfnapi-66948fffc6-z52jl\" (UID: \"b10260c4-1d6b-4b35-a1f2-40e0b034a659\") " pod="openstack/heat-cfnapi-66948fffc6-z52jl"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.009910 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vntm9\" (UniqueName: \"kubernetes.io/projected/063f0d1e-24d4-4421-a966-da4be852e431-kube-api-access-vntm9\") pod \"heat-api-f95b7cbf8-2z94s\" (UID: \"063f0d1e-24d4-4421-a966-da4be852e431\") " pod="openstack/heat-api-f95b7cbf8-2z94s"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.009991 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9552748a-88cb-46f0-8fe3-545d50c9204c-combined-ca-bundle\") pod \"heat-engine-6f5fb95b6d-nzndm\" (UID: \"9552748a-88cb-46f0-8fe3-545d50c9204c\") " pod="openstack/heat-engine-6f5fb95b6d-nzndm"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.014427 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9552748a-88cb-46f0-8fe3-545d50c9204c-config-data-custom\") pod \"heat-engine-6f5fb95b6d-nzndm\" (UID: \"9552748a-88cb-46f0-8fe3-545d50c9204c\") " pod="openstack/heat-engine-6f5fb95b6d-nzndm"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.018745 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9552748a-88cb-46f0-8fe3-545d50c9204c-config-data\") pod \"heat-engine-6f5fb95b6d-nzndm\" (UID: \"9552748a-88cb-46f0-8fe3-545d50c9204c\") " pod="openstack/heat-engine-6f5fb95b6d-nzndm"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.054685 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.063808 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-566jz\" (UniqueName: \"kubernetes.io/projected/9552748a-88cb-46f0-8fe3-545d50c9204c-kube-api-access-566jz\") pod \"heat-engine-6f5fb95b6d-nzndm\" (UID: \"9552748a-88cb-46f0-8fe3-545d50c9204c\") " pod="openstack/heat-engine-6f5fb95b6d-nzndm"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.067339 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9552748a-88cb-46f0-8fe3-545d50c9204c-combined-ca-bundle\") pod \"heat-engine-6f5fb95b6d-nzndm\" (UID: \"9552748a-88cb-46f0-8fe3-545d50c9204c\") " pod="openstack/heat-engine-6f5fb95b6d-nzndm"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.104773 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6f5fb95b6d-nzndm"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.113265 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10260c4-1d6b-4b35-a1f2-40e0b034a659-combined-ca-bundle\") pod \"heat-cfnapi-66948fffc6-z52jl\" (UID: \"b10260c4-1d6b-4b35-a1f2-40e0b034a659\") " pod="openstack/heat-cfnapi-66948fffc6-z52jl"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.113343 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063f0d1e-24d4-4421-a966-da4be852e431-config-data\") pod \"heat-api-f95b7cbf8-2z94s\" (UID: \"063f0d1e-24d4-4421-a966-da4be852e431\") " pod="openstack/heat-api-f95b7cbf8-2z94s"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.113368 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7dq7\" (UniqueName: \"kubernetes.io/projected/b10260c4-1d6b-4b35-a1f2-40e0b034a659-kube-api-access-m7dq7\") pod \"heat-cfnapi-66948fffc6-z52jl\" (UID: \"b10260c4-1d6b-4b35-a1f2-40e0b034a659\") " pod="openstack/heat-cfnapi-66948fffc6-z52jl"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.113396 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b10260c4-1d6b-4b35-a1f2-40e0b034a659-config-data-custom\") pod \"heat-cfnapi-66948fffc6-z52jl\" (UID: \"b10260c4-1d6b-4b35-a1f2-40e0b034a659\") " pod="openstack/heat-cfnapi-66948fffc6-z52jl"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.113424 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/063f0d1e-24d4-4421-a966-da4be852e431-config-data-custom\") pod \"heat-api-f95b7cbf8-2z94s\" (UID: \"063f0d1e-24d4-4421-a966-da4be852e431\") " pod="openstack/heat-api-f95b7cbf8-2z94s"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.113509 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10260c4-1d6b-4b35-a1f2-40e0b034a659-config-data\") pod \"heat-cfnapi-66948fffc6-z52jl\" (UID: \"b10260c4-1d6b-4b35-a1f2-40e0b034a659\") " pod="openstack/heat-cfnapi-66948fffc6-z52jl"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.113563 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vntm9\" (UniqueName: \"kubernetes.io/projected/063f0d1e-24d4-4421-a966-da4be852e431-kube-api-access-vntm9\") pod \"heat-api-f95b7cbf8-2z94s\" (UID: \"063f0d1e-24d4-4421-a966-da4be852e431\") " pod="openstack/heat-api-f95b7cbf8-2z94s"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.113594 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063f0d1e-24d4-4421-a966-da4be852e431-combined-ca-bundle\") pod \"heat-api-f95b7cbf8-2z94s\" (UID: \"063f0d1e-24d4-4421-a966-da4be852e431\") " pod="openstack/heat-api-f95b7cbf8-2z94s"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.118496 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063f0d1e-24d4-4421-a966-da4be852e431-combined-ca-bundle\") pod \"heat-api-f95b7cbf8-2z94s\" (UID: \"063f0d1e-24d4-4421-a966-da4be852e431\") " pod="openstack/heat-api-f95b7cbf8-2z94s"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.119004 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b10260c4-1d6b-4b35-a1f2-40e0b034a659-config-data-custom\") pod \"heat-cfnapi-66948fffc6-z52jl\" (UID: \"b10260c4-1d6b-4b35-a1f2-40e0b034a659\") " pod="openstack/heat-cfnapi-66948fffc6-z52jl"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.124679 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063f0d1e-24d4-4421-a966-da4be852e431-config-data\") pod \"heat-api-f95b7cbf8-2z94s\" (UID: \"063f0d1e-24d4-4421-a966-da4be852e431\") " pod="openstack/heat-api-f95b7cbf8-2z94s"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.129173 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10260c4-1d6b-4b35-a1f2-40e0b034a659-config-data\") pod \"heat-cfnapi-66948fffc6-z52jl\" (UID: \"b10260c4-1d6b-4b35-a1f2-40e0b034a659\") " pod="openstack/heat-cfnapi-66948fffc6-z52jl"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.131918 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10260c4-1d6b-4b35-a1f2-40e0b034a659-combined-ca-bundle\") pod \"heat-cfnapi-66948fffc6-z52jl\" (UID: \"b10260c4-1d6b-4b35-a1f2-40e0b034a659\") " pod="openstack/heat-cfnapi-66948fffc6-z52jl"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.134994 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7dq7\" (UniqueName: \"kubernetes.io/projected/b10260c4-1d6b-4b35-a1f2-40e0b034a659-kube-api-access-m7dq7\") pod \"heat-cfnapi-66948fffc6-z52jl\" (UID: \"b10260c4-1d6b-4b35-a1f2-40e0b034a659\") " pod="openstack/heat-cfnapi-66948fffc6-z52jl"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.146326 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vntm9\" (UniqueName: \"kubernetes.io/projected/063f0d1e-24d4-4421-a966-da4be852e431-kube-api-access-vntm9\") pod \"heat-api-f95b7cbf8-2z94s\" (UID: \"063f0d1e-24d4-4421-a966-da4be852e431\") " pod="openstack/heat-api-f95b7cbf8-2z94s"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.147810 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/063f0d1e-24d4-4421-a966-da4be852e431-config-data-custom\") pod \"heat-api-f95b7cbf8-2z94s\" (UID: \"063f0d1e-24d4-4421-a966-da4be852e431\") " pod="openstack/heat-api-f95b7cbf8-2z94s"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.213157 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66948fffc6-z52jl"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.215702 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-log-httpd\") pod \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") "
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.215764 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-sg-core-conf-yaml\") pod \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") "
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.215808 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-config-data\") pod \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") "
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.215885 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-scripts\") pod \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") "
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.216116 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-combined-ca-bundle\") pod \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") "
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.216153 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkk2h\" (UniqueName: \"kubernetes.io/projected/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-kube-api-access-fkk2h\") pod \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") "
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.216180 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-run-httpd\") pod \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\" (UID: \"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56\") "
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.217448 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" (UID: "74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.218246 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" (UID: "74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.223464 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-kube-api-access-fkk2h" (OuterVolumeSpecName: "kube-api-access-fkk2h") pod "74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" (UID: "74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56"). InnerVolumeSpecName "kube-api-access-fkk2h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.224492 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-scripts" (OuterVolumeSpecName: "scripts") pod "74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" (UID: "74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.270522 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-f95b7cbf8-2z94s"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.318634 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkk2h\" (UniqueName: \"kubernetes.io/projected/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-kube-api-access-fkk2h\") on node \"crc\" DevicePath \"\""
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.318855 4991 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-run-httpd\") on node \"crc\" DevicePath \"\""
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.318915 4991 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-log-httpd\") on node \"crc\" DevicePath \"\""
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.319015 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-scripts\") on node \"crc\" DevicePath \"\""
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.346400 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" (UID: "74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.379063 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.422419 4991 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.541205 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" (UID: "74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.604560 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-config-data" (OuterVolumeSpecName: "config-data") pod "74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" (UID: "74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.643026 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.643058 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56-config-data\") on node \"crc\" DevicePath \"\""
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.658685 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-85d44b87d9-sb6h8" event={"ID":"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5","Type":"ContainerStarted","Data":"22642117f9312c9fd11e1cec59185adfb82a3efef260de7f91e9010a9a4acf73"}
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.658745 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-85d44b87d9-sb6h8"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.678877 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7dd8c47754-fht6g" event={"ID":"1426bfad-c3dc-4e90-b43b-7720410b6fbf","Type":"ContainerStarted","Data":"8eb6aa3af545ffc234f76dda2f9fffb196f0aa9c201c710903838a7cf54d8680"}
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.679393 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7dd8c47754-fht6g"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.690119 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-85d44b87d9-sb6h8" podStartSLOduration=4.150009754 podStartE2EDuration="7.690101668s" podCreationTimestamp="2025-09-29 10:00:16 +0000 UTC" firstStartedPulling="2025-09-29 10:00:18.902036346 +0000 UTC m=+1354.757964374" lastFinishedPulling="2025-09-29 10:00:22.44212826 +0000 UTC m=+1358.298056288" observedRunningTime="2025-09-29 10:00:23.675904755 +0000 UTC m=+1359.531832783" watchObservedRunningTime="2025-09-29 10:00:23.690101668 +0000 UTC m=+1359.546029696"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.717153 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56","Type":"ContainerDied","Data":"e5093213a399e1c127a0685a82b3005f25e87e0ebeb1eaf035713b1b6b013a37"}
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.717361 4991 scope.go:117] "RemoveContainer" containerID="1ea7c1bc13a12ae88ed91f94d1305e9805b0050378945c84caf52dd76e0bf73e"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.717303 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.721515 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" event={"ID":"650279b3-4add-4845-b440-bde55c40efb7","Type":"ContainerStarted","Data":"06e50afa93d73c3d2c825326d8901994bf8885edfa459c9a32c5fbb8e47cbc84"}
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.721735 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d978555f9-ssx5j"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.722345 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7dd8c47754-fht6g" podStartSLOduration=3.428076361 podStartE2EDuration="7.722329384s" podCreationTimestamp="2025-09-29 10:00:16 +0000 UTC" firstStartedPulling="2025-09-29 10:00:18.147878637 +0000 UTC m=+1354.003806665" lastFinishedPulling="2025-09-29 10:00:22.44213166 +0000 UTC m=+1358.298059688" observedRunningTime="2025-09-29 10:00:23.69701706 +0000 UTC m=+1359.552945098" watchObservedRunningTime="2025-09-29 10:00:23.722329384 +0000 UTC m=+1359.578257412"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.734294 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318","Type":"ContainerStarted","Data":"f93d0bcbb0eadd4b0ead8ac9129301ff7ddf9d07c510937ed329aa4a212968bb"}
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.767886 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" podStartSLOduration=7.767869181 podStartE2EDuration="7.767869181s" podCreationTimestamp="2025-09-29 10:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:00:23.764633336 +0000 UTC m=+1359.620561364" watchObservedRunningTime="2025-09-29 10:00:23.767869181 +0000 UTC m=+1359.623797209"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.791512 4991 scope.go:117] "RemoveContainer" containerID="0af1d03d25c6e180c1dce2d2f8bf8b8aa37fbeb37048141a589381bc8230687c"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.834994 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.880996 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.903317 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Sep 29 10:00:23 crc kubenswrapper[4991]: E0929 10:00:23.903834 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" containerName="ceilometer-notification-agent"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.903849 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" containerName="ceilometer-notification-agent"
Sep 29 10:00:23 crc kubenswrapper[4991]: E0929 10:00:23.903879 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" containerName="ceilometer-central-agent"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.903888 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" containerName="ceilometer-central-agent"
Sep 29 10:00:23 crc kubenswrapper[4991]: E0929 10:00:23.903910 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" containerName="proxy-httpd"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.903918 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" containerName="proxy-httpd"
Sep 29 10:00:23 crc kubenswrapper[4991]: E0929 10:00:23.903943 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" containerName="sg-core"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.903966 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" containerName="sg-core"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.904277 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" containerName="proxy-httpd"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.904302 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" containerName="sg-core"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.904313 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" containerName="ceilometer-notification-agent"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.904341 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" containerName="ceilometer-central-agent"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.906708 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.917234 4991 scope.go:117] "RemoveContainer" containerID="238a97edf0d76eb919e5cad62b00f9fb2986252a265cc8d35242aec37907e036"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.917659 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.918365 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Sep 29 10:00:23 crc kubenswrapper[4991]: I0929 10:00:23.924615 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.008220 4991 scope.go:117] "RemoveContainer" containerID="2a438aca7d0c8353e09b072bbbdeb7111bd7ff614fb5c1b1311f5b311eb99837"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.055204 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-scripts\") pod \"ceilometer-0\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.055292 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2n27\" (UniqueName: \"kubernetes.io/projected/5f806a03-4360-4588-bd39-3a58058c26a5-kube-api-access-h2n27\") pod \"ceilometer-0\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.055331 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.055391 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.055419 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f806a03-4360-4588-bd39-3a58058c26a5-log-httpd\") pod \"ceilometer-0\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.055441 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-config-data\") pod \"ceilometer-0\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.055485 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f806a03-4360-4588-bd39-3a58058c26a5-run-httpd\") pod \"ceilometer-0\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.158019 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.158122 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.158157 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f806a03-4360-4588-bd39-3a58058c26a5-log-httpd\") pod \"ceilometer-0\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.158183 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-config-data\") pod \"ceilometer-0\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.158239 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f806a03-4360-4588-bd39-3a58058c26a5-run-httpd\") pod \"ceilometer-0\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.158882 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-scripts\") pod \"ceilometer-0\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.159620 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f806a03-4360-4588-bd39-3a58058c26a5-log-httpd\") pod \"ceilometer-0\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.160373 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f806a03-4360-4588-bd39-3a58058c26a5-run-httpd\") pod \"ceilometer-0\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.160550 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2n27\" (UniqueName: \"kubernetes.io/projected/5f806a03-4360-4588-bd39-3a58058c26a5-kube-api-access-h2n27\") pod \"ceilometer-0\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.166508 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-config-data\") pod \"ceilometer-0\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.167526 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-scripts\") pod \"ceilometer-0\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.172764 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.173464 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6f5fb95b6d-nzndm"]
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.174142 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.187832 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2n27\" (UniqueName: \"kubernetes.io/projected/5f806a03-4360-4588-bd39-3a58058c26a5-kube-api-access-h2n27\") pod \"ceilometer-0\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.263461 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.378334 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-66948fffc6-z52jl"]
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.547449 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-f95b7cbf8-2z94s"]
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.759468 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66948fffc6-z52jl" event={"ID":"b10260c4-1d6b-4b35-a1f2-40e0b034a659","Type":"ContainerStarted","Data":"618ff27a1fac1e0b59b8e29cfc434bfd1fb7bca89235d600f9d14d981c824f9f"}
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.769564 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6f5fb95b6d-nzndm" event={"ID":"9552748a-88cb-46f0-8fe3-545d50c9204c","Type":"ContainerStarted","Data":"e970a14b0beb3f46aef04a502600323e69d0fdc0427d7ad4d61515460a98c4c0"}
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.778555 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318","Type":"ContainerStarted","Data":"a27112fae4d522722923256283cbae2e18347114de6e206a6aee43d193c21542"}
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.781518 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5dbbf03-2962-4e7c-9e79-516a9222cabc","Type":"ContainerStarted","Data":"494b150f8213f7d81925233fd9042a4ed5ad409b3d15e46b16a08c3ff4530b37"}
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.785068 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-f95b7cbf8-2z94s" event={"ID":"063f0d1e-24d4-4421-a966-da4be852e431","Type":"ContainerStarted","Data":"36c7159d1fe6afb30dc90154654e72f1a7913bb52141fb1d150f1ad9636eec31"}
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.820055 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.8200187549999995 podStartE2EDuration="7.820018755s" podCreationTimestamp="2025-09-29 10:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:00:24.806967043 +0000 UTC m=+1360.662895081" watchObservedRunningTime="2025-09-29 10:00:24.820018755 +0000 UTC m=+1360.675946773"
Sep 29 10:00:24 crc kubenswrapper[4991]: I0929 10:00:24.968141 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56" path="/var/lib/kubelet/pods/74f88cf7-a6f9-41ef-bbf4-4cf8ae7ecf56/volumes"
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.105672 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.743035 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7dd8c47754-fht6g"]
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.777997 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-85d44b87d9-sb6h8"]
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.806391 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-645f5d8c84-9bf84"]
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.807895 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-645f5d8c84-9bf84"
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.813061 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc"
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.813345 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc"
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.876939 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-645f5d8c84-9bf84"]
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.889038 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-84db4bfdbb-5j5lm"]
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.903117 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-84db4bfdbb-5j5lm"]
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.903273 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm"
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.907249 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc"
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.907479 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc"
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.923653 4991 generic.go:334] "Generic (PLEG): container finished" podID="063f0d1e-24d4-4421-a966-da4be852e431" containerID="122ba6811e165fbf3daca4e2dab2dd395c7d51574c1e8432260602c6f772d46c" exitCode=1
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.923734 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-f95b7cbf8-2z94s" event={"ID":"063f0d1e-24d4-4421-a966-da4be852e431","Type":"ContainerDied","Data":"122ba6811e165fbf3daca4e2dab2dd395c7d51574c1e8432260602c6f772d46c"}
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.925437 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-internal-tls-certs\") pod \"heat-api-645f5d8c84-9bf84\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " pod="openstack/heat-api-645f5d8c84-9bf84"
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.925537 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-public-tls-certs\") pod \"heat-api-645f5d8c84-9bf84\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " pod="openstack/heat-api-645f5d8c84-9bf84"
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.925567 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-config-data\") pod \"heat-api-645f5d8c84-9bf84\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " pod="openstack/heat-api-645f5d8c84-9bf84"
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.925676 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-combined-ca-bundle\") pod \"heat-api-645f5d8c84-9bf84\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " pod="openstack/heat-api-645f5d8c84-9bf84"
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.925723 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wvhl\" (UniqueName: \"kubernetes.io/projected/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-kube-api-access-6wvhl\") pod \"heat-api-645f5d8c84-9bf84\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " pod="openstack/heat-api-645f5d8c84-9bf84"
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.925752 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-config-data-custom\") pod \"heat-api-645f5d8c84-9bf84\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " pod="openstack/heat-api-645f5d8c84-9bf84"
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.926129 4991 scope.go:117] "RemoveContainer" containerID="122ba6811e165fbf3daca4e2dab2dd395c7d51574c1e8432260602c6f772d46c"
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.935767 4991 generic.go:334] "Generic (PLEG): container finished" podID="b10260c4-1d6b-4b35-a1f2-40e0b034a659" containerID="016b8e1d763275eb6144df3d1e54d6fda36809804451813a47e7d785f61e0d57" exitCode=1
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.935841 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66948fffc6-z52jl" event={"ID":"b10260c4-1d6b-4b35-a1f2-40e0b034a659","Type":"ContainerDied","Data":"016b8e1d763275eb6144df3d1e54d6fda36809804451813a47e7d785f61e0d57"}
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.936668 4991 scope.go:117] "RemoveContainer" containerID="016b8e1d763275eb6144df3d1e54d6fda36809804451813a47e7d785f61e0d57"
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.954058 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6f5fb95b6d-nzndm" event={"ID":"9552748a-88cb-46f0-8fe3-545d50c9204c","Type":"ContainerStarted","Data":"db5b533d7aa4ff2269279b2dff89ebbc5d62866e4b25dcd147dfa9ebd3209f8d"}
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.954135 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6f5fb95b6d-nzndm"
Sep 29 10:00:25 crc kubenswrapper[4991]: I0929 10:00:25.960856 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f806a03-4360-4588-bd39-3a58058c26a5","Type":"ContainerStarted","Data":"00c7baafda50bfa72a612938e43db1e031a0f50fb652a5102a2fae1f226739d1"}
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.008376 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2c1d0fd8-5e0c-4dde-b023-b288e5fc6318","Type":"ContainerStarted","Data":"b67d731d538971e9a15f6b6e775cf4e016bff2fd523d12354a4652672a123dec"}
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.008801 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7dd8c47754-fht6g" podUID="1426bfad-c3dc-4e90-b43b-7720410b6fbf" containerName="heat-cfnapi" containerID="cri-o://8eb6aa3af545ffc234f76dda2f9fffb196f0aa9c201c710903838a7cf54d8680" gracePeriod=60
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.010716 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-85d44b87d9-sb6h8" podUID="d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5" containerName="heat-api" containerID="cri-o://22642117f9312c9fd11e1cec59185adfb82a3efef260de7f91e9010a9a4acf73" gracePeriod=60
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.030251 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhk7m\" (UniqueName: \"kubernetes.io/projected/4a949473-c3a5-4081-be57-a23bbaaa098e-kube-api-access-jhk7m\") pod \"heat-cfnapi-84db4bfdbb-5j5lm\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.030353 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-config-data\") pod \"heat-cfnapi-84db4bfdbb-5j5lm\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.030381 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-combined-ca-bundle\") pod \"heat-api-645f5d8c84-9bf84\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " pod="openstack/heat-api-645f5d8c84-9bf84"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.030410 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-public-tls-certs\") pod \"heat-cfnapi-84db4bfdbb-5j5lm\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.030452 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wvhl\" (UniqueName: \"kubernetes.io/projected/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-kube-api-access-6wvhl\") pod \"heat-api-645f5d8c84-9bf84\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " pod="openstack/heat-api-645f5d8c84-9bf84"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.030483 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-config-data-custom\") pod \"heat-api-645f5d8c84-9bf84\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " pod="openstack/heat-api-645f5d8c84-9bf84"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.030527 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-internal-tls-certs\") pod \"heat-api-645f5d8c84-9bf84\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " pod="openstack/heat-api-645f5d8c84-9bf84"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.030619 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-combined-ca-bundle\") pod \"heat-cfnapi-84db4bfdbb-5j5lm\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.030790 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-public-tls-certs\") pod \"heat-api-645f5d8c84-9bf84\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " pod="openstack/heat-api-645f5d8c84-9bf84"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.030831 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-config-data\") pod \"heat-api-645f5d8c84-9bf84\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " pod="openstack/heat-api-645f5d8c84-9bf84"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.030886 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-config-data-custom\") pod \"heat-cfnapi-84db4bfdbb-5j5lm\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.030902 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-internal-tls-certs\") pod \"heat-cfnapi-84db4bfdbb-5j5lm\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.049851 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-combined-ca-bundle\") pod \"heat-api-645f5d8c84-9bf84\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " pod="openstack/heat-api-645f5d8c84-9bf84"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.065244 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6f5fb95b6d-nzndm" podStartSLOduration=4.065219671 podStartE2EDuration="4.065219671s" podCreationTimestamp="2025-09-29 10:00:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:00:26.004561258 +0000 UTC m=+1361.860489286" watchObservedRunningTime="2025-09-29 10:00:26.065219671 +0000 UTC m=+1361.921147699"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.077482 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-public-tls-certs\") pod \"heat-api-645f5d8c84-9bf84\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " pod="openstack/heat-api-645f5d8c84-9bf84"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.079988 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.079969718 podStartE2EDuration="6.079969718s" podCreationTimestamp="2025-09-29 10:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:00:26.047460004 +0000 UTC m=+1361.903388052" watchObservedRunningTime="2025-09-29 10:00:26.079969718 +0000 UTC m=+1361.935897746"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.082763 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wvhl\" (UniqueName: \"kubernetes.io/projected/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-kube-api-access-6wvhl\") pod \"heat-api-645f5d8c84-9bf84\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " pod="openstack/heat-api-645f5d8c84-9bf84"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.084864 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-internal-tls-certs\") pod \"heat-api-645f5d8c84-9bf84\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " pod="openstack/heat-api-645f5d8c84-9bf84"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.085234 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-config-data\") pod \"heat-api-645f5d8c84-9bf84\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " pod="openstack/heat-api-645f5d8c84-9bf84"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.088783 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-config-data-custom\") pod \"heat-api-645f5d8c84-9bf84\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " pod="openstack/heat-api-645f5d8c84-9bf84"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.133567 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhk7m\" (UniqueName: \"kubernetes.io/projected/4a949473-c3a5-4081-be57-a23bbaaa098e-kube-api-access-jhk7m\") pod \"heat-cfnapi-84db4bfdbb-5j5lm\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.133662 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-config-data\") pod \"heat-cfnapi-84db4bfdbb-5j5lm\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.133716 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-public-tls-certs\") pod \"heat-cfnapi-84db4bfdbb-5j5lm\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.133883 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-combined-ca-bundle\") pod \"heat-cfnapi-84db4bfdbb-5j5lm\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.134060 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-config-data-custom\") pod \"heat-cfnapi-84db4bfdbb-5j5lm\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.134082 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-internal-tls-certs\") pod \"heat-cfnapi-84db4bfdbb-5j5lm\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.139876 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-internal-tls-certs\") pod \"heat-cfnapi-84db4bfdbb-5j5lm\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.144739 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-public-tls-certs\") pod \"heat-cfnapi-84db4bfdbb-5j5lm\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.153690 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-config-data-custom\") pod \"heat-cfnapi-84db4bfdbb-5j5lm\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.174466 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-config-data\") pod \"heat-cfnapi-84db4bfdbb-5j5lm\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.177994 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-combined-ca-bundle\") pod \"heat-cfnapi-84db4bfdbb-5j5lm\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.179316 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhk7m\" (UniqueName: \"kubernetes.io/projected/4a949473-c3a5-4081-be57-a23bbaaa098e-kube-api-access-jhk7m\") pod \"heat-cfnapi-84db4bfdbb-5j5lm\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.225856 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-645f5d8c84-9bf84"
Sep 29 10:00:26 crc kubenswrapper[4991]: I0929 10:00:26.684879 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm"
Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.028683 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-85d44b87d9-sb6h8"
Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.057315 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-f95b7cbf8-2z94s" event={"ID":"063f0d1e-24d4-4421-a966-da4be852e431","Type":"ContainerStarted","Data":"b79e57fb6dc691d3bfd02a70d3e90aad23867628e5700d96ac39ed63a207d8f1"}
Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.058837 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-f95b7cbf8-2z94s"
Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.067979 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-config-data\") pod \"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5\" (UID: \"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5\") "
Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.068110 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-combined-ca-bundle\") pod \"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5\" (UID: \"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5\") "
Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.068239 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbrvx\" (UniqueName: \"kubernetes.io/projected/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-kube-api-access-nbrvx\") pod \"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5\" (UID: \"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5\") "
Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.068286 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-config-data-custom\") pod \"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5\" (UID: \"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5\") "
Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.087343 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5" (UID: "d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.098357 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-kube-api-access-nbrvx" (OuterVolumeSpecName: "kube-api-access-nbrvx") pod "d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5" (UID: "d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5"). InnerVolumeSpecName "kube-api-access-nbrvx".
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.103411 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-f95b7cbf8-2z94s" podStartSLOduration=5.103383617 podStartE2EDuration="5.103383617s" podCreationTimestamp="2025-09-29 10:00:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:00:27.078604317 +0000 UTC m=+1362.934532345" watchObservedRunningTime="2025-09-29 10:00:27.103383617 +0000 UTC m=+1362.959311645" Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.118688 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66948fffc6-z52jl" event={"ID":"b10260c4-1d6b-4b35-a1f2-40e0b034a659","Type":"ContainerStarted","Data":"c361ddfe8520192a99be4f084b9f0a8d40fe82ac3cffbc9a0bfab950a7afaea3"} Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.120320 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-66948fffc6-z52jl" Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.128037 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f806a03-4360-4588-bd39-3a58058c26a5","Type":"ContainerStarted","Data":"1c6f9807e1c286c8c157a80e08a3fb7646ec4f505ffdcb29464854440765c1aa"} Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.129926 4991 generic.go:334] "Generic (PLEG): container finished" podID="d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5" containerID="22642117f9312c9fd11e1cec59185adfb82a3efef260de7f91e9010a9a4acf73" exitCode=0 Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.130033 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-85d44b87d9-sb6h8" event={"ID":"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5","Type":"ContainerDied","Data":"22642117f9312c9fd11e1cec59185adfb82a3efef260de7f91e9010a9a4acf73"} Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.130056 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-85d44b87d9-sb6h8" event={"ID":"d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5","Type":"ContainerDied","Data":"5d11a813caa84e69db8b24be6b3a36dd7fe116b6d5550be7d9436014576f2ce9"} Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.130078 4991 scope.go:117] "RemoveContainer" containerID="22642117f9312c9fd11e1cec59185adfb82a3efef260de7f91e9010a9a4acf73" Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.130253 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-85d44b87d9-sb6h8" Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.146417 4991 generic.go:334] "Generic (PLEG): container finished" podID="1426bfad-c3dc-4e90-b43b-7720410b6fbf" containerID="8eb6aa3af545ffc234f76dda2f9fffb196f0aa9c201c710903838a7cf54d8680" exitCode=0 Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.146490 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7dd8c47754-fht6g" event={"ID":"1426bfad-c3dc-4e90-b43b-7720410b6fbf","Type":"ContainerDied","Data":"8eb6aa3af545ffc234f76dda2f9fffb196f0aa9c201c710903838a7cf54d8680"} Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.154976 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-66948fffc6-z52jl" podStartSLOduration=5.154935702 podStartE2EDuration="5.154935702s" podCreationTimestamp="2025-09-29 10:00:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:00:27.137578336 +0000 UTC m=+1362.993506364" watchObservedRunningTime="2025-09-29 10:00:27.154935702 +0000 UTC m=+1363.010863730" Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.166152 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5" (UID: "d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.177037 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.177079 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbrvx\" (UniqueName: \"kubernetes.io/projected/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-kube-api-access-nbrvx\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.177093 4991 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-config-data-custom\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.177481 4991 scope.go:117] "RemoveContainer" containerID="22642117f9312c9fd11e1cec59185adfb82a3efef260de7f91e9010a9a4acf73" Sep 29 10:00:27 crc kubenswrapper[4991]: E0929 10:00:27.181511 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22642117f9312c9fd11e1cec59185adfb82a3efef260de7f91e9010a9a4acf73\": container with ID starting with 22642117f9312c9fd11e1cec59185adfb82a3efef260de7f91e9010a9a4acf73 not found: ID does not exist" containerID="22642117f9312c9fd11e1cec59185adfb82a3efef260de7f91e9010a9a4acf73" Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.181566 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22642117f9312c9fd11e1cec59185adfb82a3efef260de7f91e9010a9a4acf73"} err="failed to get container status \"22642117f9312c9fd11e1cec59185adfb82a3efef260de7f91e9010a9a4acf73\": rpc error: code = NotFound desc = could not find container 
\"22642117f9312c9fd11e1cec59185adfb82a3efef260de7f91e9010a9a4acf73\": container with ID starting with 22642117f9312c9fd11e1cec59185adfb82a3efef260de7f91e9010a9a4acf73 not found: ID does not exist" Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.195778 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-config-data" (OuterVolumeSpecName: "config-data") pod "d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5" (UID: "d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.281625 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.380546 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-645f5d8c84-9bf84"] Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.436817 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-84db4bfdbb-5j5lm"] Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.612243 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.612296 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.660859 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.665856 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.706193 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7dd8c47754-fht6g" Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.789910 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-85d44b87d9-sb6h8"] Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.805641 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-85d44b87d9-sb6h8"] Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.908678 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1426bfad-c3dc-4e90-b43b-7720410b6fbf-config-data\") pod \"1426bfad-c3dc-4e90-b43b-7720410b6fbf\" (UID: \"1426bfad-c3dc-4e90-b43b-7720410b6fbf\") " Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.908762 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l4hg\" (UniqueName: \"kubernetes.io/projected/1426bfad-c3dc-4e90-b43b-7720410b6fbf-kube-api-access-6l4hg\") pod \"1426bfad-c3dc-4e90-b43b-7720410b6fbf\" (UID: \"1426bfad-c3dc-4e90-b43b-7720410b6fbf\") " Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.908961 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1426bfad-c3dc-4e90-b43b-7720410b6fbf-combined-ca-bundle\") pod \"1426bfad-c3dc-4e90-b43b-7720410b6fbf\" (UID: \"1426bfad-c3dc-4e90-b43b-7720410b6fbf\") " Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.908982 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1426bfad-c3dc-4e90-b43b-7720410b6fbf-config-data-custom\") pod \"1426bfad-c3dc-4e90-b43b-7720410b6fbf\" (UID: \"1426bfad-c3dc-4e90-b43b-7720410b6fbf\") " Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.919029 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1426bfad-c3dc-4e90-b43b-7720410b6fbf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1426bfad-c3dc-4e90-b43b-7720410b6fbf" (UID: "1426bfad-c3dc-4e90-b43b-7720410b6fbf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.937289 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1426bfad-c3dc-4e90-b43b-7720410b6fbf-kube-api-access-6l4hg" (OuterVolumeSpecName: "kube-api-access-6l4hg") pod "1426bfad-c3dc-4e90-b43b-7720410b6fbf" (UID: "1426bfad-c3dc-4e90-b43b-7720410b6fbf"). InnerVolumeSpecName "kube-api-access-6l4hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.938370 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:00:27 crc kubenswrapper[4991]: I0929 10:00:27.975135 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1426bfad-c3dc-4e90-b43b-7720410b6fbf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1426bfad-c3dc-4e90-b43b-7720410b6fbf" (UID: "1426bfad-c3dc-4e90-b43b-7720410b6fbf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.014412 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6l4hg\" (UniqueName: \"kubernetes.io/projected/1426bfad-c3dc-4e90-b43b-7720410b6fbf-kube-api-access-6l4hg\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.014448 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1426bfad-c3dc-4e90-b43b-7720410b6fbf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.014457 4991 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1426bfad-c3dc-4e90-b43b-7720410b6fbf-config-data-custom\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.024665 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1426bfad-c3dc-4e90-b43b-7720410b6fbf-config-data" (OuterVolumeSpecName: "config-data") pod "1426bfad-c3dc-4e90-b43b-7720410b6fbf" (UID: "1426bfad-c3dc-4e90-b43b-7720410b6fbf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.115693 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1426bfad-c3dc-4e90-b43b-7720410b6fbf-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.157311 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-645f5d8c84-9bf84" event={"ID":"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87","Type":"ContainerStarted","Data":"a0a7c75294a753f8c07c39a72c9a219139dd1931cfdb11c3299c1322ddf1e811"} Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.157378 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-645f5d8c84-9bf84" event={"ID":"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87","Type":"ContainerStarted","Data":"a33e4fc855f8a1238dc10d4e56796084d0dd97382c04cefc29e433d528d99537"} Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.157422 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-645f5d8c84-9bf84" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.161704 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7dd8c47754-fht6g" event={"ID":"1426bfad-c3dc-4e90-b43b-7720410b6fbf","Type":"ContainerDied","Data":"7855ad387de8fbc3e1c30f53ce86a31d6554ab500dcc2ef6aa83c01895845d36"} Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.161754 4991 scope.go:117] "RemoveContainer" containerID="8eb6aa3af545ffc234f76dda2f9fffb196f0aa9c201c710903838a7cf54d8680" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.161868 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7dd8c47754-fht6g" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.167528 4991 generic.go:334] "Generic (PLEG): container finished" podID="063f0d1e-24d4-4421-a966-da4be852e431" containerID="b79e57fb6dc691d3bfd02a70d3e90aad23867628e5700d96ac39ed63a207d8f1" exitCode=1 Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.167617 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-f95b7cbf8-2z94s" event={"ID":"063f0d1e-24d4-4421-a966-da4be852e431","Type":"ContainerDied","Data":"b79e57fb6dc691d3bfd02a70d3e90aad23867628e5700d96ac39ed63a207d8f1"} Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.168545 4991 scope.go:117] "RemoveContainer" containerID="b79e57fb6dc691d3bfd02a70d3e90aad23867628e5700d96ac39ed63a207d8f1" Sep 29 10:00:28 crc kubenswrapper[4991]: E0929 10:00:28.168901 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-f95b7cbf8-2z94s_openstack(063f0d1e-24d4-4421-a966-da4be852e431)\"" pod="openstack/heat-api-f95b7cbf8-2z94s" podUID="063f0d1e-24d4-4421-a966-da4be852e431" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.171779 4991 generic.go:334] "Generic (PLEG): container finished" podID="b10260c4-1d6b-4b35-a1f2-40e0b034a659" containerID="c361ddfe8520192a99be4f084b9f0a8d40fe82ac3cffbc9a0bfab950a7afaea3" exitCode=1 Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.171841 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66948fffc6-z52jl" event={"ID":"b10260c4-1d6b-4b35-a1f2-40e0b034a659","Type":"ContainerDied","Data":"c361ddfe8520192a99be4f084b9f0a8d40fe82ac3cffbc9a0bfab950a7afaea3"} Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.172638 4991 scope.go:117] "RemoveContainer" containerID="c361ddfe8520192a99be4f084b9f0a8d40fe82ac3cffbc9a0bfab950a7afaea3" Sep 29 10:00:28 crc kubenswrapper[4991]: E0929 10:00:28.172887 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-66948fffc6-z52jl_openstack(b10260c4-1d6b-4b35-a1f2-40e0b034a659)\"" pod="openstack/heat-cfnapi-66948fffc6-z52jl" podUID="b10260c4-1d6b-4b35-a1f2-40e0b034a659" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.176146 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f806a03-4360-4588-bd39-3a58058c26a5","Type":"ContainerStarted","Data":"0bda53294e49ce28fcad7865b34733ce1157765fa758578a26c7bee90e621179"} Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.180059 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm" event={"ID":"4a949473-c3a5-4081-be57-a23bbaaa098e","Type":"ContainerStarted","Data":"8959703c1ae4ae8f5bf1100ea0d579decda39a32617adfbed676c41e7de4b287"} Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.180095 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm" event={"ID":"4a949473-c3a5-4081-be57-a23bbaaa098e","Type":"ContainerStarted","Data":"2442a0ac559c1229212e8bd105d8ce0be9790a9c59508f54710e18183501b9ed"} Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.180110 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 
10:00:28.180128 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.180189 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.202840 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-645f5d8c84-9bf84" podStartSLOduration=3.202819424 podStartE2EDuration="3.202819424s" podCreationTimestamp="2025-09-29 10:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:00:28.185463408 +0000 UTC m=+1364.041391426" watchObservedRunningTime="2025-09-29 10:00:28.202819424 +0000 UTC m=+1364.058747452" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.213385 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-66948fffc6-z52jl" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.231117 4991 scope.go:117] "RemoveContainer" containerID="122ba6811e165fbf3daca4e2dab2dd395c7d51574c1e8432260602c6f772d46c" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.237910 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7dd8c47754-fht6g"] Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.273422 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-f95b7cbf8-2z94s" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.279032 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7dd8c47754-fht6g"] Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.406550 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm" podStartSLOduration=3.406531785 podStartE2EDuration="3.406531785s" podCreationTimestamp="2025-09-29 10:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:00:28.306355924 +0000 UTC m=+1364.162283952" watchObservedRunningTime="2025-09-29 10:00:28.406531785 +0000 UTC m=+1364.262459813" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.427058 4991 scope.go:117] "RemoveContainer" containerID="016b8e1d763275eb6144df3d1e54d6fda36809804451813a47e7d785f61e0d57" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.626227 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.962902 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1426bfad-c3dc-4e90-b43b-7720410b6fbf" path="/var/lib/kubelet/pods/1426bfad-c3dc-4e90-b43b-7720410b6fbf/volumes" Sep 29 10:00:28 crc kubenswrapper[4991]: I0929 10:00:28.965635 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5" path="/var/lib/kubelet/pods/d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5/volumes" Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.208971 4991 scope.go:117] "RemoveContainer" containerID="c361ddfe8520192a99be4f084b9f0a8d40fe82ac3cffbc9a0bfab950a7afaea3" Sep 29 10:00:29 crc kubenswrapper[4991]: E0929 10:00:29.209362 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s 
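
The "Error syncing pod" entries above are the kubelet's restart back-off for crash-looping containers: the delay starts at the 10s visible in the message, doubles on each subsequent failed restart, and caps at 5 minutes (stock kubelet resets the counter once a container has run cleanly for 10 minutes). A quick sketch of that schedule, using documented kubelet defaults rather than anything read from this log beyond the 10s base:

    def crashloop_delay(restart: int, base: float = 10.0, cap: float = 300.0) -> float:
        """Back-off before restart N (0-based): base * 2**N, capped at 5 minutes."""
        return min(base * 2 ** restart, cap)

    # back-off schedule: 10s, 20s, 40s, 80s, 160s, then pinned at 300s
    print([crashloop_delay(n) for n in range(7)])  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]
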
Sep 29 10:00:29 crc kubenswrapper[4991]: E0929 10:00:29.209362 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-66948fffc6-z52jl_openstack(b10260c4-1d6b-4b35-a1f2-40e0b034a659)\"" pod="openstack/heat-cfnapi-66948fffc6-z52jl" podUID="b10260c4-1d6b-4b35-a1f2-40e0b034a659"
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.215141 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f806a03-4360-4588-bd39-3a58058c26a5","Type":"ContainerStarted","Data":"4713d0e1d608d533ddcf462360b13695d78e2be89509b9ce3088cb0aee57cc06"}
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.224499 4991 scope.go:117] "RemoveContainer" containerID="b79e57fb6dc691d3bfd02a70d3e90aad23867628e5700d96ac39ed63a207d8f1"
Sep 29 10:00:29 crc kubenswrapper[4991]: E0929 10:00:29.224746 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-f95b7cbf8-2z94s_openstack(063f0d1e-24d4-4421-a966-da4be852e431)\"" pod="openstack/heat-api-f95b7cbf8-2z94s" podUID="063f0d1e-24d4-4421-a966-da4be852e431"
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.300000 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-wf6zr"]
Sep 29 10:00:29 crc kubenswrapper[4991]: E0929 10:00:29.300487 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1426bfad-c3dc-4e90-b43b-7720410b6fbf" containerName="heat-cfnapi"
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.300505 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="1426bfad-c3dc-4e90-b43b-7720410b6fbf" containerName="heat-cfnapi"
Sep 29 10:00:29 crc kubenswrapper[4991]: E0929 10:00:29.300533 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5" containerName="heat-api"
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.300540 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5" containerName="heat-api"
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.300756 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5d6f6e0-b63a-4a01-a7ce-cc746cea63e5" containerName="heat-api"
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.300786 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="1426bfad-c3dc-4e90-b43b-7720410b6fbf" containerName="heat-cfnapi"
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.301571 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wf6zr"
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.340015 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wf6zr"]
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.387125 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-ctg5h"]
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.388697 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ctg5h"
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.399930 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-ctg5h"]
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.426285 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnvpn\" (UniqueName: \"kubernetes.io/projected/acc26898-60ac-4c45-9225-01223289f940-kube-api-access-rnvpn\") pod \"nova-cell0-db-create-ctg5h\" (UID: \"acc26898-60ac-4c45-9225-01223289f940\") " pod="openstack/nova-cell0-db-create-ctg5h"
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.426364 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj4vb\" (UniqueName: \"kubernetes.io/projected/12e442c8-9bfb-4371-a72e-d5955aa090ff-kube-api-access-tj4vb\") pod \"nova-api-db-create-wf6zr\" (UID: \"12e442c8-9bfb-4371-a72e-d5955aa090ff\") " pod="openstack/nova-api-db-create-wf6zr"
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.476525 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-7vj6k"]
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.478139 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7vj6k"
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.493049 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7vj6k"]
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.529798 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnvpn\" (UniqueName: \"kubernetes.io/projected/acc26898-60ac-4c45-9225-01223289f940-kube-api-access-rnvpn\") pod \"nova-cell0-db-create-ctg5h\" (UID: \"acc26898-60ac-4c45-9225-01223289f940\") " pod="openstack/nova-cell0-db-create-ctg5h"
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.529922 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj4vb\" (UniqueName: \"kubernetes.io/projected/12e442c8-9bfb-4371-a72e-d5955aa090ff-kube-api-access-tj4vb\") pod \"nova-api-db-create-wf6zr\" (UID: \"12e442c8-9bfb-4371-a72e-d5955aa090ff\") " pod="openstack/nova-api-db-create-wf6zr"
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.530030 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq9lb\" (UniqueName: \"kubernetes.io/projected/90361b49-721c-4048-982c-8ed28d5e12e4-kube-api-access-fq9lb\") pod \"nova-cell1-db-create-7vj6k\" (UID: \"90361b49-721c-4048-982c-8ed28d5e12e4\") " pod="openstack/nova-cell1-db-create-7vj6k"
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.555660 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnvpn\" (UniqueName: \"kubernetes.io/projected/acc26898-60ac-4c45-9225-01223289f940-kube-api-access-rnvpn\") pod \"nova-cell0-db-create-ctg5h\" (UID: \"acc26898-60ac-4c45-9225-01223289f940\") " pod="openstack/nova-cell0-db-create-ctg5h"
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.557840 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj4vb\" (UniqueName: \"kubernetes.io/projected/12e442c8-9bfb-4371-a72e-d5955aa090ff-kube-api-access-tj4vb\") pod \"nova-api-db-create-wf6zr\" (UID: \"12e442c8-9bfb-4371-a72e-d5955aa090ff\") " pod="openstack/nova-api-db-create-wf6zr"
Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.632416 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq9lb\" (UniqueName: \"kubernetes.io/projected/90361b49-721c-4048-982c-8ed28d5e12e4-kube-api-access-fq9lb\") pod \"nova-cell1-db-create-7vj6k\" (UID: \"90361b49-721c-4048-982c-8ed28d5e12e4\") " pod="openstack/nova-cell1-db-create-7vj6k" Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.637529 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wf6zr" Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.649657 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq9lb\" (UniqueName: \"kubernetes.io/projected/90361b49-721c-4048-982c-8ed28d5e12e4-kube-api-access-fq9lb\") pod \"nova-cell1-db-create-7vj6k\" (UID: \"90361b49-721c-4048-982c-8ed28d5e12e4\") " pod="openstack/nova-cell1-db-create-7vj6k" Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.713255 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ctg5h" Sep 29 10:00:29 crc kubenswrapper[4991]: I0929 10:00:29.806533 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7vj6k" Sep 29 10:00:30 crc kubenswrapper[4991]: I0929 10:00:30.188904 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wf6zr"] Sep 29 10:00:30 crc kubenswrapper[4991]: I0929 10:00:30.277028 4991 scope.go:117] "RemoveContainer" containerID="c361ddfe8520192a99be4f084b9f0a8d40fe82ac3cffbc9a0bfab950a7afaea3" Sep 29 10:00:30 crc kubenswrapper[4991]: E0929 10:00:30.277555 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-66948fffc6-z52jl_openstack(b10260c4-1d6b-4b35-a1f2-40e0b034a659)\"" pod="openstack/heat-cfnapi-66948fffc6-z52jl" podUID="b10260c4-1d6b-4b35-a1f2-40e0b034a659" Sep 29 10:00:30 crc kubenswrapper[4991]: I0929 10:00:30.278049 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wf6zr" event={"ID":"12e442c8-9bfb-4371-a72e-d5955aa090ff","Type":"ContainerStarted","Data":"8ec47ddd45cf09017bcca2ddc05f38260d9d7c1b12c828ac4adf7407e107a67f"} Sep 29 10:00:30 crc kubenswrapper[4991]: I0929 10:00:30.278123 4991 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 29 10:00:30 crc kubenswrapper[4991]: I0929 10:00:30.278134 4991 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 29 10:00:30 crc kubenswrapper[4991]: I0929 10:00:30.280120 4991 scope.go:117] "RemoveContainer" containerID="b79e57fb6dc691d3bfd02a70d3e90aad23867628e5700d96ac39ed63a207d8f1" Sep 29 10:00:30 crc kubenswrapper[4991]: E0929 10:00:30.280396 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-f95b7cbf8-2z94s_openstack(063f0d1e-24d4-4421-a966-da4be852e431)\"" pod="openstack/heat-api-f95b7cbf8-2z94s" podUID="063f0d1e-24d4-4421-a966-da4be852e431" Sep 29 10:00:30 crc kubenswrapper[4991]: W0929 10:00:30.539176 4991 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacc26898_60ac_4c45_9225_01223289f940.slice/crio-89fda8d6637bc1f0bc17330f128427583d6e24c6fc413d5efeeb3c0f913cfe7e WatchSource:0}: Error finding container 89fda8d6637bc1f0bc17330f128427583d6e24c6fc413d5efeeb3c0f913cfe7e: Status 404 returned error can't find the container with id 89fda8d6637bc1f0bc17330f128427583d6e24c6fc413d5efeeb3c0f913cfe7e Sep 29 10:00:30 crc kubenswrapper[4991]: I0929 10:00:30.557817 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-ctg5h"] Sep 29 10:00:30 crc kubenswrapper[4991]: I0929 10:00:30.786267 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7vj6k"] Sep 29 10:00:30 crc kubenswrapper[4991]: W0929 10:00:30.811287 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90361b49_721c_4048_982c_8ed28d5e12e4.slice/crio-99032991046e3b10af42f787df7f0a2da0ef63f72d3be9d7bb566d5b410dbded WatchSource:0}: Error finding container 99032991046e3b10af42f787df7f0a2da0ef63f72d3be9d7bb566d5b410dbded: Status 404 returned error can't find the container with id 99032991046e3b10af42f787df7f0a2da0ef63f72d3be9d7bb566d5b410dbded Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:30.999423 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:30.999467 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.056497 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.062733 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.305262 4991 generic.go:334] "Generic (PLEG): container finished" podID="acc26898-60ac-4c45-9225-01223289f940" containerID="17a33c998709d5646413299975c5067ad5059866f91f86c3c9efd7da470e461d" exitCode=0 Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.305341 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ctg5h" event={"ID":"acc26898-60ac-4c45-9225-01223289f940","Type":"ContainerDied","Data":"17a33c998709d5646413299975c5067ad5059866f91f86c3c9efd7da470e461d"} Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.305374 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ctg5h" event={"ID":"acc26898-60ac-4c45-9225-01223289f940","Type":"ContainerStarted","Data":"89fda8d6637bc1f0bc17330f128427583d6e24c6fc413d5efeeb3c0f913cfe7e"} Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.319390 4991 generic.go:334] "Generic (PLEG): container finished" podID="12e442c8-9bfb-4371-a72e-d5955aa090ff" containerID="6ec8d2a158fd054320043e19235f11ebeae29cf93c59628fc25b9d1f94010966" exitCode=0 Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.319544 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wf6zr" event={"ID":"12e442c8-9bfb-4371-a72e-d5955aa090ff","Type":"ContainerDied","Data":"6ec8d2a158fd054320043e19235f11ebeae29cf93c59628fc25b9d1f94010966"} Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.339209 4991 generic.go:334] "Generic 
(PLEG): container finished" podID="90361b49-721c-4048-982c-8ed28d5e12e4" containerID="1789c205c8ab9539bea6c9f107bec91dca9f24c9039f9b7317d3171f9b107402" exitCode=0 Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.339532 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7vj6k" event={"ID":"90361b49-721c-4048-982c-8ed28d5e12e4","Type":"ContainerDied","Data":"1789c205c8ab9539bea6c9f107bec91dca9f24c9039f9b7317d3171f9b107402"} Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.339682 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7vj6k" event={"ID":"90361b49-721c-4048-982c-8ed28d5e12e4","Type":"ContainerStarted","Data":"99032991046e3b10af42f787df7f0a2da0ef63f72d3be9d7bb566d5b410dbded"} Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.382452 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5f806a03-4360-4588-bd39-3a58058c26a5" containerName="ceilometer-central-agent" containerID="cri-o://1c6f9807e1c286c8c157a80e08a3fb7646ec4f505ffdcb29464854440765c1aa" gracePeriod=30 Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.383140 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5f806a03-4360-4588-bd39-3a58058c26a5" containerName="proxy-httpd" containerID="cri-o://4558fcb7f81ac2643601e353fd7028e5724c77735b170659af254f4cefff02ee" gracePeriod=30 Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.383197 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5f806a03-4360-4588-bd39-3a58058c26a5" containerName="sg-core" containerID="cri-o://4713d0e1d608d533ddcf462360b13695d78e2be89509b9ce3088cb0aee57cc06" gracePeriod=30 Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.383238 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5f806a03-4360-4588-bd39-3a58058c26a5" containerName="ceilometer-notification-agent" containerID="cri-o://0bda53294e49ce28fcad7865b34733ce1157765fa758578a26c7bee90e621179" gracePeriod=30 Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.383260 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f806a03-4360-4588-bd39-3a58058c26a5","Type":"ContainerStarted","Data":"4558fcb7f81ac2643601e353fd7028e5724c77735b170659af254f4cefff02ee"} Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.383333 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.383346 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.383371 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.415744 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.209576892 podStartE2EDuration="8.415722931s" podCreationTimestamp="2025-09-29 10:00:23 +0000 UTC" firstStartedPulling="2025-09-29 10:00:25.034501389 +0000 UTC m=+1360.890429427" lastFinishedPulling="2025-09-29 10:00:30.240647438 +0000 UTC m=+1366.096575466" observedRunningTime="2025-09-29 10:00:31.402207616 +0000 UTC m=+1367.258135654" 
watchObservedRunningTime="2025-09-29 10:00:31.415722931 +0000 UTC m=+1367.271650959" Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.589187 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.589351 4991 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.593660 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Sep 29 10:00:31 crc kubenswrapper[4991]: E0929 10:00:31.834892 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f806a03_4360_4588_bd39_3a58058c26a5.slice/crio-0bda53294e49ce28fcad7865b34733ce1157765fa758578a26c7bee90e621179.scope\": RecentStats: unable to find data in memory cache]" Sep 29 10:00:31 crc kubenswrapper[4991]: I0929 10:00:31.997527 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:00:32 crc kubenswrapper[4991]: I0929 10:00:32.061697 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-sw5l5"] Sep 29 10:00:32 crc kubenswrapper[4991]: I0929 10:00:32.062448 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" podUID="0b587587-d964-446d-9354-d1556ce80381" containerName="dnsmasq-dns" containerID="cri-o://5dd6bcac4175475de3c1a3ffdeca1e534e905ec85e16ffec7592d36b8dee0549" gracePeriod=10 Sep 29 10:00:32 crc kubenswrapper[4991]: I0929 10:00:32.401109 4991 generic.go:334] "Generic (PLEG): container finished" podID="0b587587-d964-446d-9354-d1556ce80381" containerID="5dd6bcac4175475de3c1a3ffdeca1e534e905ec85e16ffec7592d36b8dee0549" exitCode=0 Sep 29 10:00:32 crc kubenswrapper[4991]: I0929 10:00:32.401197 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" event={"ID":"0b587587-d964-446d-9354-d1556ce80381","Type":"ContainerDied","Data":"5dd6bcac4175475de3c1a3ffdeca1e534e905ec85e16ffec7592d36b8dee0549"} Sep 29 10:00:32 crc kubenswrapper[4991]: I0929 10:00:32.412025 4991 generic.go:334] "Generic (PLEG): container finished" podID="5f806a03-4360-4588-bd39-3a58058c26a5" containerID="4558fcb7f81ac2643601e353fd7028e5724c77735b170659af254f4cefff02ee" exitCode=0 Sep 29 10:00:32 crc kubenswrapper[4991]: I0929 10:00:32.412056 4991 generic.go:334] "Generic (PLEG): container finished" podID="5f806a03-4360-4588-bd39-3a58058c26a5" containerID="4713d0e1d608d533ddcf462360b13695d78e2be89509b9ce3088cb0aee57cc06" exitCode=2 Sep 29 10:00:32 crc kubenswrapper[4991]: I0929 10:00:32.412064 4991 generic.go:334] "Generic (PLEG): container finished" podID="5f806a03-4360-4588-bd39-3a58058c26a5" containerID="0bda53294e49ce28fcad7865b34733ce1157765fa758578a26c7bee90e621179" exitCode=0 Sep 29 10:00:32 crc kubenswrapper[4991]: I0929 10:00:32.412275 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f806a03-4360-4588-bd39-3a58058c26a5","Type":"ContainerDied","Data":"4558fcb7f81ac2643601e353fd7028e5724c77735b170659af254f4cefff02ee"} Sep 29 10:00:32 crc kubenswrapper[4991]: I0929 10:00:32.412305 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
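
The ceilometer-0 "Observed pod startup duration" entry earlier in this section is the one entry here with real pull timestamps, and its numbers are self-consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), which is why pods with zero-value pull times report an SLO duration equal to E2E. A worked check with the values copied from that entry (nanoseconds truncated to microseconds, which is all Python's datetime carries, hence the last-digit drift versus the logged 3.209576892):

    from datetime import datetime

    FMT = "%Y-%m-%d %H:%M:%S.%f %z"

    def ts(s: str) -> datetime:
        # The log carries nanoseconds; datetime only holds microseconds, so truncate.
        return datetime.strptime(s[:26] + " +0000", FMT)

    created    = ts("2025-09-29 10:00:23.000000000")  # podCreationTimestamp (whole second in the log)
    running    = ts("2025-09-29 10:00:31.415722931")  # watchObservedRunningTime
    pull_start = ts("2025-09-29 10:00:25.034501389")  # firstStartedPulling
    pull_end   = ts("2025-09-29 10:00:30.240647438")  # lastFinishedPulling

    e2e = (running - created).total_seconds()             # ~8.415723  (podStartE2EDuration="8.415722931s")
    slo = e2e - (pull_end - pull_start).total_seconds()   # ~3.209577  (podStartSLOduration=3.209576892)
    print(f"E2E {e2e:.6f}s, SLO {slo:.6f}s")
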
event={"ID":"5f806a03-4360-4588-bd39-3a58058c26a5","Type":"ContainerDied","Data":"4713d0e1d608d533ddcf462360b13695d78e2be89509b9ce3088cb0aee57cc06"} Sep 29 10:00:32 crc kubenswrapper[4991]: I0929 10:00:32.412318 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f806a03-4360-4588-bd39-3a58058c26a5","Type":"ContainerDied","Data":"0bda53294e49ce28fcad7865b34733ce1157765fa758578a26c7bee90e621179"} Sep 29 10:00:32 crc kubenswrapper[4991]: I0929 10:00:32.955542 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.062649 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77s8h\" (UniqueName: \"kubernetes.io/projected/0b587587-d964-446d-9354-d1556ce80381-kube-api-access-77s8h\") pod \"0b587587-d964-446d-9354-d1556ce80381\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.062715 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-ovsdbserver-nb\") pod \"0b587587-d964-446d-9354-d1556ce80381\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.062776 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-ovsdbserver-sb\") pod \"0b587587-d964-446d-9354-d1556ce80381\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.062821 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-config\") pod \"0b587587-d964-446d-9354-d1556ce80381\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.062906 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-dns-swift-storage-0\") pod \"0b587587-d964-446d-9354-d1556ce80381\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.063319 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-dns-svc\") pod \"0b587587-d964-446d-9354-d1556ce80381\" (UID: \"0b587587-d964-446d-9354-d1556ce80381\") " Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.077477 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b587587-d964-446d-9354-d1556ce80381-kube-api-access-77s8h" (OuterVolumeSpecName: "kube-api-access-77s8h") pod "0b587587-d964-446d-9354-d1556ce80381" (UID: "0b587587-d964-446d-9354-d1556ce80381"). InnerVolumeSpecName "kube-api-access-77s8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.169324 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77s8h\" (UniqueName: \"kubernetes.io/projected/0b587587-d964-446d-9354-d1556ce80381-kube-api-access-77s8h\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.196293 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0b587587-d964-446d-9354-d1556ce80381" (UID: "0b587587-d964-446d-9354-d1556ce80381"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.281003 4991 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.302491 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0b587587-d964-446d-9354-d1556ce80381" (UID: "0b587587-d964-446d-9354-d1556ce80381"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.305211 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0b587587-d964-446d-9354-d1556ce80381" (UID: "0b587587-d964-446d-9354-d1556ce80381"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.378635 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-config" (OuterVolumeSpecName: "config") pod "0b587587-d964-446d-9354-d1556ce80381" (UID: "0b587587-d964-446d-9354-d1556ce80381"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.379026 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0b587587-d964-446d-9354-d1556ce80381" (UID: "0b587587-d964-446d-9354-d1556ce80381"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.389507 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.389536 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-config\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.389545 4991 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.389555 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b587587-d964-446d-9354-d1556ce80381-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.406754 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ctg5h" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.414974 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7vj6k" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.450982 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" event={"ID":"0b587587-d964-446d-9354-d1556ce80381","Type":"ContainerDied","Data":"ef5a822aa3b68256f3c182225701833ccda6fb5534bacad67ab20ce2fbcb67f7"} Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.451065 4991 scope.go:117] "RemoveContainer" containerID="5dd6bcac4175475de3c1a3ffdeca1e534e905ec85e16ffec7592d36b8dee0549" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.451324 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-sw5l5" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.459755 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7vj6k" event={"ID":"90361b49-721c-4048-982c-8ed28d5e12e4","Type":"ContainerDied","Data":"99032991046e3b10af42f787df7f0a2da0ef63f72d3be9d7bb566d5b410dbded"} Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.459804 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99032991046e3b10af42f787df7f0a2da0ef63f72d3be9d7bb566d5b410dbded" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.459878 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7vj6k" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.472897 4991 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.472930 4991 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.474002 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-ctg5h" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.474164 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ctg5h" event={"ID":"acc26898-60ac-4c45-9225-01223289f940","Type":"ContainerDied","Data":"89fda8d6637bc1f0bc17330f128427583d6e24c6fc413d5efeeb3c0f913cfe7e"} Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.474189 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89fda8d6637bc1f0bc17330f128427583d6e24c6fc413d5efeeb3c0f913cfe7e" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.489032 4991 scope.go:117] "RemoveContainer" containerID="4bbcd8eebd77311eab1ef37c76c35f5b03d64fb36820869510812e42b1019ef3" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.490617 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnvpn\" (UniqueName: \"kubernetes.io/projected/acc26898-60ac-4c45-9225-01223289f940-kube-api-access-rnvpn\") pod \"acc26898-60ac-4c45-9225-01223289f940\" (UID: \"acc26898-60ac-4c45-9225-01223289f940\") " Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.490768 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq9lb\" (UniqueName: \"kubernetes.io/projected/90361b49-721c-4048-982c-8ed28d5e12e4-kube-api-access-fq9lb\") pod \"90361b49-721c-4048-982c-8ed28d5e12e4\" (UID: \"90361b49-721c-4048-982c-8ed28d5e12e4\") " Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.498676 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acc26898-60ac-4c45-9225-01223289f940-kube-api-access-rnvpn" (OuterVolumeSpecName: "kube-api-access-rnvpn") pod "acc26898-60ac-4c45-9225-01223289f940" (UID: "acc26898-60ac-4c45-9225-01223289f940"). InnerVolumeSpecName "kube-api-access-rnvpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.502291 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnvpn\" (UniqueName: \"kubernetes.io/projected/acc26898-60ac-4c45-9225-01223289f940-kube-api-access-rnvpn\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.513555 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90361b49-721c-4048-982c-8ed28d5e12e4-kube-api-access-fq9lb" (OuterVolumeSpecName: "kube-api-access-fq9lb") pod "90361b49-721c-4048-982c-8ed28d5e12e4" (UID: "90361b49-721c-4048-982c-8ed28d5e12e4"). InnerVolumeSpecName "kube-api-access-fq9lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.527245 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-sw5l5"] Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.546384 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-sw5l5"] Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.604660 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq9lb\" (UniqueName: \"kubernetes.io/projected/90361b49-721c-4048-982c-8ed28d5e12e4-kube-api-access-fq9lb\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.690684 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-wf6zr" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.808549 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj4vb\" (UniqueName: \"kubernetes.io/projected/12e442c8-9bfb-4371-a72e-d5955aa090ff-kube-api-access-tj4vb\") pod \"12e442c8-9bfb-4371-a72e-d5955aa090ff\" (UID: \"12e442c8-9bfb-4371-a72e-d5955aa090ff\") " Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.817233 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12e442c8-9bfb-4371-a72e-d5955aa090ff-kube-api-access-tj4vb" (OuterVolumeSpecName: "kube-api-access-tj4vb") pod "12e442c8-9bfb-4371-a72e-d5955aa090ff" (UID: "12e442c8-9bfb-4371-a72e-d5955aa090ff"). InnerVolumeSpecName "kube-api-access-tj4vb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:00:33 crc kubenswrapper[4991]: I0929 10:00:33.911474 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj4vb\" (UniqueName: \"kubernetes.io/projected/12e442c8-9bfb-4371-a72e-d5955aa090ff-kube-api-access-tj4vb\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:34 crc kubenswrapper[4991]: I0929 10:00:34.489286 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wf6zr" event={"ID":"12e442c8-9bfb-4371-a72e-d5955aa090ff","Type":"ContainerDied","Data":"8ec47ddd45cf09017bcca2ddc05f38260d9d7c1b12c828ac4adf7407e107a67f"} Sep 29 10:00:34 crc kubenswrapper[4991]: I0929 10:00:34.489500 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ec47ddd45cf09017bcca2ddc05f38260d9d7c1b12c828ac4adf7407e107a67f" Sep 29 10:00:34 crc kubenswrapper[4991]: I0929 10:00:34.489345 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-wf6zr" Sep 29 10:00:34 crc kubenswrapper[4991]: I0929 10:00:34.775064 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Sep 29 10:00:34 crc kubenswrapper[4991]: I0929 10:00:34.775198 4991 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 29 10:00:34 crc kubenswrapper[4991]: I0929 10:00:34.814897 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Sep 29 10:00:34 crc kubenswrapper[4991]: I0929 10:00:34.957183 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b587587-d964-446d-9354-d1556ce80381" path="/var/lib/kubelet/pods/0b587587-d964-446d-9354-d1556ce80381/volumes" Sep 29 10:00:36 crc kubenswrapper[4991]: I0929 10:00:36.940137 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6cf6b855c9-q9s4l" Sep 29 10:00:37 crc kubenswrapper[4991]: I0929 10:00:37.917895 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-645f5d8c84-9bf84" Sep 29 10:00:37 crc kubenswrapper[4991]: I0929 10:00:37.986758 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-f95b7cbf8-2z94s"] Sep 29 10:00:38 crc kubenswrapper[4991]: I0929 10:00:38.111872 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm" Sep 29 10:00:38 crc kubenswrapper[4991]: I0929 10:00:38.200142 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-66948fffc6-z52jl"] Sep 29 10:00:38 crc kubenswrapper[4991]: I0929 10:00:38.748349 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-f95b7cbf8-2z94s" Sep 29 10:00:38 crc kubenswrapper[4991]: I0929 10:00:38.849618 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/063f0d1e-24d4-4421-a966-da4be852e431-config-data-custom\") pod \"063f0d1e-24d4-4421-a966-da4be852e431\" (UID: \"063f0d1e-24d4-4421-a966-da4be852e431\") " Sep 29 10:00:38 crc kubenswrapper[4991]: I0929 10:00:38.849710 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063f0d1e-24d4-4421-a966-da4be852e431-config-data\") pod \"063f0d1e-24d4-4421-a966-da4be852e431\" (UID: \"063f0d1e-24d4-4421-a966-da4be852e431\") " Sep 29 10:00:38 crc kubenswrapper[4991]: I0929 10:00:38.849868 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063f0d1e-24d4-4421-a966-da4be852e431-combined-ca-bundle\") pod \"063f0d1e-24d4-4421-a966-da4be852e431\" (UID: \"063f0d1e-24d4-4421-a966-da4be852e431\") " Sep 29 10:00:38 crc kubenswrapper[4991]: I0929 10:00:38.849902 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vntm9\" (UniqueName: \"kubernetes.io/projected/063f0d1e-24d4-4421-a966-da4be852e431-kube-api-access-vntm9\") pod \"063f0d1e-24d4-4421-a966-da4be852e431\" (UID: \"063f0d1e-24d4-4421-a966-da4be852e431\") " Sep 29 10:00:38 crc kubenswrapper[4991]: I0929 10:00:38.858336 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/063f0d1e-24d4-4421-a966-da4be852e431-kube-api-access-vntm9" (OuterVolumeSpecName: "kube-api-access-vntm9") pod "063f0d1e-24d4-4421-a966-da4be852e431" (UID: "063f0d1e-24d4-4421-a966-da4be852e431"). InnerVolumeSpecName "kube-api-access-vntm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:00:38 crc kubenswrapper[4991]: I0929 10:00:38.860711 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063f0d1e-24d4-4421-a966-da4be852e431-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "063f0d1e-24d4-4421-a966-da4be852e431" (UID: "063f0d1e-24d4-4421-a966-da4be852e431"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:38 crc kubenswrapper[4991]: I0929 10:00:38.897057 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063f0d1e-24d4-4421-a966-da4be852e431-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "063f0d1e-24d4-4421-a966-da4be852e431" (UID: "063f0d1e-24d4-4421-a966-da4be852e431"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:38 crc kubenswrapper[4991]: I0929 10:00:38.929758 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063f0d1e-24d4-4421-a966-da4be852e431-config-data" (OuterVolumeSpecName: "config-data") pod "063f0d1e-24d4-4421-a966-da4be852e431" (UID: "063f0d1e-24d4-4421-a966-da4be852e431"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:38 crc kubenswrapper[4991]: I0929 10:00:38.960489 4991 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/063f0d1e-24d4-4421-a966-da4be852e431-config-data-custom\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:38 crc kubenswrapper[4991]: I0929 10:00:38.960519 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063f0d1e-24d4-4421-a966-da4be852e431-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:38 crc kubenswrapper[4991]: I0929 10:00:38.960530 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063f0d1e-24d4-4421-a966-da4be852e431-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:38 crc kubenswrapper[4991]: I0929 10:00:38.960539 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vntm9\" (UniqueName: \"kubernetes.io/projected/063f0d1e-24d4-4421-a966-da4be852e431-kube-api-access-vntm9\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:38 crc kubenswrapper[4991]: I0929 10:00:38.969832 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66948fffc6-z52jl" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.060921 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10260c4-1d6b-4b35-a1f2-40e0b034a659-combined-ca-bundle\") pod \"b10260c4-1d6b-4b35-a1f2-40e0b034a659\" (UID: \"b10260c4-1d6b-4b35-a1f2-40e0b034a659\") " Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.061358 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10260c4-1d6b-4b35-a1f2-40e0b034a659-config-data\") pod \"b10260c4-1d6b-4b35-a1f2-40e0b034a659\" (UID: \"b10260c4-1d6b-4b35-a1f2-40e0b034a659\") " Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.061389 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7dq7\" (UniqueName: \"kubernetes.io/projected/b10260c4-1d6b-4b35-a1f2-40e0b034a659-kube-api-access-m7dq7\") pod \"b10260c4-1d6b-4b35-a1f2-40e0b034a659\" (UID: \"b10260c4-1d6b-4b35-a1f2-40e0b034a659\") " Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.061434 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b10260c4-1d6b-4b35-a1f2-40e0b034a659-config-data-custom\") pod \"b10260c4-1d6b-4b35-a1f2-40e0b034a659\" (UID: \"b10260c4-1d6b-4b35-a1f2-40e0b034a659\") " Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.065427 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10260c4-1d6b-4b35-a1f2-40e0b034a659-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b10260c4-1d6b-4b35-a1f2-40e0b034a659" (UID: "b10260c4-1d6b-4b35-a1f2-40e0b034a659"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.066794 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b10260c4-1d6b-4b35-a1f2-40e0b034a659-kube-api-access-m7dq7" (OuterVolumeSpecName: "kube-api-access-m7dq7") pod "b10260c4-1d6b-4b35-a1f2-40e0b034a659" (UID: "b10260c4-1d6b-4b35-a1f2-40e0b034a659"). InnerVolumeSpecName "kube-api-access-m7dq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.096553 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10260c4-1d6b-4b35-a1f2-40e0b034a659-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b10260c4-1d6b-4b35-a1f2-40e0b034a659" (UID: "b10260c4-1d6b-4b35-a1f2-40e0b034a659"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.135986 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10260c4-1d6b-4b35-a1f2-40e0b034a659-config-data" (OuterVolumeSpecName: "config-data") pod "b10260c4-1d6b-4b35-a1f2-40e0b034a659" (UID: "b10260c4-1d6b-4b35-a1f2-40e0b034a659"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.163937 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b10260c4-1d6b-4b35-a1f2-40e0b034a659-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.164178 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b10260c4-1d6b-4b35-a1f2-40e0b034a659-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.164299 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7dq7\" (UniqueName: \"kubernetes.io/projected/b10260c4-1d6b-4b35-a1f2-40e0b034a659-kube-api-access-m7dq7\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.164383 4991 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b10260c4-1d6b-4b35-a1f2-40e0b034a659-config-data-custom\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.465197 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a89b-account-create-8qtf5"] Sep 29 10:00:39 crc kubenswrapper[4991]: E0929 10:00:39.466069 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b10260c4-1d6b-4b35-a1f2-40e0b034a659" containerName="heat-cfnapi" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.466197 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10260c4-1d6b-4b35-a1f2-40e0b034a659" containerName="heat-cfnapi" Sep 29 10:00:39 crc kubenswrapper[4991]: E0929 10:00:39.466292 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b587587-d964-446d-9354-d1556ce80381" containerName="dnsmasq-dns" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.466360 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b587587-d964-446d-9354-d1556ce80381" containerName="dnsmasq-dns" Sep 29 10:00:39 crc kubenswrapper[4991]: E0929 10:00:39.466433 4991 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="acc26898-60ac-4c45-9225-01223289f940" containerName="mariadb-database-create" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.466503 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="acc26898-60ac-4c45-9225-01223289f940" containerName="mariadb-database-create" Sep 29 10:00:39 crc kubenswrapper[4991]: E0929 10:00:39.466588 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="063f0d1e-24d4-4421-a966-da4be852e431" containerName="heat-api" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.466656 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="063f0d1e-24d4-4421-a966-da4be852e431" containerName="heat-api" Sep 29 10:00:39 crc kubenswrapper[4991]: E0929 10:00:39.466741 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90361b49-721c-4048-982c-8ed28d5e12e4" containerName="mariadb-database-create" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.466811 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="90361b49-721c-4048-982c-8ed28d5e12e4" containerName="mariadb-database-create" Sep 29 10:00:39 crc kubenswrapper[4991]: E0929 10:00:39.466903 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b587587-d964-446d-9354-d1556ce80381" containerName="init" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.466988 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b587587-d964-446d-9354-d1556ce80381" containerName="init" Sep 29 10:00:39 crc kubenswrapper[4991]: E0929 10:00:39.467073 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e442c8-9bfb-4371-a72e-d5955aa090ff" containerName="mariadb-database-create" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.467137 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e442c8-9bfb-4371-a72e-d5955aa090ff" containerName="mariadb-database-create" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.467475 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="b10260c4-1d6b-4b35-a1f2-40e0b034a659" containerName="heat-cfnapi" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.467575 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="acc26898-60ac-4c45-9225-01223289f940" containerName="mariadb-database-create" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.467661 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="063f0d1e-24d4-4421-a966-da4be852e431" containerName="heat-api" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.467730 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="b10260c4-1d6b-4b35-a1f2-40e0b034a659" containerName="heat-cfnapi" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.467810 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="12e442c8-9bfb-4371-a72e-d5955aa090ff" containerName="mariadb-database-create" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.467889 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b587587-d964-446d-9354-d1556ce80381" containerName="dnsmasq-dns" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.467974 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="90361b49-721c-4048-982c-8ed28d5e12e4" containerName="mariadb-database-create" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.469004 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a89b-account-create-8qtf5" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.471211 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.478534 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a89b-account-create-8qtf5"] Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.553293 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-f95b7cbf8-2z94s" event={"ID":"063f0d1e-24d4-4421-a966-da4be852e431","Type":"ContainerDied","Data":"36c7159d1fe6afb30dc90154654e72f1a7913bb52141fb1d150f1ad9636eec31"} Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.553366 4991 scope.go:117] "RemoveContainer" containerID="b79e57fb6dc691d3bfd02a70d3e90aad23867628e5700d96ac39ed63a207d8f1" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.553505 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-f95b7cbf8-2z94s" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.559485 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66948fffc6-z52jl" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.559405 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66948fffc6-z52jl" event={"ID":"b10260c4-1d6b-4b35-a1f2-40e0b034a659","Type":"ContainerDied","Data":"618ff27a1fac1e0b59b8e29cfc434bfd1fb7bca89235d600f9d14d981c824f9f"} Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.575726 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br7br\" (UniqueName: \"kubernetes.io/projected/5c526f97-b986-4101-be9d-1bc70bbc93f5-kube-api-access-br7br\") pod \"nova-api-a89b-account-create-8qtf5\" (UID: \"5c526f97-b986-4101-be9d-1bc70bbc93f5\") " pod="openstack/nova-api-a89b-account-create-8qtf5" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.584510 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-f95b7cbf8-2z94s"] Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.596925 4991 scope.go:117] "RemoveContainer" containerID="c361ddfe8520192a99be4f084b9f0a8d40fe82ac3cffbc9a0bfab950a7afaea3" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.599351 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-f95b7cbf8-2z94s"] Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.609683 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-66948fffc6-z52jl"] Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.643359 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-66948fffc6-z52jl"] Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.670881 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-6d58-account-create-l5cjc"] Sep 29 10:00:39 crc kubenswrapper[4991]: E0929 10:00:39.671439 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b10260c4-1d6b-4b35-a1f2-40e0b034a659" containerName="heat-cfnapi" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.671458 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10260c4-1d6b-4b35-a1f2-40e0b034a659" containerName="heat-cfnapi" Sep 29 10:00:39 crc kubenswrapper[4991]: E0929 10:00:39.671492 4991 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="063f0d1e-24d4-4421-a966-da4be852e431" containerName="heat-api" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.671499 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="063f0d1e-24d4-4421-a966-da4be852e431" containerName="heat-api" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.671772 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="063f0d1e-24d4-4421-a966-da4be852e431" containerName="heat-api" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.672719 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6d58-account-create-l5cjc" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.677555 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br7br\" (UniqueName: \"kubernetes.io/projected/5c526f97-b986-4101-be9d-1bc70bbc93f5-kube-api-access-br7br\") pod \"nova-api-a89b-account-create-8qtf5\" (UID: \"5c526f97-b986-4101-be9d-1bc70bbc93f5\") " pod="openstack/nova-api-a89b-account-create-8qtf5" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.677655 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.698669 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6d58-account-create-l5cjc"] Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.711634 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br7br\" (UniqueName: \"kubernetes.io/projected/5c526f97-b986-4101-be9d-1bc70bbc93f5-kube-api-access-br7br\") pod \"nova-api-a89b-account-create-8qtf5\" (UID: \"5c526f97-b986-4101-be9d-1bc70bbc93f5\") " pod="openstack/nova-api-a89b-account-create-8qtf5" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.780034 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8b9s\" (UniqueName: \"kubernetes.io/projected/b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e-kube-api-access-m8b9s\") pod \"nova-cell1-6d58-account-create-l5cjc\" (UID: \"b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e\") " pod="openstack/nova-cell1-6d58-account-create-l5cjc" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.807603 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a89b-account-create-8qtf5" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.882598 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8b9s\" (UniqueName: \"kubernetes.io/projected/b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e-kube-api-access-m8b9s\") pod \"nova-cell1-6d58-account-create-l5cjc\" (UID: \"b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e\") " pod="openstack/nova-cell1-6d58-account-create-l5cjc" Sep 29 10:00:39 crc kubenswrapper[4991]: I0929 10:00:39.907771 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8b9s\" (UniqueName: \"kubernetes.io/projected/b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e-kube-api-access-m8b9s\") pod \"nova-cell1-6d58-account-create-l5cjc\" (UID: \"b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e\") " pod="openstack/nova-cell1-6d58-account-create-l5cjc" Sep 29 10:00:40 crc kubenswrapper[4991]: I0929 10:00:40.008712 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6d58-account-create-l5cjc" Sep 29 10:00:40 crc kubenswrapper[4991]: I0929 10:00:40.329469 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a89b-account-create-8qtf5"] Sep 29 10:00:40 crc kubenswrapper[4991]: I0929 10:00:40.560598 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6d58-account-create-l5cjc"] Sep 29 10:00:40 crc kubenswrapper[4991]: W0929 10:00:40.563333 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0a2fea9_1578_4a8a_b9b8_253d3bad7b8e.slice/crio-742a69e1740f6480d3f3f45861f2aff9c9d7e356bde263aaa3c673d9414a34b5 WatchSource:0}: Error finding container 742a69e1740f6480d3f3f45861f2aff9c9d7e356bde263aaa3c673d9414a34b5: Status 404 returned error can't find the container with id 742a69e1740f6480d3f3f45861f2aff9c9d7e356bde263aaa3c673d9414a34b5 Sep 29 10:00:40 crc kubenswrapper[4991]: I0929 10:00:40.578867 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a89b-account-create-8qtf5" event={"ID":"5c526f97-b986-4101-be9d-1bc70bbc93f5","Type":"ContainerStarted","Data":"c1624f3d5e0f874adaa7714fca14eb2e4d7db4fc536775b99ea60cab2da52984"} Sep 29 10:00:40 crc kubenswrapper[4991]: I0929 10:00:40.578916 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a89b-account-create-8qtf5" event={"ID":"5c526f97-b986-4101-be9d-1bc70bbc93f5","Type":"ContainerStarted","Data":"ad2203e26c1bad6820f4481664d636dae4ee0cf9ee292ccbf51da14267c0a5b9"} Sep 29 10:00:40 crc kubenswrapper[4991]: I0929 10:00:40.605054 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-a89b-account-create-8qtf5" podStartSLOduration=1.605032807 podStartE2EDuration="1.605032807s" podCreationTimestamp="2025-09-29 10:00:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:00:40.592541739 +0000 UTC m=+1376.448469767" watchObservedRunningTime="2025-09-29 10:00:40.605032807 +0000 UTC m=+1376.460960835" Sep 29 10:00:40 crc kubenswrapper[4991]: I0929 10:00:40.941482 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="063f0d1e-24d4-4421-a966-da4be852e431" path="/var/lib/kubelet/pods/063f0d1e-24d4-4421-a966-da4be852e431/volumes" Sep 29 10:00:40 crc kubenswrapper[4991]: I0929 10:00:40.942125 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b10260c4-1d6b-4b35-a1f2-40e0b034a659" path="/var/lib/kubelet/pods/b10260c4-1d6b-4b35-a1f2-40e0b034a659/volumes" Sep 29 10:00:41 crc kubenswrapper[4991]: I0929 10:00:41.591367 4991 generic.go:334] "Generic (PLEG): container finished" podID="b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e" containerID="44b3d420c3c6b2107643732df4461a94920ad2803284bb960af5c98e188b0383" exitCode=0 Sep 29 10:00:41 crc kubenswrapper[4991]: I0929 10:00:41.591479 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6d58-account-create-l5cjc" event={"ID":"b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e","Type":"ContainerDied","Data":"44b3d420c3c6b2107643732df4461a94920ad2803284bb960af5c98e188b0383"} Sep 29 10:00:41 crc kubenswrapper[4991]: I0929 10:00:41.591684 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6d58-account-create-l5cjc" 
event={"ID":"b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e","Type":"ContainerStarted","Data":"742a69e1740f6480d3f3f45861f2aff9c9d7e356bde263aaa3c673d9414a34b5"} Sep 29 10:00:41 crc kubenswrapper[4991]: I0929 10:00:41.593803 4991 generic.go:334] "Generic (PLEG): container finished" podID="5c526f97-b986-4101-be9d-1bc70bbc93f5" containerID="c1624f3d5e0f874adaa7714fca14eb2e4d7db4fc536775b99ea60cab2da52984" exitCode=0 Sep 29 10:00:41 crc kubenswrapper[4991]: I0929 10:00:41.593849 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a89b-account-create-8qtf5" event={"ID":"5c526f97-b986-4101-be9d-1bc70bbc93f5","Type":"ContainerDied","Data":"c1624f3d5e0f874adaa7714fca14eb2e4d7db4fc536775b99ea60cab2da52984"} Sep 29 10:00:42 crc kubenswrapper[4991]: I0929 10:00:42.509155 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wmrc7"] Sep 29 10:00:42 crc kubenswrapper[4991]: I0929 10:00:42.511704 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wmrc7" Sep 29 10:00:42 crc kubenswrapper[4991]: I0929 10:00:42.525584 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wmrc7"] Sep 29 10:00:42 crc kubenswrapper[4991]: I0929 10:00:42.554252 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c860961-6f5b-49e0-8c20-28f6f68c7740-utilities\") pod \"redhat-operators-wmrc7\" (UID: \"7c860961-6f5b-49e0-8c20-28f6f68c7740\") " pod="openshift-marketplace/redhat-operators-wmrc7" Sep 29 10:00:42 crc kubenswrapper[4991]: I0929 10:00:42.554297 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c860961-6f5b-49e0-8c20-28f6f68c7740-catalog-content\") pod \"redhat-operators-wmrc7\" (UID: \"7c860961-6f5b-49e0-8c20-28f6f68c7740\") " pod="openshift-marketplace/redhat-operators-wmrc7" Sep 29 10:00:42 crc kubenswrapper[4991]: I0929 10:00:42.554438 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r68wp\" (UniqueName: \"kubernetes.io/projected/7c860961-6f5b-49e0-8c20-28f6f68c7740-kube-api-access-r68wp\") pod \"redhat-operators-wmrc7\" (UID: \"7c860961-6f5b-49e0-8c20-28f6f68c7740\") " pod="openshift-marketplace/redhat-operators-wmrc7" Sep 29 10:00:42 crc kubenswrapper[4991]: I0929 10:00:42.618403 4991 generic.go:334] "Generic (PLEG): container finished" podID="5f806a03-4360-4588-bd39-3a58058c26a5" containerID="1c6f9807e1c286c8c157a80e08a3fb7646ec4f505ffdcb29464854440765c1aa" exitCode=0 Sep 29 10:00:42 crc kubenswrapper[4991]: I0929 10:00:42.618479 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f806a03-4360-4588-bd39-3a58058c26a5","Type":"ContainerDied","Data":"1c6f9807e1c286c8c157a80e08a3fb7646ec4f505ffdcb29464854440765c1aa"} Sep 29 10:00:42 crc kubenswrapper[4991]: I0929 10:00:42.673689 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r68wp\" (UniqueName: \"kubernetes.io/projected/7c860961-6f5b-49e0-8c20-28f6f68c7740-kube-api-access-r68wp\") pod \"redhat-operators-wmrc7\" (UID: \"7c860961-6f5b-49e0-8c20-28f6f68c7740\") " pod="openshift-marketplace/redhat-operators-wmrc7" Sep 29 10:00:42 crc kubenswrapper[4991]: I0929 10:00:42.673892 4991 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c860961-6f5b-49e0-8c20-28f6f68c7740-utilities\") pod \"redhat-operators-wmrc7\" (UID: \"7c860961-6f5b-49e0-8c20-28f6f68c7740\") " pod="openshift-marketplace/redhat-operators-wmrc7" Sep 29 10:00:42 crc kubenswrapper[4991]: I0929 10:00:42.673918 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c860961-6f5b-49e0-8c20-28f6f68c7740-catalog-content\") pod \"redhat-operators-wmrc7\" (UID: \"7c860961-6f5b-49e0-8c20-28f6f68c7740\") " pod="openshift-marketplace/redhat-operators-wmrc7" Sep 29 10:00:42 crc kubenswrapper[4991]: I0929 10:00:42.681722 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c860961-6f5b-49e0-8c20-28f6f68c7740-utilities\") pod \"redhat-operators-wmrc7\" (UID: \"7c860961-6f5b-49e0-8c20-28f6f68c7740\") " pod="openshift-marketplace/redhat-operators-wmrc7" Sep 29 10:00:42 crc kubenswrapper[4991]: I0929 10:00:42.682273 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c860961-6f5b-49e0-8c20-28f6f68c7740-catalog-content\") pod \"redhat-operators-wmrc7\" (UID: \"7c860961-6f5b-49e0-8c20-28f6f68c7740\") " pod="openshift-marketplace/redhat-operators-wmrc7" Sep 29 10:00:42 crc kubenswrapper[4991]: I0929 10:00:42.723278 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r68wp\" (UniqueName: \"kubernetes.io/projected/7c860961-6f5b-49e0-8c20-28f6f68c7740-kube-api-access-r68wp\") pod \"redhat-operators-wmrc7\" (UID: \"7c860961-6f5b-49e0-8c20-28f6f68c7740\") " pod="openshift-marketplace/redhat-operators-wmrc7" Sep 29 10:00:42 crc kubenswrapper[4991]: I0929 10:00:42.838654 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wmrc7" Sep 29 10:00:42 crc kubenswrapper[4991]: I0929 10:00:42.999419 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.097523 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-config-data\") pod \"5f806a03-4360-4588-bd39-3a58058c26a5\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.097605 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2n27\" (UniqueName: \"kubernetes.io/projected/5f806a03-4360-4588-bd39-3a58058c26a5-kube-api-access-h2n27\") pod \"5f806a03-4360-4588-bd39-3a58058c26a5\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.097653 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-sg-core-conf-yaml\") pod \"5f806a03-4360-4588-bd39-3a58058c26a5\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.097766 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f806a03-4360-4588-bd39-3a58058c26a5-run-httpd\") pod \"5f806a03-4360-4588-bd39-3a58058c26a5\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.097812 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f806a03-4360-4588-bd39-3a58058c26a5-log-httpd\") pod \"5f806a03-4360-4588-bd39-3a58058c26a5\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.097893 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-combined-ca-bundle\") pod \"5f806a03-4360-4588-bd39-3a58058c26a5\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.097996 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-scripts\") pod \"5f806a03-4360-4588-bd39-3a58058c26a5\" (UID: \"5f806a03-4360-4588-bd39-3a58058c26a5\") " Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.099208 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f806a03-4360-4588-bd39-3a58058c26a5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5f806a03-4360-4588-bd39-3a58058c26a5" (UID: "5f806a03-4360-4588-bd39-3a58058c26a5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.099239 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f806a03-4360-4588-bd39-3a58058c26a5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5f806a03-4360-4588-bd39-3a58058c26a5" (UID: "5f806a03-4360-4588-bd39-3a58058c26a5"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.107734 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f806a03-4360-4588-bd39-3a58058c26a5-kube-api-access-h2n27" (OuterVolumeSpecName: "kube-api-access-h2n27") pod "5f806a03-4360-4588-bd39-3a58058c26a5" (UID: "5f806a03-4360-4588-bd39-3a58058c26a5"). InnerVolumeSpecName "kube-api-access-h2n27". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.114227 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-scripts" (OuterVolumeSpecName: "scripts") pod "5f806a03-4360-4588-bd39-3a58058c26a5" (UID: "5f806a03-4360-4588-bd39-3a58058c26a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.184056 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6f5fb95b6d-nzndm" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.188006 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5f806a03-4360-4588-bd39-3a58058c26a5" (UID: "5f806a03-4360-4588-bd39-3a58058c26a5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.225603 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.225651 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2n27\" (UniqueName: \"kubernetes.io/projected/5f806a03-4360-4588-bd39-3a58058c26a5-kube-api-access-h2n27\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.225663 4991 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.225675 4991 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f806a03-4360-4588-bd39-3a58058c26a5-run-httpd\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.225684 4991 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f806a03-4360-4588-bd39-3a58058c26a5-log-httpd\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.266297 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6cf6b855c9-q9s4l"] Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.266519 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-6cf6b855c9-q9s4l" podUID="e0901bda-8477-431d-b7a4-6c78208f5f13" containerName="heat-engine" containerID="cri-o://e6419595f72972f9cb95f36d1762e9a8ef83199b5f0b5e1d2340dfa2124af0a6" gracePeriod=60 Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.276180 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f806a03-4360-4588-bd39-3a58058c26a5" (UID: "5f806a03-4360-4588-bd39-3a58058c26a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.333012 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-config-data" (OuterVolumeSpecName: "config-data") pod "5f806a03-4360-4588-bd39-3a58058c26a5" (UID: "5f806a03-4360-4588-bd39-3a58058c26a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.341230 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.341260 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f806a03-4360-4588-bd39-3a58058c26a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.411777 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a89b-account-create-8qtf5" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.544967 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br7br\" (UniqueName: \"kubernetes.io/projected/5c526f97-b986-4101-be9d-1bc70bbc93f5-kube-api-access-br7br\") pod \"5c526f97-b986-4101-be9d-1bc70bbc93f5\" (UID: \"5c526f97-b986-4101-be9d-1bc70bbc93f5\") " Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.549295 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c526f97-b986-4101-be9d-1bc70bbc93f5-kube-api-access-br7br" (OuterVolumeSpecName: "kube-api-access-br7br") pod "5c526f97-b986-4101-be9d-1bc70bbc93f5" (UID: "5c526f97-b986-4101-be9d-1bc70bbc93f5"). InnerVolumeSpecName "kube-api-access-br7br". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.574539 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6d58-account-create-l5cjc" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.638046 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a89b-account-create-8qtf5" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.638541 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a89b-account-create-8qtf5" event={"ID":"5c526f97-b986-4101-be9d-1bc70bbc93f5","Type":"ContainerDied","Data":"ad2203e26c1bad6820f4481664d636dae4ee0cf9ee292ccbf51da14267c0a5b9"} Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.638617 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad2203e26c1bad6820f4481664d636dae4ee0cf9ee292ccbf51da14267c0a5b9" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.642375 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f806a03-4360-4588-bd39-3a58058c26a5","Type":"ContainerDied","Data":"00c7baafda50bfa72a612938e43db1e031a0f50fb652a5102a2fae1f226739d1"} Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.642418 4991 scope.go:117] "RemoveContainer" containerID="4558fcb7f81ac2643601e353fd7028e5724c77735b170659af254f4cefff02ee" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.642555 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.646965 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8b9s\" (UniqueName: \"kubernetes.io/projected/b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e-kube-api-access-m8b9s\") pod \"b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e\" (UID: \"b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e\") " Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.649984 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-br7br\" (UniqueName: \"kubernetes.io/projected/5c526f97-b986-4101-be9d-1bc70bbc93f5-kube-api-access-br7br\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.661647 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e-kube-api-access-m8b9s" (OuterVolumeSpecName: "kube-api-access-m8b9s") pod "b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e" (UID: "b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e"). InnerVolumeSpecName "kube-api-access-m8b9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.661876 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6d58-account-create-l5cjc" event={"ID":"b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e","Type":"ContainerDied","Data":"742a69e1740f6480d3f3f45861f2aff9c9d7e356bde263aaa3c673d9414a34b5"} Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.662059 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="742a69e1740f6480d3f3f45861f2aff9c9d7e356bde263aaa3c673d9414a34b5" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.662197 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6d58-account-create-l5cjc" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.701214 4991 scope.go:117] "RemoveContainer" containerID="4713d0e1d608d533ddcf462360b13695d78e2be89509b9ce3088cb0aee57cc06" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.707711 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.737009 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.752306 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8b9s\" (UniqueName: \"kubernetes.io/projected/b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e-kube-api-access-m8b9s\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.758240 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:00:43 crc kubenswrapper[4991]: E0929 10:00:43.758854 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f806a03-4360-4588-bd39-3a58058c26a5" containerName="ceilometer-central-agent" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.758892 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f806a03-4360-4588-bd39-3a58058c26a5" containerName="ceilometer-central-agent" Sep 29 10:00:43 crc kubenswrapper[4991]: E0929 10:00:43.758915 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f806a03-4360-4588-bd39-3a58058c26a5" containerName="sg-core" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.758921 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f806a03-4360-4588-bd39-3a58058c26a5" containerName="sg-core" Sep 29 10:00:43 crc kubenswrapper[4991]: E0929 10:00:43.758980 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f806a03-4360-4588-bd39-3a58058c26a5" containerName="proxy-httpd" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.758989 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f806a03-4360-4588-bd39-3a58058c26a5" containerName="proxy-httpd" Sep 29 10:00:43 crc kubenswrapper[4991]: E0929 10:00:43.758999 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c526f97-b986-4101-be9d-1bc70bbc93f5" containerName="mariadb-account-create" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.759005 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c526f97-b986-4101-be9d-1bc70bbc93f5" containerName="mariadb-account-create" Sep 29 10:00:43 crc kubenswrapper[4991]: E0929 10:00:43.759030 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e" containerName="mariadb-account-create" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.759036 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e" containerName="mariadb-account-create" Sep 29 10:00:43 crc kubenswrapper[4991]: E0929 10:00:43.759065 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f806a03-4360-4588-bd39-3a58058c26a5" containerName="ceilometer-notification-agent" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.759072 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f806a03-4360-4588-bd39-3a58058c26a5" containerName="ceilometer-notification-agent" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.759263 4991 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5f806a03-4360-4588-bd39-3a58058c26a5" containerName="ceilometer-notification-agent" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.759275 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f806a03-4360-4588-bd39-3a58058c26a5" containerName="proxy-httpd" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.759283 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e" containerName="mariadb-account-create" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.759294 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c526f97-b986-4101-be9d-1bc70bbc93f5" containerName="mariadb-account-create" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.759307 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f806a03-4360-4588-bd39-3a58058c26a5" containerName="sg-core" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.759320 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f806a03-4360-4588-bd39-3a58058c26a5" containerName="ceilometer-central-agent" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.761595 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.766540 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.768742 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.774898 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.777268 4991 scope.go:117] "RemoveContainer" containerID="0bda53294e49ce28fcad7865b34733ce1157765fa758578a26c7bee90e621179" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.789235 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wmrc7"] Sep 29 10:00:43 crc kubenswrapper[4991]: W0929 10:00:43.792174 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c860961_6f5b_49e0_8c20_28f6f68c7740.slice/crio-65fc3b89ccd32b945e0a3f78418e3102bfb83df12f9f6232f39ce328b7949962 WatchSource:0}: Error finding container 65fc3b89ccd32b945e0a3f78418e3102bfb83df12f9f6232f39ce328b7949962: Status 404 returned error can't find the container with id 65fc3b89ccd32b945e0a3f78418e3102bfb83df12f9f6232f39ce328b7949962 Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.834339 4991 scope.go:117] "RemoveContainer" containerID="1c6f9807e1c286c8c157a80e08a3fb7646ec4f505ffdcb29464854440765c1aa" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.854243 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.854609 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32b95627-883a-4fe0-994f-e94992e34099-run-httpd\") pod \"ceilometer-0\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " 
pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.854796 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.854847 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-scripts\") pod \"ceilometer-0\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.855173 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjn5l\" (UniqueName: \"kubernetes.io/projected/32b95627-883a-4fe0-994f-e94992e34099-kube-api-access-kjn5l\") pod \"ceilometer-0\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.855420 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-config-data\") pod \"ceilometer-0\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.855464 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32b95627-883a-4fe0-994f-e94992e34099-log-httpd\") pod \"ceilometer-0\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.957438 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.957546 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32b95627-883a-4fe0-994f-e94992e34099-run-httpd\") pod \"ceilometer-0\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.957593 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.957620 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-scripts\") pod \"ceilometer-0\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.957703 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjn5l\" (UniqueName: 
\"kubernetes.io/projected/32b95627-883a-4fe0-994f-e94992e34099-kube-api-access-kjn5l\") pod \"ceilometer-0\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.957760 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-config-data\") pod \"ceilometer-0\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.957780 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32b95627-883a-4fe0-994f-e94992e34099-log-httpd\") pod \"ceilometer-0\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.958155 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32b95627-883a-4fe0-994f-e94992e34099-log-httpd\") pod \"ceilometer-0\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.958497 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32b95627-883a-4fe0-994f-e94992e34099-run-httpd\") pod \"ceilometer-0\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.964545 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-scripts\") pod \"ceilometer-0\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.964684 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.967991 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-config-data\") pod \"ceilometer-0\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.968876 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " pod="openstack/ceilometer-0" Sep 29 10:00:43 crc kubenswrapper[4991]: I0929 10:00:43.978785 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjn5l\" (UniqueName: \"kubernetes.io/projected/32b95627-883a-4fe0-994f-e94992e34099-kube-api-access-kjn5l\") pod \"ceilometer-0\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " pod="openstack/ceilometer-0" Sep 29 10:00:44 crc kubenswrapper[4991]: I0929 10:00:44.102649 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:00:44 crc kubenswrapper[4991]: W0929 10:00:44.640266 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32b95627_883a_4fe0_994f_e94992e34099.slice/crio-7f1fa33ad95437f2a6b4e0cb4735903b1d38f954b1bc6fa3ddce5651b00dd4eb WatchSource:0}: Error finding container 7f1fa33ad95437f2a6b4e0cb4735903b1d38f954b1bc6fa3ddce5651b00dd4eb: Status 404 returned error can't find the container with id 7f1fa33ad95437f2a6b4e0cb4735903b1d38f954b1bc6fa3ddce5651b00dd4eb Sep 29 10:00:44 crc kubenswrapper[4991]: I0929 10:00:44.645821 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:00:44 crc kubenswrapper[4991]: I0929 10:00:44.675803 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32b95627-883a-4fe0-994f-e94992e34099","Type":"ContainerStarted","Data":"7f1fa33ad95437f2a6b4e0cb4735903b1d38f954b1bc6fa3ddce5651b00dd4eb"} Sep 29 10:00:44 crc kubenswrapper[4991]: I0929 10:00:44.680146 4991 generic.go:334] "Generic (PLEG): container finished" podID="7c860961-6f5b-49e0-8c20-28f6f68c7740" containerID="de1599829b5821b23c2f453202a0e424f9e9ae1375e3061261eb98dc09adddf6" exitCode=0 Sep 29 10:00:44 crc kubenswrapper[4991]: I0929 10:00:44.680475 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmrc7" event={"ID":"7c860961-6f5b-49e0-8c20-28f6f68c7740","Type":"ContainerDied","Data":"de1599829b5821b23c2f453202a0e424f9e9ae1375e3061261eb98dc09adddf6"} Sep 29 10:00:44 crc kubenswrapper[4991]: I0929 10:00:44.680596 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmrc7" event={"ID":"7c860961-6f5b-49e0-8c20-28f6f68c7740","Type":"ContainerStarted","Data":"65fc3b89ccd32b945e0a3f78418e3102bfb83df12f9f6232f39ce328b7949962"} Sep 29 10:00:44 crc kubenswrapper[4991]: I0929 10:00:44.970547 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f806a03-4360-4588-bd39-3a58058c26a5" path="/var/lib/kubelet/pods/5f806a03-4360-4588-bd39-3a58058c26a5/volumes" Sep 29 10:00:45 crc kubenswrapper[4991]: I0929 10:00:45.712573 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32b95627-883a-4fe0-994f-e94992e34099","Type":"ContainerStarted","Data":"d6326705f86fe7b2100732fc48c771e88ec3bfd117d1786107d4f643ebee9fa3"} Sep 29 10:00:45 crc kubenswrapper[4991]: I0929 10:00:45.715068 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmrc7" event={"ID":"7c860961-6f5b-49e0-8c20-28f6f68c7740","Type":"ContainerStarted","Data":"f0586714de813b83a08f5110efab492c1b31cf92da26b916b19a10e2cc702481"} Sep 29 10:00:46 crc kubenswrapper[4991]: I0929 10:00:46.432670 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:00:46 crc kubenswrapper[4991]: E0929 10:00:46.888349 4991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e6419595f72972f9cb95f36d1762e9a8ef83199b5f0b5e1d2340dfa2124af0a6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Sep 29 10:00:46 crc kubenswrapper[4991]: E0929 10:00:46.897894 4991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="e6419595f72972f9cb95f36d1762e9a8ef83199b5f0b5e1d2340dfa2124af0a6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Sep 29 10:00:46 crc kubenswrapper[4991]: E0929 10:00:46.901688 4991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e6419595f72972f9cb95f36d1762e9a8ef83199b5f0b5e1d2340dfa2124af0a6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Sep 29 10:00:46 crc kubenswrapper[4991]: E0929 10:00:46.901766 4991 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6cf6b855c9-q9s4l" podUID="e0901bda-8477-431d-b7a4-6c78208f5f13" containerName="heat-engine" Sep 29 10:00:47 crc kubenswrapper[4991]: I0929 10:00:47.738458 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32b95627-883a-4fe0-994f-e94992e34099","Type":"ContainerStarted","Data":"450e4459ca57928632b3d83c65aa2996ce5520ac3ac48859cadaa74c4df9dddd"} Sep 29 10:00:48 crc kubenswrapper[4991]: I0929 10:00:48.750275 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32b95627-883a-4fe0-994f-e94992e34099","Type":"ContainerStarted","Data":"2ed374459836f12809a5e44678a4c50c1ee17f73798e64d7466822a204524dea"} Sep 29 10:00:49 crc kubenswrapper[4991]: I0929 10:00:49.559374 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-544e-account-create-jnp4m"] Sep 29 10:00:49 crc kubenswrapper[4991]: I0929 10:00:49.561507 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-544e-account-create-jnp4m" Sep 29 10:00:49 crc kubenswrapper[4991]: I0929 10:00:49.565567 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Sep 29 10:00:49 crc kubenswrapper[4991]: I0929 10:00:49.570928 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-544e-account-create-jnp4m"] Sep 29 10:00:49 crc kubenswrapper[4991]: I0929 10:00:49.729592 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d2sb\" (UniqueName: \"kubernetes.io/projected/2dd62d55-78b1-4af7-867c-96f7e480c3ac-kube-api-access-5d2sb\") pod \"nova-cell0-544e-account-create-jnp4m\" (UID: \"2dd62d55-78b1-4af7-867c-96f7e480c3ac\") " pod="openstack/nova-cell0-544e-account-create-jnp4m" Sep 29 10:00:49 crc kubenswrapper[4991]: I0929 10:00:49.763123 4991 generic.go:334] "Generic (PLEG): container finished" podID="7c860961-6f5b-49e0-8c20-28f6f68c7740" containerID="f0586714de813b83a08f5110efab492c1b31cf92da26b916b19a10e2cc702481" exitCode=0 Sep 29 10:00:49 crc kubenswrapper[4991]: I0929 10:00:49.763194 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmrc7" event={"ID":"7c860961-6f5b-49e0-8c20-28f6f68c7740","Type":"ContainerDied","Data":"f0586714de813b83a08f5110efab492c1b31cf92da26b916b19a10e2cc702481"} Sep 29 10:00:49 crc kubenswrapper[4991]: I0929 10:00:49.775521 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32b95627-883a-4fe0-994f-e94992e34099","Type":"ContainerStarted","Data":"784a7b3f48739c5eeee2cdcce2b77635805a0ef019e8374b357c8895eb715930"} Sep 29 10:00:49 crc kubenswrapper[4991]: I0929 10:00:49.775694 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32b95627-883a-4fe0-994f-e94992e34099" containerName="ceilometer-central-agent" containerID="cri-o://d6326705f86fe7b2100732fc48c771e88ec3bfd117d1786107d4f643ebee9fa3" gracePeriod=30 Sep 29 10:00:49 crc kubenswrapper[4991]: I0929 10:00:49.775786 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Sep 29 10:00:49 crc kubenswrapper[4991]: I0929 10:00:49.775824 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32b95627-883a-4fe0-994f-e94992e34099" containerName="proxy-httpd" containerID="cri-o://784a7b3f48739c5eeee2cdcce2b77635805a0ef019e8374b357c8895eb715930" gracePeriod=30 Sep 29 10:00:49 crc kubenswrapper[4991]: I0929 10:00:49.775858 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32b95627-883a-4fe0-994f-e94992e34099" containerName="sg-core" containerID="cri-o://2ed374459836f12809a5e44678a4c50c1ee17f73798e64d7466822a204524dea" gracePeriod=30 Sep 29 10:00:49 crc kubenswrapper[4991]: I0929 10:00:49.775893 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32b95627-883a-4fe0-994f-e94992e34099" containerName="ceilometer-notification-agent" containerID="cri-o://450e4459ca57928632b3d83c65aa2996ce5520ac3ac48859cadaa74c4df9dddd" gracePeriod=30 Sep 29 10:00:49 crc kubenswrapper[4991]: I0929 10:00:49.814481 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.498666568 podStartE2EDuration="6.814456001s" podCreationTimestamp="2025-09-29 
10:00:43 +0000 UTC" firstStartedPulling="2025-09-29 10:00:44.644094883 +0000 UTC m=+1380.500022911" lastFinishedPulling="2025-09-29 10:00:48.959884316 +0000 UTC m=+1384.815812344" observedRunningTime="2025-09-29 10:00:49.802711883 +0000 UTC m=+1385.658639911" watchObservedRunningTime="2025-09-29 10:00:49.814456001 +0000 UTC m=+1385.670384029" Sep 29 10:00:49 crc kubenswrapper[4991]: I0929 10:00:49.835255 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d2sb\" (UniqueName: \"kubernetes.io/projected/2dd62d55-78b1-4af7-867c-96f7e480c3ac-kube-api-access-5d2sb\") pod \"nova-cell0-544e-account-create-jnp4m\" (UID: \"2dd62d55-78b1-4af7-867c-96f7e480c3ac\") " pod="openstack/nova-cell0-544e-account-create-jnp4m" Sep 29 10:00:49 crc kubenswrapper[4991]: I0929 10:00:49.856991 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d2sb\" (UniqueName: \"kubernetes.io/projected/2dd62d55-78b1-4af7-867c-96f7e480c3ac-kube-api-access-5d2sb\") pod \"nova-cell0-544e-account-create-jnp4m\" (UID: \"2dd62d55-78b1-4af7-867c-96f7e480c3ac\") " pod="openstack/nova-cell0-544e-account-create-jnp4m" Sep 29 10:00:49 crc kubenswrapper[4991]: I0929 10:00:49.928652 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-544e-account-create-jnp4m" Sep 29 10:00:50 crc kubenswrapper[4991]: I0929 10:00:50.734132 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-544e-account-create-jnp4m"] Sep 29 10:00:50 crc kubenswrapper[4991]: W0929 10:00:50.743107 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dd62d55_78b1_4af7_867c_96f7e480c3ac.slice/crio-7d60642c01c675c65e05508ccc24b323dd22ca2d704b83bc9f7b946189465941 WatchSource:0}: Error finding container 7d60642c01c675c65e05508ccc24b323dd22ca2d704b83bc9f7b946189465941: Status 404 returned error can't find the container with id 7d60642c01c675c65e05508ccc24b323dd22ca2d704b83bc9f7b946189465941 Sep 29 10:00:50 crc kubenswrapper[4991]: I0929 10:00:50.807121 4991 generic.go:334] "Generic (PLEG): container finished" podID="32b95627-883a-4fe0-994f-e94992e34099" containerID="784a7b3f48739c5eeee2cdcce2b77635805a0ef019e8374b357c8895eb715930" exitCode=0 Sep 29 10:00:50 crc kubenswrapper[4991]: I0929 10:00:50.813806 4991 generic.go:334] "Generic (PLEG): container finished" podID="32b95627-883a-4fe0-994f-e94992e34099" containerID="2ed374459836f12809a5e44678a4c50c1ee17f73798e64d7466822a204524dea" exitCode=2 Sep 29 10:00:50 crc kubenswrapper[4991]: I0929 10:00:50.813843 4991 generic.go:334] "Generic (PLEG): container finished" podID="32b95627-883a-4fe0-994f-e94992e34099" containerID="450e4459ca57928632b3d83c65aa2996ce5520ac3ac48859cadaa74c4df9dddd" exitCode=0 Sep 29 10:00:50 crc kubenswrapper[4991]: I0929 10:00:50.807359 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32b95627-883a-4fe0-994f-e94992e34099","Type":"ContainerDied","Data":"784a7b3f48739c5eeee2cdcce2b77635805a0ef019e8374b357c8895eb715930"} Sep 29 10:00:50 crc kubenswrapper[4991]: I0929 10:00:50.814020 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32b95627-883a-4fe0-994f-e94992e34099","Type":"ContainerDied","Data":"2ed374459836f12809a5e44678a4c50c1ee17f73798e64d7466822a204524dea"} Sep 29 10:00:50 crc kubenswrapper[4991]: I0929 10:00:50.814041 4991 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"32b95627-883a-4fe0-994f-e94992e34099","Type":"ContainerDied","Data":"450e4459ca57928632b3d83c65aa2996ce5520ac3ac48859cadaa74c4df9dddd"} Sep 29 10:00:50 crc kubenswrapper[4991]: I0929 10:00:50.818223 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmrc7" event={"ID":"7c860961-6f5b-49e0-8c20-28f6f68c7740","Type":"ContainerStarted","Data":"b8fa6548c4758de37fc554ae8cb20d3a2a8daf4e232729eae3c8d45ba9990e31"} Sep 29 10:00:50 crc kubenswrapper[4991]: I0929 10:00:50.819905 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-544e-account-create-jnp4m" event={"ID":"2dd62d55-78b1-4af7-867c-96f7e480c3ac","Type":"ContainerStarted","Data":"7d60642c01c675c65e05508ccc24b323dd22ca2d704b83bc9f7b946189465941"} Sep 29 10:00:50 crc kubenswrapper[4991]: I0929 10:00:50.859610 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wmrc7" podStartSLOduration=3.257099744 podStartE2EDuration="8.859582382s" podCreationTimestamp="2025-09-29 10:00:42 +0000 UTC" firstStartedPulling="2025-09-29 10:00:44.682659706 +0000 UTC m=+1380.538587734" lastFinishedPulling="2025-09-29 10:00:50.285142344 +0000 UTC m=+1386.141070372" observedRunningTime="2025-09-29 10:00:50.838283702 +0000 UTC m=+1386.694211730" watchObservedRunningTime="2025-09-29 10:00:50.859582382 +0000 UTC m=+1386.715510410" Sep 29 10:00:51 crc kubenswrapper[4991]: I0929 10:00:51.835145 4991 generic.go:334] "Generic (PLEG): container finished" podID="2dd62d55-78b1-4af7-867c-96f7e480c3ac" containerID="b78c6b0f184df95f9752141dbe2e3f8248985d155f3fa8e698132aa55832a83d" exitCode=0 Sep 29 10:00:51 crc kubenswrapper[4991]: I0929 10:00:51.835320 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-544e-account-create-jnp4m" event={"ID":"2dd62d55-78b1-4af7-867c-96f7e480c3ac","Type":"ContainerDied","Data":"b78c6b0f184df95f9752141dbe2e3f8248985d155f3fa8e698132aa55832a83d"} Sep 29 10:00:52 crc kubenswrapper[4991]: I0929 10:00:52.838829 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wmrc7" Sep 29 10:00:52 crc kubenswrapper[4991]: I0929 10:00:52.839223 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wmrc7" Sep 29 10:00:53 crc kubenswrapper[4991]: I0929 10:00:53.481827 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-544e-account-create-jnp4m" Sep 29 10:00:53 crc kubenswrapper[4991]: I0929 10:00:53.633808 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d2sb\" (UniqueName: \"kubernetes.io/projected/2dd62d55-78b1-4af7-867c-96f7e480c3ac-kube-api-access-5d2sb\") pod \"2dd62d55-78b1-4af7-867c-96f7e480c3ac\" (UID: \"2dd62d55-78b1-4af7-867c-96f7e480c3ac\") " Sep 29 10:00:53 crc kubenswrapper[4991]: I0929 10:00:53.644001 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dd62d55-78b1-4af7-867c-96f7e480c3ac-kube-api-access-5d2sb" (OuterVolumeSpecName: "kube-api-access-5d2sb") pod "2dd62d55-78b1-4af7-867c-96f7e480c3ac" (UID: "2dd62d55-78b1-4af7-867c-96f7e480c3ac"). InnerVolumeSpecName "kube-api-access-5d2sb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:00:53 crc kubenswrapper[4991]: I0929 10:00:53.736714 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d2sb\" (UniqueName: \"kubernetes.io/projected/2dd62d55-78b1-4af7-867c-96f7e480c3ac-kube-api-access-5d2sb\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:53 crc kubenswrapper[4991]: I0929 10:00:53.860468 4991 generic.go:334] "Generic (PLEG): container finished" podID="e0901bda-8477-431d-b7a4-6c78208f5f13" containerID="e6419595f72972f9cb95f36d1762e9a8ef83199b5f0b5e1d2340dfa2124af0a6" exitCode=0 Sep 29 10:00:53 crc kubenswrapper[4991]: I0929 10:00:53.860563 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6cf6b855c9-q9s4l" event={"ID":"e0901bda-8477-431d-b7a4-6c78208f5f13","Type":"ContainerDied","Data":"e6419595f72972f9cb95f36d1762e9a8ef83199b5f0b5e1d2340dfa2124af0a6"} Sep 29 10:00:53 crc kubenswrapper[4991]: I0929 10:00:53.863898 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-544e-account-create-jnp4m" event={"ID":"2dd62d55-78b1-4af7-867c-96f7e480c3ac","Type":"ContainerDied","Data":"7d60642c01c675c65e05508ccc24b323dd22ca2d704b83bc9f7b946189465941"} Sep 29 10:00:53 crc kubenswrapper[4991]: I0929 10:00:53.863942 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d60642c01c675c65e05508ccc24b323dd22ca2d704b83bc9f7b946189465941" Sep 29 10:00:53 crc kubenswrapper[4991]: I0929 10:00:53.863970 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-544e-account-create-jnp4m" Sep 29 10:00:53 crc kubenswrapper[4991]: I0929 10:00:53.894484 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wmrc7" podUID="7c860961-6f5b-49e0-8c20-28f6f68c7740" containerName="registry-server" probeResult="failure" output=< Sep 29 10:00:53 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s Sep 29 10:00:53 crc kubenswrapper[4991]: > Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.202806 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6cf6b855c9-q9s4l" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.350499 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0901bda-8477-431d-b7a4-6c78208f5f13-config-data\") pod \"e0901bda-8477-431d-b7a4-6c78208f5f13\" (UID: \"e0901bda-8477-431d-b7a4-6c78208f5f13\") " Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.350690 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0901bda-8477-431d-b7a4-6c78208f5f13-combined-ca-bundle\") pod \"e0901bda-8477-431d-b7a4-6c78208f5f13\" (UID: \"e0901bda-8477-431d-b7a4-6c78208f5f13\") " Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.350882 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0901bda-8477-431d-b7a4-6c78208f5f13-config-data-custom\") pod \"e0901bda-8477-431d-b7a4-6c78208f5f13\" (UID: \"e0901bda-8477-431d-b7a4-6c78208f5f13\") " Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.350914 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz44d\" (UniqueName: \"kubernetes.io/projected/e0901bda-8477-431d-b7a4-6c78208f5f13-kube-api-access-sz44d\") pod \"e0901bda-8477-431d-b7a4-6c78208f5f13\" (UID: \"e0901bda-8477-431d-b7a4-6c78208f5f13\") " Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.356086 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0901bda-8477-431d-b7a4-6c78208f5f13-kube-api-access-sz44d" (OuterVolumeSpecName: "kube-api-access-sz44d") pod "e0901bda-8477-431d-b7a4-6c78208f5f13" (UID: "e0901bda-8477-431d-b7a4-6c78208f5f13"). InnerVolumeSpecName "kube-api-access-sz44d". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.356219 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0901bda-8477-431d-b7a4-6c78208f5f13-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e0901bda-8477-431d-b7a4-6c78208f5f13" (UID: "e0901bda-8477-431d-b7a4-6c78208f5f13"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.398354 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0901bda-8477-431d-b7a4-6c78208f5f13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0901bda-8477-431d-b7a4-6c78208f5f13" (UID: "e0901bda-8477-431d-b7a4-6c78208f5f13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.418589 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0901bda-8477-431d-b7a4-6c78208f5f13-config-data" (OuterVolumeSpecName: "config-data") pod "e0901bda-8477-431d-b7a4-6c78208f5f13" (UID: "e0901bda-8477-431d-b7a4-6c78208f5f13"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.454105 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0901bda-8477-431d-b7a4-6c78208f5f13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.454140 4991 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0901bda-8477-431d-b7a4-6c78208f5f13-config-data-custom\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.454155 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz44d\" (UniqueName: \"kubernetes.io/projected/e0901bda-8477-431d-b7a4-6c78208f5f13-kube-api-access-sz44d\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.454168 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0901bda-8477-431d-b7a4-6c78208f5f13-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.811454 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pc9tx"] Sep 29 10:00:54 crc kubenswrapper[4991]: E0929 10:00:54.812276 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0901bda-8477-431d-b7a4-6c78208f5f13" containerName="heat-engine" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.812298 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0901bda-8477-431d-b7a4-6c78208f5f13" containerName="heat-engine" Sep 29 10:00:54 crc kubenswrapper[4991]: E0929 10:00:54.812340 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd62d55-78b1-4af7-867c-96f7e480c3ac" containerName="mariadb-account-create" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.812350 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd62d55-78b1-4af7-867c-96f7e480c3ac" containerName="mariadb-account-create" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.812593 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0901bda-8477-431d-b7a4-6c78208f5f13" containerName="heat-engine" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.812635 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd62d55-78b1-4af7-867c-96f7e480c3ac" containerName="mariadb-account-create" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.813619 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-pc9tx" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.815772 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.815974 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.826705 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ff4ns" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.834090 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pc9tx"] Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.870845 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-scripts\") pod \"nova-cell0-conductor-db-sync-pc9tx\" (UID: \"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191\") " pod="openstack/nova-cell0-conductor-db-sync-pc9tx" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.875250 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr2wm\" (UniqueName: \"kubernetes.io/projected/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-kube-api-access-zr2wm\") pod \"nova-cell0-conductor-db-sync-pc9tx\" (UID: \"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191\") " pod="openstack/nova-cell0-conductor-db-sync-pc9tx" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.875642 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-pc9tx\" (UID: \"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191\") " pod="openstack/nova-cell0-conductor-db-sync-pc9tx" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.875998 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-config-data\") pod \"nova-cell0-conductor-db-sync-pc9tx\" (UID: \"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191\") " pod="openstack/nova-cell0-conductor-db-sync-pc9tx" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.886284 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6cf6b855c9-q9s4l" event={"ID":"e0901bda-8477-431d-b7a4-6c78208f5f13","Type":"ContainerDied","Data":"88858ae002ca8c4305b30b90105b1dcb27c0c9473b3cfa3c2eb3db27e1197a1b"} Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.886376 4991 scope.go:117] "RemoveContainer" containerID="e6419595f72972f9cb95f36d1762e9a8ef83199b5f0b5e1d2340dfa2124af0a6" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.886684 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6cf6b855c9-q9s4l" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.947861 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6cf6b855c9-q9s4l"] Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.955718 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-6cf6b855c9-q9s4l"] Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.977769 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-config-data\") pod \"nova-cell0-conductor-db-sync-pc9tx\" (UID: \"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191\") " pod="openstack/nova-cell0-conductor-db-sync-pc9tx" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.977935 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-scripts\") pod \"nova-cell0-conductor-db-sync-pc9tx\" (UID: \"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191\") " pod="openstack/nova-cell0-conductor-db-sync-pc9tx" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.977979 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr2wm\" (UniqueName: \"kubernetes.io/projected/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-kube-api-access-zr2wm\") pod \"nova-cell0-conductor-db-sync-pc9tx\" (UID: \"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191\") " pod="openstack/nova-cell0-conductor-db-sync-pc9tx" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.978020 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-pc9tx\" (UID: \"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191\") " pod="openstack/nova-cell0-conductor-db-sync-pc9tx" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.984116 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-scripts\") pod \"nova-cell0-conductor-db-sync-pc9tx\" (UID: \"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191\") " pod="openstack/nova-cell0-conductor-db-sync-pc9tx" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.984515 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-config-data\") pod \"nova-cell0-conductor-db-sync-pc9tx\" (UID: \"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191\") " pod="openstack/nova-cell0-conductor-db-sync-pc9tx" Sep 29 10:00:54 crc kubenswrapper[4991]: I0929 10:00:54.986759 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-pc9tx\" (UID: \"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191\") " pod="openstack/nova-cell0-conductor-db-sync-pc9tx" Sep 29 10:00:55 crc kubenswrapper[4991]: I0929 10:00:55.010768 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr2wm\" (UniqueName: \"kubernetes.io/projected/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-kube-api-access-zr2wm\") pod \"nova-cell0-conductor-db-sync-pc9tx\" (UID: \"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191\") " 
pod="openstack/nova-cell0-conductor-db-sync-pc9tx" Sep 29 10:00:55 crc kubenswrapper[4991]: I0929 10:00:55.138295 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-pc9tx" Sep 29 10:00:55 crc kubenswrapper[4991]: I0929 10:00:55.702673 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pc9tx"] Sep 29 10:00:55 crc kubenswrapper[4991]: I0929 10:00:55.924712 4991 generic.go:334] "Generic (PLEG): container finished" podID="32b95627-883a-4fe0-994f-e94992e34099" containerID="d6326705f86fe7b2100732fc48c771e88ec3bfd117d1786107d4f643ebee9fa3" exitCode=0 Sep 29 10:00:55 crc kubenswrapper[4991]: I0929 10:00:55.924858 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32b95627-883a-4fe0-994f-e94992e34099","Type":"ContainerDied","Data":"d6326705f86fe7b2100732fc48c771e88ec3bfd117d1786107d4f643ebee9fa3"} Sep 29 10:00:55 crc kubenswrapper[4991]: I0929 10:00:55.934078 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-pc9tx" event={"ID":"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191","Type":"ContainerStarted","Data":"996c7d6399d51b5869eedee40fbf4373b3ced4cffcf7d68e9d80f618ccbd5c4e"} Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.225851 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.417201 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjn5l\" (UniqueName: \"kubernetes.io/projected/32b95627-883a-4fe0-994f-e94992e34099-kube-api-access-kjn5l\") pod \"32b95627-883a-4fe0-994f-e94992e34099\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.417266 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-scripts\") pod \"32b95627-883a-4fe0-994f-e94992e34099\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.417412 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-config-data\") pod \"32b95627-883a-4fe0-994f-e94992e34099\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.417433 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32b95627-883a-4fe0-994f-e94992e34099-run-httpd\") pod \"32b95627-883a-4fe0-994f-e94992e34099\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.417497 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-sg-core-conf-yaml\") pod \"32b95627-883a-4fe0-994f-e94992e34099\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.417591 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-combined-ca-bundle\") pod \"32b95627-883a-4fe0-994f-e94992e34099\" (UID: 
\"32b95627-883a-4fe0-994f-e94992e34099\") " Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.417667 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32b95627-883a-4fe0-994f-e94992e34099-log-httpd\") pod \"32b95627-883a-4fe0-994f-e94992e34099\" (UID: \"32b95627-883a-4fe0-994f-e94992e34099\") " Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.418606 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32b95627-883a-4fe0-994f-e94992e34099-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "32b95627-883a-4fe0-994f-e94992e34099" (UID: "32b95627-883a-4fe0-994f-e94992e34099"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.418998 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32b95627-883a-4fe0-994f-e94992e34099-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "32b95627-883a-4fe0-994f-e94992e34099" (UID: "32b95627-883a-4fe0-994f-e94992e34099"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.423732 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-scripts" (OuterVolumeSpecName: "scripts") pod "32b95627-883a-4fe0-994f-e94992e34099" (UID: "32b95627-883a-4fe0-994f-e94992e34099"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.426159 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32b95627-883a-4fe0-994f-e94992e34099-kube-api-access-kjn5l" (OuterVolumeSpecName: "kube-api-access-kjn5l") pod "32b95627-883a-4fe0-994f-e94992e34099" (UID: "32b95627-883a-4fe0-994f-e94992e34099"). InnerVolumeSpecName "kube-api-access-kjn5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.463844 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "32b95627-883a-4fe0-994f-e94992e34099" (UID: "32b95627-883a-4fe0-994f-e94992e34099"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.523415 4991 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32b95627-883a-4fe0-994f-e94992e34099-run-httpd\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.523886 4991 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.524150 4991 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32b95627-883a-4fe0-994f-e94992e34099-log-httpd\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.524222 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjn5l\" (UniqueName: \"kubernetes.io/projected/32b95627-883a-4fe0-994f-e94992e34099-kube-api-access-kjn5l\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.524297 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.564400 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32b95627-883a-4fe0-994f-e94992e34099" (UID: "32b95627-883a-4fe0-994f-e94992e34099"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.627903 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.649020 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-config-data" (OuterVolumeSpecName: "config-data") pod "32b95627-883a-4fe0-994f-e94992e34099" (UID: "32b95627-883a-4fe0-994f-e94992e34099"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.730217 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32b95627-883a-4fe0-994f-e94992e34099-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.947648 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0901bda-8477-431d-b7a4-6c78208f5f13" path="/var/lib/kubelet/pods/e0901bda-8477-431d-b7a4-6c78208f5f13/volumes" Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.970256 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32b95627-883a-4fe0-994f-e94992e34099","Type":"ContainerDied","Data":"7f1fa33ad95437f2a6b4e0cb4735903b1d38f954b1bc6fa3ddce5651b00dd4eb"} Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.970312 4991 scope.go:117] "RemoveContainer" containerID="784a7b3f48739c5eeee2cdcce2b77635805a0ef019e8374b357c8895eb715930" Sep 29 10:00:56 crc kubenswrapper[4991]: I0929 10:00:56.970405 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.010109 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.015388 4991 scope.go:117] "RemoveContainer" containerID="2ed374459836f12809a5e44678a4c50c1ee17f73798e64d7466822a204524dea" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.025449 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.055990 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:00:57 crc kubenswrapper[4991]: E0929 10:00:57.056514 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b95627-883a-4fe0-994f-e94992e34099" containerName="sg-core" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.056531 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b95627-883a-4fe0-994f-e94992e34099" containerName="sg-core" Sep 29 10:00:57 crc kubenswrapper[4991]: E0929 10:00:57.056567 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b95627-883a-4fe0-994f-e94992e34099" containerName="ceilometer-central-agent" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.056574 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b95627-883a-4fe0-994f-e94992e34099" containerName="ceilometer-central-agent" Sep 29 10:00:57 crc kubenswrapper[4991]: E0929 10:00:57.056584 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b95627-883a-4fe0-994f-e94992e34099" containerName="ceilometer-notification-agent" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.056590 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b95627-883a-4fe0-994f-e94992e34099" containerName="ceilometer-notification-agent" Sep 29 10:00:57 crc kubenswrapper[4991]: E0929 10:00:57.056607 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b95627-883a-4fe0-994f-e94992e34099" containerName="proxy-httpd" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.056613 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b95627-883a-4fe0-994f-e94992e34099" containerName="proxy-httpd" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.056825 4991 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="32b95627-883a-4fe0-994f-e94992e34099" containerName="ceilometer-central-agent" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.056844 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="32b95627-883a-4fe0-994f-e94992e34099" containerName="proxy-httpd" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.056860 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="32b95627-883a-4fe0-994f-e94992e34099" containerName="ceilometer-notification-agent" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.056878 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="32b95627-883a-4fe0-994f-e94992e34099" containerName="sg-core" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.066290 4991 scope.go:117] "RemoveContainer" containerID="450e4459ca57928632b3d83c65aa2996ce5520ac3ac48859cadaa74c4df9dddd" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.070557 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.080494 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.080907 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.108649 4991 scope.go:117] "RemoveContainer" containerID="d6326705f86fe7b2100732fc48c771e88ec3bfd117d1786107d4f643ebee9fa3" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.111071 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.247832 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-scripts\") pod \"ceilometer-0\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.248084 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-config-data\") pod \"ceilometer-0\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.248180 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70203068-89f2-45d1-bbf8-c747d90f420e-log-httpd\") pod \"ceilometer-0\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.248320 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.248483 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " 
pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.248517 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw6lg\" (UniqueName: \"kubernetes.io/projected/70203068-89f2-45d1-bbf8-c747d90f420e-kube-api-access-bw6lg\") pod \"ceilometer-0\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.248627 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70203068-89f2-45d1-bbf8-c747d90f420e-run-httpd\") pod \"ceilometer-0\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.352102 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70203068-89f2-45d1-bbf8-c747d90f420e-run-httpd\") pod \"ceilometer-0\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.352534 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-scripts\") pod \"ceilometer-0\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.352592 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70203068-89f2-45d1-bbf8-c747d90f420e-run-httpd\") pod \"ceilometer-0\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.352597 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-config-data\") pod \"ceilometer-0\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.352696 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70203068-89f2-45d1-bbf8-c747d90f420e-log-httpd\") pod \"ceilometer-0\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.352745 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.352780 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.352801 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw6lg\" (UniqueName: \"kubernetes.io/projected/70203068-89f2-45d1-bbf8-c747d90f420e-kube-api-access-bw6lg\") pod \"ceilometer-0\" (UID: 
\"70203068-89f2-45d1-bbf8-c747d90f420e\") " pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.353532 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70203068-89f2-45d1-bbf8-c747d90f420e-log-httpd\") pod \"ceilometer-0\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.356706 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-scripts\") pod \"ceilometer-0\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.357083 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.357466 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.359297 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-config-data\") pod \"ceilometer-0\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.369640 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw6lg\" (UniqueName: \"kubernetes.io/projected/70203068-89f2-45d1-bbf8-c747d90f420e-kube-api-access-bw6lg\") pod \"ceilometer-0\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.396029 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.933817 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:00:57 crc kubenswrapper[4991]: I0929 10:00:57.998455 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70203068-89f2-45d1-bbf8-c747d90f420e","Type":"ContainerStarted","Data":"e3c6eaa6576dea892c5c3c8d0d697c59acad273d2fe347c49a62673809f2ccf1"} Sep 29 10:00:58 crc kubenswrapper[4991]: I0929 10:00:58.944934 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32b95627-883a-4fe0-994f-e94992e34099" path="/var/lib/kubelet/pods/32b95627-883a-4fe0-994f-e94992e34099/volumes" Sep 29 10:00:59 crc kubenswrapper[4991]: I0929 10:00:59.014657 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70203068-89f2-45d1-bbf8-c747d90f420e","Type":"ContainerStarted","Data":"5c038f7b874a87d51e81eca41f5546341fde2f35d464bbdf24c007ec0d0efd47"} Sep 29 10:00:59 crc kubenswrapper[4991]: I0929 10:00:59.715353 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:01:00 crc kubenswrapper[4991]: I0929 10:01:00.155350 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29319001-8xkxg"] Sep 29 10:01:00 crc kubenswrapper[4991]: I0929 10:01:00.157380 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29319001-8xkxg" Sep 29 10:01:00 crc kubenswrapper[4991]: I0929 10:01:00.179203 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29319001-8xkxg"] Sep 29 10:01:00 crc kubenswrapper[4991]: I0929 10:01:00.228903 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02ed8488-a798-4998-8759-3f5346451268-combined-ca-bundle\") pod \"keystone-cron-29319001-8xkxg\" (UID: \"02ed8488-a798-4998-8759-3f5346451268\") " pod="openstack/keystone-cron-29319001-8xkxg" Sep 29 10:01:00 crc kubenswrapper[4991]: I0929 10:01:00.229276 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np225\" (UniqueName: \"kubernetes.io/projected/02ed8488-a798-4998-8759-3f5346451268-kube-api-access-np225\") pod \"keystone-cron-29319001-8xkxg\" (UID: \"02ed8488-a798-4998-8759-3f5346451268\") " pod="openstack/keystone-cron-29319001-8xkxg" Sep 29 10:01:00 crc kubenswrapper[4991]: I0929 10:01:00.229505 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02ed8488-a798-4998-8759-3f5346451268-fernet-keys\") pod \"keystone-cron-29319001-8xkxg\" (UID: \"02ed8488-a798-4998-8759-3f5346451268\") " pod="openstack/keystone-cron-29319001-8xkxg" Sep 29 10:01:00 crc kubenswrapper[4991]: I0929 10:01:00.229784 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02ed8488-a798-4998-8759-3f5346451268-config-data\") pod \"keystone-cron-29319001-8xkxg\" (UID: \"02ed8488-a798-4998-8759-3f5346451268\") " pod="openstack/keystone-cron-29319001-8xkxg" Sep 29 10:01:00 crc kubenswrapper[4991]: I0929 10:01:00.331321 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/02ed8488-a798-4998-8759-3f5346451268-combined-ca-bundle\") pod \"keystone-cron-29319001-8xkxg\" (UID: \"02ed8488-a798-4998-8759-3f5346451268\") " pod="openstack/keystone-cron-29319001-8xkxg" Sep 29 10:01:00 crc kubenswrapper[4991]: I0929 10:01:00.331372 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np225\" (UniqueName: \"kubernetes.io/projected/02ed8488-a798-4998-8759-3f5346451268-kube-api-access-np225\") pod \"keystone-cron-29319001-8xkxg\" (UID: \"02ed8488-a798-4998-8759-3f5346451268\") " pod="openstack/keystone-cron-29319001-8xkxg" Sep 29 10:01:00 crc kubenswrapper[4991]: I0929 10:01:00.332341 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02ed8488-a798-4998-8759-3f5346451268-fernet-keys\") pod \"keystone-cron-29319001-8xkxg\" (UID: \"02ed8488-a798-4998-8759-3f5346451268\") " pod="openstack/keystone-cron-29319001-8xkxg" Sep 29 10:01:00 crc kubenswrapper[4991]: I0929 10:01:00.332446 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02ed8488-a798-4998-8759-3f5346451268-config-data\") pod \"keystone-cron-29319001-8xkxg\" (UID: \"02ed8488-a798-4998-8759-3f5346451268\") " pod="openstack/keystone-cron-29319001-8xkxg" Sep 29 10:01:00 crc kubenswrapper[4991]: I0929 10:01:00.340631 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02ed8488-a798-4998-8759-3f5346451268-combined-ca-bundle\") pod \"keystone-cron-29319001-8xkxg\" (UID: \"02ed8488-a798-4998-8759-3f5346451268\") " pod="openstack/keystone-cron-29319001-8xkxg" Sep 29 10:01:00 crc kubenswrapper[4991]: I0929 10:01:00.347898 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02ed8488-a798-4998-8759-3f5346451268-fernet-keys\") pod \"keystone-cron-29319001-8xkxg\" (UID: \"02ed8488-a798-4998-8759-3f5346451268\") " pod="openstack/keystone-cron-29319001-8xkxg" Sep 29 10:01:00 crc kubenswrapper[4991]: I0929 10:01:00.348578 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np225\" (UniqueName: \"kubernetes.io/projected/02ed8488-a798-4998-8759-3f5346451268-kube-api-access-np225\") pod \"keystone-cron-29319001-8xkxg\" (UID: \"02ed8488-a798-4998-8759-3f5346451268\") " pod="openstack/keystone-cron-29319001-8xkxg" Sep 29 10:01:00 crc kubenswrapper[4991]: I0929 10:01:00.355231 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02ed8488-a798-4998-8759-3f5346451268-config-data\") pod \"keystone-cron-29319001-8xkxg\" (UID: \"02ed8488-a798-4998-8759-3f5346451268\") " pod="openstack/keystone-cron-29319001-8xkxg" Sep 29 10:01:00 crc kubenswrapper[4991]: I0929 10:01:00.486738 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29319001-8xkxg" Sep 29 10:01:02 crc kubenswrapper[4991]: I0929 10:01:02.493990 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-xzlwh"] Sep 29 10:01:02 crc kubenswrapper[4991]: I0929 10:01:02.496356 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-xzlwh" Sep 29 10:01:02 crc kubenswrapper[4991]: I0929 10:01:02.503220 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-xzlwh"] Sep 29 10:01:02 crc kubenswrapper[4991]: I0929 10:01:02.587183 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4vn8\" (UniqueName: \"kubernetes.io/projected/107def10-c759-4d94-9f77-1c97aa9005ba-kube-api-access-d4vn8\") pod \"aodh-db-create-xzlwh\" (UID: \"107def10-c759-4d94-9f77-1c97aa9005ba\") " pod="openstack/aodh-db-create-xzlwh" Sep 29 10:01:02 crc kubenswrapper[4991]: I0929 10:01:02.688996 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4vn8\" (UniqueName: \"kubernetes.io/projected/107def10-c759-4d94-9f77-1c97aa9005ba-kube-api-access-d4vn8\") pod \"aodh-db-create-xzlwh\" (UID: \"107def10-c759-4d94-9f77-1c97aa9005ba\") " pod="openstack/aodh-db-create-xzlwh" Sep 29 10:01:02 crc kubenswrapper[4991]: I0929 10:01:02.712968 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4vn8\" (UniqueName: \"kubernetes.io/projected/107def10-c759-4d94-9f77-1c97aa9005ba-kube-api-access-d4vn8\") pod \"aodh-db-create-xzlwh\" (UID: \"107def10-c759-4d94-9f77-1c97aa9005ba\") " pod="openstack/aodh-db-create-xzlwh" Sep 29 10:01:02 crc kubenswrapper[4991]: I0929 10:01:02.842556 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-xzlwh" Sep 29 10:01:02 crc kubenswrapper[4991]: I0929 10:01:02.901783 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wmrc7" Sep 29 10:01:02 crc kubenswrapper[4991]: I0929 10:01:02.968585 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wmrc7" Sep 29 10:01:03 crc kubenswrapper[4991]: I0929 10:01:03.201760 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wmrc7"] Sep 29 10:01:04 crc kubenswrapper[4991]: I0929 10:01:04.082462 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wmrc7" podUID="7c860961-6f5b-49e0-8c20-28f6f68c7740" containerName="registry-server" containerID="cri-o://b8fa6548c4758de37fc554ae8cb20d3a2a8daf4e232729eae3c8d45ba9990e31" gracePeriod=2 Sep 29 10:01:05 crc kubenswrapper[4991]: I0929 10:01:05.096612 4991 generic.go:334] "Generic (PLEG): container finished" podID="7c860961-6f5b-49e0-8c20-28f6f68c7740" containerID="b8fa6548c4758de37fc554ae8cb20d3a2a8daf4e232729eae3c8d45ba9990e31" exitCode=0 Sep 29 10:01:05 crc kubenswrapper[4991]: I0929 10:01:05.096695 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmrc7" event={"ID":"7c860961-6f5b-49e0-8c20-28f6f68c7740","Type":"ContainerDied","Data":"b8fa6548c4758de37fc554ae8cb20d3a2a8daf4e232729eae3c8d45ba9990e31"} Sep 29 10:01:06 crc kubenswrapper[4991]: I0929 10:01:06.462687 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wmrc7" Sep 29 10:01:06 crc kubenswrapper[4991]: I0929 10:01:06.616436 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c860961-6f5b-49e0-8c20-28f6f68c7740-utilities\") pod \"7c860961-6f5b-49e0-8c20-28f6f68c7740\" (UID: \"7c860961-6f5b-49e0-8c20-28f6f68c7740\") " Sep 29 10:01:06 crc kubenswrapper[4991]: I0929 10:01:06.617167 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c860961-6f5b-49e0-8c20-28f6f68c7740-catalog-content\") pod \"7c860961-6f5b-49e0-8c20-28f6f68c7740\" (UID: \"7c860961-6f5b-49e0-8c20-28f6f68c7740\") " Sep 29 10:01:06 crc kubenswrapper[4991]: I0929 10:01:06.617615 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r68wp\" (UniqueName: \"kubernetes.io/projected/7c860961-6f5b-49e0-8c20-28f6f68c7740-kube-api-access-r68wp\") pod \"7c860961-6f5b-49e0-8c20-28f6f68c7740\" (UID: \"7c860961-6f5b-49e0-8c20-28f6f68c7740\") " Sep 29 10:01:06 crc kubenswrapper[4991]: I0929 10:01:06.618692 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c860961-6f5b-49e0-8c20-28f6f68c7740-utilities" (OuterVolumeSpecName: "utilities") pod "7c860961-6f5b-49e0-8c20-28f6f68c7740" (UID: "7c860961-6f5b-49e0-8c20-28f6f68c7740"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:01:06 crc kubenswrapper[4991]: I0929 10:01:06.622741 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c860961-6f5b-49e0-8c20-28f6f68c7740-kube-api-access-r68wp" (OuterVolumeSpecName: "kube-api-access-r68wp") pod "7c860961-6f5b-49e0-8c20-28f6f68c7740" (UID: "7c860961-6f5b-49e0-8c20-28f6f68c7740"). InnerVolumeSpecName "kube-api-access-r68wp". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:01:06 crc kubenswrapper[4991]: I0929 10:01:06.695943 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c860961-6f5b-49e0-8c20-28f6f68c7740-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c860961-6f5b-49e0-8c20-28f6f68c7740" (UID: "7c860961-6f5b-49e0-8c20-28f6f68c7740"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:01:06 crc kubenswrapper[4991]: I0929 10:01:06.718257 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-xzlwh"] Sep 29 10:01:06 crc kubenswrapper[4991]: I0929 10:01:06.721303 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c860961-6f5b-49e0-8c20-28f6f68c7740-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:06 crc kubenswrapper[4991]: I0929 10:01:06.721330 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c860961-6f5b-49e0-8c20-28f6f68c7740-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:06 crc kubenswrapper[4991]: I0929 10:01:06.721343 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r68wp\" (UniqueName: \"kubernetes.io/projected/7c860961-6f5b-49e0-8c20-28f6f68c7740-kube-api-access-r68wp\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:06 crc kubenswrapper[4991]: W0929 10:01:06.729542 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod107def10_c759_4d94_9f77_1c97aa9005ba.slice/crio-2948cd30897d05814159e73d00405c52bd45dcc8983ff87d0edd8c1a78c129ba WatchSource:0}: Error finding container 2948cd30897d05814159e73d00405c52bd45dcc8983ff87d0edd8c1a78c129ba: Status 404 returned error can't find the container with id 2948cd30897d05814159e73d00405c52bd45dcc8983ff87d0edd8c1a78c129ba Sep 29 10:01:06 crc kubenswrapper[4991]: I0929 10:01:06.869927 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29319001-8xkxg"] Sep 29 10:01:06 crc kubenswrapper[4991]: W0929 10:01:06.893964 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02ed8488_a798_4998_8759_3f5346451268.slice/crio-1d166658b056d634c073a625429cb149537da1903ffa4e8c3b63283871f15f28 WatchSource:0}: Error finding container 1d166658b056d634c073a625429cb149537da1903ffa4e8c3b63283871f15f28: Status 404 returned error can't find the container with id 1d166658b056d634c073a625429cb149537da1903ffa4e8c3b63283871f15f28 Sep 29 10:01:07 crc kubenswrapper[4991]: I0929 10:01:07.133922 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmrc7" event={"ID":"7c860961-6f5b-49e0-8c20-28f6f68c7740","Type":"ContainerDied","Data":"65fc3b89ccd32b945e0a3f78418e3102bfb83df12f9f6232f39ce328b7949962"} Sep 29 10:01:07 crc kubenswrapper[4991]: I0929 10:01:07.133986 4991 scope.go:117] "RemoveContainer" containerID="b8fa6548c4758de37fc554ae8cb20d3a2a8daf4e232729eae3c8d45ba9990e31" Sep 29 10:01:07 crc kubenswrapper[4991]: I0929 10:01:07.134118 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wmrc7" Sep 29 10:01:07 crc kubenswrapper[4991]: I0929 10:01:07.144051 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70203068-89f2-45d1-bbf8-c747d90f420e","Type":"ContainerStarted","Data":"70a8963e42e3cd2756cf5c80c7f260f36ec2de2d15c4c9da904c58184129b4f5"} Sep 29 10:01:07 crc kubenswrapper[4991]: I0929 10:01:07.156438 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-pc9tx" event={"ID":"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191","Type":"ContainerStarted","Data":"7421f5d620a108e68dfa52c47f163365876ea03c8fe5c06ef4c5194014d62475"} Sep 29 10:01:07 crc kubenswrapper[4991]: I0929 10:01:07.159332 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-xzlwh" event={"ID":"107def10-c759-4d94-9f77-1c97aa9005ba","Type":"ContainerStarted","Data":"d2013d5070a201a30f7ab351c129fca33598d9962b6b7a01b61681d70af68af2"} Sep 29 10:01:07 crc kubenswrapper[4991]: I0929 10:01:07.159385 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-xzlwh" event={"ID":"107def10-c759-4d94-9f77-1c97aa9005ba","Type":"ContainerStarted","Data":"2948cd30897d05814159e73d00405c52bd45dcc8983ff87d0edd8c1a78c129ba"} Sep 29 10:01:07 crc kubenswrapper[4991]: I0929 10:01:07.164016 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29319001-8xkxg" event={"ID":"02ed8488-a798-4998-8759-3f5346451268","Type":"ContainerStarted","Data":"1d166658b056d634c073a625429cb149537da1903ffa4e8c3b63283871f15f28"} Sep 29 10:01:07 crc kubenswrapper[4991]: I0929 10:01:07.180310 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wmrc7"] Sep 29 10:01:07 crc kubenswrapper[4991]: I0929 10:01:07.216615 4991 scope.go:117] "RemoveContainer" containerID="f0586714de813b83a08f5110efab492c1b31cf92da26b916b19a10e2cc702481" Sep 29 10:01:07 crc kubenswrapper[4991]: I0929 10:01:07.216736 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wmrc7"] Sep 29 10:01:07 crc kubenswrapper[4991]: I0929 10:01:07.230538 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-pc9tx" podStartSLOduration=2.6844218189999998 podStartE2EDuration="13.230515412s" podCreationTimestamp="2025-09-29 10:00:54 +0000 UTC" firstStartedPulling="2025-09-29 10:00:55.73335865 +0000 UTC m=+1391.589286688" lastFinishedPulling="2025-09-29 10:01:06.279452253 +0000 UTC m=+1402.135380281" observedRunningTime="2025-09-29 10:01:07.17600277 +0000 UTC m=+1403.031930818" watchObservedRunningTime="2025-09-29 10:01:07.230515412 +0000 UTC m=+1403.086443450" Sep 29 10:01:07 crc kubenswrapper[4991]: I0929 10:01:07.254469 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-xzlwh" podStartSLOduration=5.25445064 podStartE2EDuration="5.25445064s" podCreationTimestamp="2025-09-29 10:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:01:07.194087345 +0000 UTC m=+1403.050015363" watchObservedRunningTime="2025-09-29 10:01:07.25445064 +0000 UTC m=+1403.110378668" Sep 29 10:01:07 crc kubenswrapper[4991]: I0929 10:01:07.280343 4991 scope.go:117] "RemoveContainer" containerID="de1599829b5821b23c2f453202a0e424f9e9ae1375e3061261eb98dc09adddf6" Sep 29 10:01:07 crc 
kubenswrapper[4991]: I0929 10:01:07.947200 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:01:07 crc kubenswrapper[4991]: I0929 10:01:07.947261 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:01:08 crc kubenswrapper[4991]: I0929 10:01:08.182836 4991 generic.go:334] "Generic (PLEG): container finished" podID="107def10-c759-4d94-9f77-1c97aa9005ba" containerID="d2013d5070a201a30f7ab351c129fca33598d9962b6b7a01b61681d70af68af2" exitCode=0 Sep 29 10:01:08 crc kubenswrapper[4991]: I0929 10:01:08.182926 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-xzlwh" event={"ID":"107def10-c759-4d94-9f77-1c97aa9005ba","Type":"ContainerDied","Data":"d2013d5070a201a30f7ab351c129fca33598d9962b6b7a01b61681d70af68af2"} Sep 29 10:01:08 crc kubenswrapper[4991]: I0929 10:01:08.185470 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29319001-8xkxg" event={"ID":"02ed8488-a798-4998-8759-3f5346451268","Type":"ContainerStarted","Data":"b8479c8fa5328cc43250fc67d34974e8de85e8d96322aa5b37d4d8fdcbd34464"} Sep 29 10:01:08 crc kubenswrapper[4991]: I0929 10:01:08.191180 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70203068-89f2-45d1-bbf8-c747d90f420e","Type":"ContainerStarted","Data":"42e075ebb87e6d115574389960a574bb91fb31b0550a6cfe7a4a72cbdf9c5b16"} Sep 29 10:01:08 crc kubenswrapper[4991]: I0929 10:01:08.225236 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29319001-8xkxg" podStartSLOduration=8.225217088 podStartE2EDuration="8.225217088s" podCreationTimestamp="2025-09-29 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:01:08.22037234 +0000 UTC m=+1404.076300388" watchObservedRunningTime="2025-09-29 10:01:08.225217088 +0000 UTC m=+1404.081145116" Sep 29 10:01:08 crc kubenswrapper[4991]: I0929 10:01:08.942189 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c860961-6f5b-49e0-8c20-28f6f68c7740" path="/var/lib/kubelet/pods/7c860961-6f5b-49e0-8c20-28f6f68c7740/volumes" Sep 29 10:01:09 crc kubenswrapper[4991]: I0929 10:01:09.205200 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70203068-89f2-45d1-bbf8-c747d90f420e","Type":"ContainerStarted","Data":"6dd5541deba91ab978498a2206189db23d39710723ef5a8456562f447c089073"} Sep 29 10:01:09 crc kubenswrapper[4991]: I0929 10:01:09.205457 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70203068-89f2-45d1-bbf8-c747d90f420e" containerName="ceilometer-central-agent" containerID="cri-o://5c038f7b874a87d51e81eca41f5546341fde2f35d464bbdf24c007ec0d0efd47" gracePeriod=30 Sep 29 10:01:09 crc kubenswrapper[4991]: I0929 10:01:09.205562 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="70203068-89f2-45d1-bbf8-c747d90f420e" containerName="proxy-httpd" containerID="cri-o://6dd5541deba91ab978498a2206189db23d39710723ef5a8456562f447c089073" gracePeriod=30 Sep 29 10:01:09 crc kubenswrapper[4991]: I0929 10:01:09.205589 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70203068-89f2-45d1-bbf8-c747d90f420e" containerName="ceilometer-notification-agent" containerID="cri-o://70a8963e42e3cd2756cf5c80c7f260f36ec2de2d15c4c9da904c58184129b4f5" gracePeriod=30 Sep 29 10:01:09 crc kubenswrapper[4991]: I0929 10:01:09.205636 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Sep 29 10:01:09 crc kubenswrapper[4991]: I0929 10:01:09.205634 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70203068-89f2-45d1-bbf8-c747d90f420e" containerName="sg-core" containerID="cri-o://42e075ebb87e6d115574389960a574bb91fb31b0550a6cfe7a4a72cbdf9c5b16" gracePeriod=30 Sep 29 10:01:09 crc kubenswrapper[4991]: I0929 10:01:09.234552 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.483356949 podStartE2EDuration="13.234531887s" podCreationTimestamp="2025-09-29 10:00:56 +0000 UTC" firstStartedPulling="2025-09-29 10:00:57.94730292 +0000 UTC m=+1393.803230948" lastFinishedPulling="2025-09-29 10:01:08.698477868 +0000 UTC m=+1404.554405886" observedRunningTime="2025-09-29 10:01:09.226057145 +0000 UTC m=+1405.081985173" watchObservedRunningTime="2025-09-29 10:01:09.234531887 +0000 UTC m=+1405.090459915" Sep 29 10:01:09 crc kubenswrapper[4991]: I0929 10:01:09.907476 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-xzlwh" Sep 29 10:01:09 crc kubenswrapper[4991]: I0929 10:01:09.929772 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4vn8\" (UniqueName: \"kubernetes.io/projected/107def10-c759-4d94-9f77-1c97aa9005ba-kube-api-access-d4vn8\") pod \"107def10-c759-4d94-9f77-1c97aa9005ba\" (UID: \"107def10-c759-4d94-9f77-1c97aa9005ba\") " Sep 29 10:01:09 crc kubenswrapper[4991]: I0929 10:01:09.937351 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/107def10-c759-4d94-9f77-1c97aa9005ba-kube-api-access-d4vn8" (OuterVolumeSpecName: "kube-api-access-d4vn8") pod "107def10-c759-4d94-9f77-1c97aa9005ba" (UID: "107def10-c759-4d94-9f77-1c97aa9005ba"). InnerVolumeSpecName "kube-api-access-d4vn8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.032487 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4vn8\" (UniqueName: \"kubernetes.io/projected/107def10-c759-4d94-9f77-1c97aa9005ba-kube-api-access-d4vn8\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.218417 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-xzlwh" event={"ID":"107def10-c759-4d94-9f77-1c97aa9005ba","Type":"ContainerDied","Data":"2948cd30897d05814159e73d00405c52bd45dcc8983ff87d0edd8c1a78c129ba"} Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.218455 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2948cd30897d05814159e73d00405c52bd45dcc8983ff87d0edd8c1a78c129ba" Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.218461 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-xzlwh" Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.224207 4991 generic.go:334] "Generic (PLEG): container finished" podID="02ed8488-a798-4998-8759-3f5346451268" containerID="b8479c8fa5328cc43250fc67d34974e8de85e8d96322aa5b37d4d8fdcbd34464" exitCode=0 Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.224283 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29319001-8xkxg" event={"ID":"02ed8488-a798-4998-8759-3f5346451268","Type":"ContainerDied","Data":"b8479c8fa5328cc43250fc67d34974e8de85e8d96322aa5b37d4d8fdcbd34464"} Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.228221 4991 generic.go:334] "Generic (PLEG): container finished" podID="70203068-89f2-45d1-bbf8-c747d90f420e" containerID="6dd5541deba91ab978498a2206189db23d39710723ef5a8456562f447c089073" exitCode=0 Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.228256 4991 generic.go:334] "Generic (PLEG): container finished" podID="70203068-89f2-45d1-bbf8-c747d90f420e" containerID="42e075ebb87e6d115574389960a574bb91fb31b0550a6cfe7a4a72cbdf9c5b16" exitCode=2 Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.228268 4991 generic.go:334] "Generic (PLEG): container finished" podID="70203068-89f2-45d1-bbf8-c747d90f420e" containerID="70a8963e42e3cd2756cf5c80c7f260f36ec2de2d15c4c9da904c58184129b4f5" exitCode=0 Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.228279 4991 generic.go:334] "Generic (PLEG): container finished" podID="70203068-89f2-45d1-bbf8-c747d90f420e" containerID="5c038f7b874a87d51e81eca41f5546341fde2f35d464bbdf24c007ec0d0efd47" exitCode=0 Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.228305 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70203068-89f2-45d1-bbf8-c747d90f420e","Type":"ContainerDied","Data":"6dd5541deba91ab978498a2206189db23d39710723ef5a8456562f447c089073"} Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.228333 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70203068-89f2-45d1-bbf8-c747d90f420e","Type":"ContainerDied","Data":"42e075ebb87e6d115574389960a574bb91fb31b0550a6cfe7a4a72cbdf9c5b16"} Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.228347 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70203068-89f2-45d1-bbf8-c747d90f420e","Type":"ContainerDied","Data":"70a8963e42e3cd2756cf5c80c7f260f36ec2de2d15c4c9da904c58184129b4f5"} Sep 29 10:01:10 crc 
kubenswrapper[4991]: I0929 10:01:10.228359 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70203068-89f2-45d1-bbf8-c747d90f420e","Type":"ContainerDied","Data":"5c038f7b874a87d51e81eca41f5546341fde2f35d464bbdf24c007ec0d0efd47"} Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.228371 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70203068-89f2-45d1-bbf8-c747d90f420e","Type":"ContainerDied","Data":"e3c6eaa6576dea892c5c3c8d0d697c59acad273d2fe347c49a62673809f2ccf1"} Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.228383 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3c6eaa6576dea892c5c3c8d0d697c59acad273d2fe347c49a62673809f2ccf1" Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.295065 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.336580 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-scripts\") pod \"70203068-89f2-45d1-bbf8-c747d90f420e\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.336623 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-config-data\") pod \"70203068-89f2-45d1-bbf8-c747d90f420e\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.336659 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw6lg\" (UniqueName: \"kubernetes.io/projected/70203068-89f2-45d1-bbf8-c747d90f420e-kube-api-access-bw6lg\") pod \"70203068-89f2-45d1-bbf8-c747d90f420e\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.336734 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70203068-89f2-45d1-bbf8-c747d90f420e-run-httpd\") pod \"70203068-89f2-45d1-bbf8-c747d90f420e\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.336782 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70203068-89f2-45d1-bbf8-c747d90f420e-log-httpd\") pod \"70203068-89f2-45d1-bbf8-c747d90f420e\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.336816 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-sg-core-conf-yaml\") pod \"70203068-89f2-45d1-bbf8-c747d90f420e\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.336843 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-combined-ca-bundle\") pod \"70203068-89f2-45d1-bbf8-c747d90f420e\" (UID: \"70203068-89f2-45d1-bbf8-c747d90f420e\") " Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.337251 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/70203068-89f2-45d1-bbf8-c747d90f420e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "70203068-89f2-45d1-bbf8-c747d90f420e" (UID: "70203068-89f2-45d1-bbf8-c747d90f420e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.337468 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70203068-89f2-45d1-bbf8-c747d90f420e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "70203068-89f2-45d1-bbf8-c747d90f420e" (UID: "70203068-89f2-45d1-bbf8-c747d90f420e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.343361 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70203068-89f2-45d1-bbf8-c747d90f420e-kube-api-access-bw6lg" (OuterVolumeSpecName: "kube-api-access-bw6lg") pod "70203068-89f2-45d1-bbf8-c747d90f420e" (UID: "70203068-89f2-45d1-bbf8-c747d90f420e"). InnerVolumeSpecName "kube-api-access-bw6lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.351929 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-scripts" (OuterVolumeSpecName: "scripts") pod "70203068-89f2-45d1-bbf8-c747d90f420e" (UID: "70203068-89f2-45d1-bbf8-c747d90f420e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.380483 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "70203068-89f2-45d1-bbf8-c747d90f420e" (UID: "70203068-89f2-45d1-bbf8-c747d90f420e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.438765 4991 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70203068-89f2-45d1-bbf8-c747d90f420e-run-httpd\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.438800 4991 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70203068-89f2-45d1-bbf8-c747d90f420e-log-httpd\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.438809 4991 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.438819 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.438829 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw6lg\" (UniqueName: \"kubernetes.io/projected/70203068-89f2-45d1-bbf8-c747d90f420e-kube-api-access-bw6lg\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.442030 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70203068-89f2-45d1-bbf8-c747d90f420e" (UID: "70203068-89f2-45d1-bbf8-c747d90f420e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.471700 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-config-data" (OuterVolumeSpecName: "config-data") pod "70203068-89f2-45d1-bbf8-c747d90f420e" (UID: "70203068-89f2-45d1-bbf8-c747d90f420e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.540761 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:10 crc kubenswrapper[4991]: I0929 10:01:10.540803 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70203068-89f2-45d1-bbf8-c747d90f420e-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.238418 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.273101 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.290992 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.306475 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:01:11 crc kubenswrapper[4991]: E0929 10:01:11.307137 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="107def10-c759-4d94-9f77-1c97aa9005ba" containerName="mariadb-database-create" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.307164 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="107def10-c759-4d94-9f77-1c97aa9005ba" containerName="mariadb-database-create" Sep 29 10:01:11 crc kubenswrapper[4991]: E0929 10:01:11.307192 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70203068-89f2-45d1-bbf8-c747d90f420e" containerName="ceilometer-notification-agent" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.307205 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="70203068-89f2-45d1-bbf8-c747d90f420e" containerName="ceilometer-notification-agent" Sep 29 10:01:11 crc kubenswrapper[4991]: E0929 10:01:11.307217 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70203068-89f2-45d1-bbf8-c747d90f420e" containerName="sg-core" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.307225 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="70203068-89f2-45d1-bbf8-c747d90f420e" containerName="sg-core" Sep 29 10:01:11 crc kubenswrapper[4991]: E0929 10:01:11.307249 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70203068-89f2-45d1-bbf8-c747d90f420e" containerName="ceilometer-central-agent" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.307259 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="70203068-89f2-45d1-bbf8-c747d90f420e" containerName="ceilometer-central-agent" Sep 29 10:01:11 crc kubenswrapper[4991]: E0929 10:01:11.307288 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c860961-6f5b-49e0-8c20-28f6f68c7740" containerName="extract-content" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.307297 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c860961-6f5b-49e0-8c20-28f6f68c7740" containerName="extract-content" Sep 29 10:01:11 crc kubenswrapper[4991]: E0929 10:01:11.307305 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70203068-89f2-45d1-bbf8-c747d90f420e" containerName="proxy-httpd" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.307312 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="70203068-89f2-45d1-bbf8-c747d90f420e" containerName="proxy-httpd" Sep 29 10:01:11 crc kubenswrapper[4991]: E0929 10:01:11.307331 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c860961-6f5b-49e0-8c20-28f6f68c7740" containerName="extract-utilities" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.307339 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c860961-6f5b-49e0-8c20-28f6f68c7740" containerName="extract-utilities" Sep 29 10:01:11 crc kubenswrapper[4991]: E0929 10:01:11.307380 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c860961-6f5b-49e0-8c20-28f6f68c7740" containerName="registry-server" Sep 29 10:01:11 crc 
kubenswrapper[4991]: I0929 10:01:11.307389 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c860961-6f5b-49e0-8c20-28f6f68c7740" containerName="registry-server" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.307639 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="70203068-89f2-45d1-bbf8-c747d90f420e" containerName="ceilometer-notification-agent" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.307653 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="70203068-89f2-45d1-bbf8-c747d90f420e" containerName="sg-core" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.307667 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="70203068-89f2-45d1-bbf8-c747d90f420e" containerName="proxy-httpd" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.307676 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c860961-6f5b-49e0-8c20-28f6f68c7740" containerName="registry-server" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.307701 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="70203068-89f2-45d1-bbf8-c747d90f420e" containerName="ceilometer-central-agent" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.307716 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="107def10-c759-4d94-9f77-1c97aa9005ba" containerName="mariadb-database-create" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.310065 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.313681 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.313890 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.323474 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.460005 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-log-httpd\") pod \"ceilometer-0\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.460656 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-config-data\") pod \"ceilometer-0\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.460688 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-run-httpd\") pod \"ceilometer-0\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.462083 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " pod="openstack/ceilometer-0" Sep 29 10:01:11 crc 
kubenswrapper[4991]: I0929 10:01:11.462238 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lr2l\" (UniqueName: \"kubernetes.io/projected/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-kube-api-access-7lr2l\") pod \"ceilometer-0\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.462359 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.462475 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-scripts\") pod \"ceilometer-0\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.564209 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.564506 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lr2l\" (UniqueName: \"kubernetes.io/projected/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-kube-api-access-7lr2l\") pod \"ceilometer-0\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.564610 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.564694 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-scripts\") pod \"ceilometer-0\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.564798 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-log-httpd\") pod \"ceilometer-0\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.564907 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-config-data\") pod \"ceilometer-0\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.566507 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-run-httpd\") pod \"ceilometer-0\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " 
pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.567225 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-run-httpd\") pod \"ceilometer-0\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.565295 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-log-httpd\") pod \"ceilometer-0\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.570284 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.570320 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-config-data\") pod \"ceilometer-0\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.574835 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-scripts\") pod \"ceilometer-0\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.578170 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.590732 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lr2l\" (UniqueName: \"kubernetes.io/projected/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-kube-api-access-7lr2l\") pod \"ceilometer-0\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.637857 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.809073 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29319001-8xkxg" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.874873 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02ed8488-a798-4998-8759-3f5346451268-config-data\") pod \"02ed8488-a798-4998-8759-3f5346451268\" (UID: \"02ed8488-a798-4998-8759-3f5346451268\") " Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.874985 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np225\" (UniqueName: \"kubernetes.io/projected/02ed8488-a798-4998-8759-3f5346451268-kube-api-access-np225\") pod \"02ed8488-a798-4998-8759-3f5346451268\" (UID: \"02ed8488-a798-4998-8759-3f5346451268\") " Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.875020 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02ed8488-a798-4998-8759-3f5346451268-combined-ca-bundle\") pod \"02ed8488-a798-4998-8759-3f5346451268\" (UID: \"02ed8488-a798-4998-8759-3f5346451268\") " Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.875110 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02ed8488-a798-4998-8759-3f5346451268-fernet-keys\") pod \"02ed8488-a798-4998-8759-3f5346451268\" (UID: \"02ed8488-a798-4998-8759-3f5346451268\") " Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.880543 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02ed8488-a798-4998-8759-3f5346451268-kube-api-access-np225" (OuterVolumeSpecName: "kube-api-access-np225") pod "02ed8488-a798-4998-8759-3f5346451268" (UID: "02ed8488-a798-4998-8759-3f5346451268"). InnerVolumeSpecName "kube-api-access-np225". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.904851 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02ed8488-a798-4998-8759-3f5346451268-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "02ed8488-a798-4998-8759-3f5346451268" (UID: "02ed8488-a798-4998-8759-3f5346451268"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.921053 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02ed8488-a798-4998-8759-3f5346451268-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02ed8488-a798-4998-8759-3f5346451268" (UID: "02ed8488-a798-4998-8759-3f5346451268"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.949435 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02ed8488-a798-4998-8759-3f5346451268-config-data" (OuterVolumeSpecName: "config-data") pod "02ed8488-a798-4998-8759-3f5346451268" (UID: "02ed8488-a798-4998-8759-3f5346451268"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.979442 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02ed8488-a798-4998-8759-3f5346451268-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.979462 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np225\" (UniqueName: \"kubernetes.io/projected/02ed8488-a798-4998-8759-3f5346451268-kube-api-access-np225\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.979501 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02ed8488-a798-4998-8759-3f5346451268-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:11 crc kubenswrapper[4991]: I0929 10:01:11.979533 4991 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02ed8488-a798-4998-8759-3f5346451268-fernet-keys\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:12 crc kubenswrapper[4991]: I0929 10:01:12.138319 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:01:12 crc kubenswrapper[4991]: W0929 10:01:12.142466 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fe1cd08_f374_4e04_b3d0_1135997d4b8f.slice/crio-82f2ae3bc2c95eb8bb956ea4f9ff26ef084a8f802ea375acaf035aab31a8fc6e WatchSource:0}: Error finding container 82f2ae3bc2c95eb8bb956ea4f9ff26ef084a8f802ea375acaf035aab31a8fc6e: Status 404 returned error can't find the container with id 82f2ae3bc2c95eb8bb956ea4f9ff26ef084a8f802ea375acaf035aab31a8fc6e Sep 29 10:01:12 crc kubenswrapper[4991]: I0929 10:01:12.251655 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fe1cd08-f374-4e04-b3d0-1135997d4b8f","Type":"ContainerStarted","Data":"82f2ae3bc2c95eb8bb956ea4f9ff26ef084a8f802ea375acaf035aab31a8fc6e"} Sep 29 10:01:12 crc kubenswrapper[4991]: I0929 10:01:12.253734 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29319001-8xkxg" event={"ID":"02ed8488-a798-4998-8759-3f5346451268","Type":"ContainerDied","Data":"1d166658b056d634c073a625429cb149537da1903ffa4e8c3b63283871f15f28"} Sep 29 10:01:12 crc kubenswrapper[4991]: I0929 10:01:12.253787 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d166658b056d634c073a625429cb149537da1903ffa4e8c3b63283871f15f28" Sep 29 10:01:12 crc kubenswrapper[4991]: I0929 10:01:12.253851 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29319001-8xkxg" Sep 29 10:01:12 crc kubenswrapper[4991]: I0929 10:01:12.944472 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70203068-89f2-45d1-bbf8-c747d90f420e" path="/var/lib/kubelet/pods/70203068-89f2-45d1-bbf8-c747d90f420e/volumes" Sep 29 10:01:12 crc kubenswrapper[4991]: I0929 10:01:12.970188 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9qbzx"] Sep 29 10:01:12 crc kubenswrapper[4991]: E0929 10:01:12.970682 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02ed8488-a798-4998-8759-3f5346451268" containerName="keystone-cron" Sep 29 10:01:12 crc kubenswrapper[4991]: I0929 10:01:12.970735 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="02ed8488-a798-4998-8759-3f5346451268" containerName="keystone-cron" Sep 29 10:01:12 crc kubenswrapper[4991]: I0929 10:01:12.971000 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="02ed8488-a798-4998-8759-3f5346451268" containerName="keystone-cron" Sep 29 10:01:12 crc kubenswrapper[4991]: I0929 10:01:12.972505 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9qbzx" Sep 29 10:01:13 crc kubenswrapper[4991]: I0929 10:01:13.026082 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a-catalog-content\") pod \"community-operators-9qbzx\" (UID: \"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a\") " pod="openshift-marketplace/community-operators-9qbzx" Sep 29 10:01:13 crc kubenswrapper[4991]: I0929 10:01:13.028148 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk86q\" (UniqueName: \"kubernetes.io/projected/a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a-kube-api-access-bk86q\") pod \"community-operators-9qbzx\" (UID: \"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a\") " pod="openshift-marketplace/community-operators-9qbzx" Sep 29 10:01:13 crc kubenswrapper[4991]: I0929 10:01:13.028460 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a-utilities\") pod \"community-operators-9qbzx\" (UID: \"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a\") " pod="openshift-marketplace/community-operators-9qbzx" Sep 29 10:01:13 crc kubenswrapper[4991]: I0929 10:01:13.059767 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9qbzx"] Sep 29 10:01:13 crc kubenswrapper[4991]: I0929 10:01:13.131967 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a-catalog-content\") pod \"community-operators-9qbzx\" (UID: \"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a\") " pod="openshift-marketplace/community-operators-9qbzx" Sep 29 10:01:13 crc kubenswrapper[4991]: I0929 10:01:13.132033 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk86q\" (UniqueName: \"kubernetes.io/projected/a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a-kube-api-access-bk86q\") pod \"community-operators-9qbzx\" (UID: \"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a\") " pod="openshift-marketplace/community-operators-9qbzx" Sep 29 10:01:13 crc kubenswrapper[4991]: I0929 
10:01:13.132118 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a-utilities\") pod \"community-operators-9qbzx\" (UID: \"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a\") " pod="openshift-marketplace/community-operators-9qbzx" Sep 29 10:01:13 crc kubenswrapper[4991]: I0929 10:01:13.132499 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a-catalog-content\") pod \"community-operators-9qbzx\" (UID: \"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a\") " pod="openshift-marketplace/community-operators-9qbzx" Sep 29 10:01:13 crc kubenswrapper[4991]: I0929 10:01:13.132555 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a-utilities\") pod \"community-operators-9qbzx\" (UID: \"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a\") " pod="openshift-marketplace/community-operators-9qbzx" Sep 29 10:01:13 crc kubenswrapper[4991]: I0929 10:01:13.151457 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk86q\" (UniqueName: \"kubernetes.io/projected/a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a-kube-api-access-bk86q\") pod \"community-operators-9qbzx\" (UID: \"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a\") " pod="openshift-marketplace/community-operators-9qbzx" Sep 29 10:01:13 crc kubenswrapper[4991]: I0929 10:01:13.269107 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fe1cd08-f374-4e04-b3d0-1135997d4b8f","Type":"ContainerStarted","Data":"fe17d62e3fd04d57d31901b12f0f1dec5273df5afd3e8bda91531bd16b93c31d"} Sep 29 10:01:13 crc kubenswrapper[4991]: I0929 10:01:13.297241 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9qbzx" Sep 29 10:01:13 crc kubenswrapper[4991]: I0929 10:01:13.764749 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9qbzx"] Sep 29 10:01:13 crc kubenswrapper[4991]: W0929 10:01:13.767711 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7c6b8cb_3b04_4954_a7b6_24f9e5a2ca8a.slice/crio-5aa9f401628d23a446ec9bdfdd8afd8c3142adc44f2b2c1b57783960d4597f79 WatchSource:0}: Error finding container 5aa9f401628d23a446ec9bdfdd8afd8c3142adc44f2b2c1b57783960d4597f79: Status 404 returned error can't find the container with id 5aa9f401628d23a446ec9bdfdd8afd8c3142adc44f2b2c1b57783960d4597f79 Sep 29 10:01:14 crc kubenswrapper[4991]: I0929 10:01:14.283088 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fe1cd08-f374-4e04-b3d0-1135997d4b8f","Type":"ContainerStarted","Data":"60cd1d9edcf76bd57ef8988422573553f0e89817a2ba0309b734032a6db16b39"} Sep 29 10:01:14 crc kubenswrapper[4991]: I0929 10:01:14.284364 4991 generic.go:334] "Generic (PLEG): container finished" podID="a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a" containerID="2da31d060367e5c88ec83ad23715f798dfd2af4b2fbd5cb6de5662e6c7bfd82b" exitCode=0 Sep 29 10:01:14 crc kubenswrapper[4991]: I0929 10:01:14.284403 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qbzx" event={"ID":"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a","Type":"ContainerDied","Data":"2da31d060367e5c88ec83ad23715f798dfd2af4b2fbd5cb6de5662e6c7bfd82b"} Sep 29 10:01:14 crc kubenswrapper[4991]: I0929 10:01:14.284431 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qbzx" event={"ID":"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a","Type":"ContainerStarted","Data":"5aa9f401628d23a446ec9bdfdd8afd8c3142adc44f2b2c1b57783960d4597f79"} Sep 29 10:01:15 crc kubenswrapper[4991]: I0929 10:01:15.296671 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fe1cd08-f374-4e04-b3d0-1135997d4b8f","Type":"ContainerStarted","Data":"ff57d18c828d0c5bc8679efc2c6b494eebaad0e41d9336593017ce51e349f37c"} Sep 29 10:01:15 crc kubenswrapper[4991]: I0929 10:01:15.298828 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qbzx" event={"ID":"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a","Type":"ContainerStarted","Data":"c907d497a86281576d2f6ae656b4432461495c5652a8d24ca7bcbc4bf2446b42"} Sep 29 10:01:16 crc kubenswrapper[4991]: I0929 10:01:16.323468 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fe1cd08-f374-4e04-b3d0-1135997d4b8f","Type":"ContainerStarted","Data":"da27c877354493c6fb6deb9bc693d4dcfb3fbf37a95d0f4f845cec07bcb6be80"} Sep 29 10:01:16 crc kubenswrapper[4991]: I0929 10:01:16.323917 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Sep 29 10:01:16 crc kubenswrapper[4991]: I0929 10:01:16.365341 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.988202027 podStartE2EDuration="5.365322787s" podCreationTimestamp="2025-09-29 10:01:11 +0000 UTC" firstStartedPulling="2025-09-29 10:01:12.145061242 +0000 UTC m=+1408.000989270" lastFinishedPulling="2025-09-29 10:01:15.522182002 +0000 UTC m=+1411.378110030" observedRunningTime="2025-09-29 
10:01:16.353974509 +0000 UTC m=+1412.209902537" watchObservedRunningTime="2025-09-29 10:01:16.365322787 +0000 UTC m=+1412.221250815" Sep 29 10:01:17 crc kubenswrapper[4991]: I0929 10:01:17.338144 4991 generic.go:334] "Generic (PLEG): container finished" podID="a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a" containerID="c907d497a86281576d2f6ae656b4432461495c5652a8d24ca7bcbc4bf2446b42" exitCode=0 Sep 29 10:01:17 crc kubenswrapper[4991]: I0929 10:01:17.338283 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qbzx" event={"ID":"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a","Type":"ContainerDied","Data":"c907d497a86281576d2f6ae656b4432461495c5652a8d24ca7bcbc4bf2446b42"} Sep 29 10:01:18 crc kubenswrapper[4991]: I0929 10:01:18.359648 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qbzx" event={"ID":"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a","Type":"ContainerStarted","Data":"26a6d40864e2da871356206e013536959106dba43afc5d73d15d5b02ba87318c"} Sep 29 10:01:18 crc kubenswrapper[4991]: I0929 10:01:18.378785 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9qbzx" podStartSLOduration=2.56346296 podStartE2EDuration="6.378760269s" podCreationTimestamp="2025-09-29 10:01:12 +0000 UTC" firstStartedPulling="2025-09-29 10:01:14.286087745 +0000 UTC m=+1410.142015773" lastFinishedPulling="2025-09-29 10:01:18.101385054 +0000 UTC m=+1413.957313082" observedRunningTime="2025-09-29 10:01:18.376470529 +0000 UTC m=+1414.232398567" watchObservedRunningTime="2025-09-29 10:01:18.378760269 +0000 UTC m=+1414.234688297" Sep 29 10:01:20 crc kubenswrapper[4991]: I0929 10:01:20.382477 4991 generic.go:334] "Generic (PLEG): container finished" podID="491bdd73-b7c6-4b9b-a875-aa8f1ba5a191" containerID="7421f5d620a108e68dfa52c47f163365876ea03c8fe5c06ef4c5194014d62475" exitCode=0 Sep 29 10:01:20 crc kubenswrapper[4991]: I0929 10:01:20.382592 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-pc9tx" event={"ID":"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191","Type":"ContainerDied","Data":"7421f5d620a108e68dfa52c47f163365876ea03c8fe5c06ef4c5194014d62475"} Sep 29 10:01:21 crc kubenswrapper[4991]: I0929 10:01:21.914099 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-pc9tx" Sep 29 10:01:21 crc kubenswrapper[4991]: I0929 10:01:21.961445 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr2wm\" (UniqueName: \"kubernetes.io/projected/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-kube-api-access-zr2wm\") pod \"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191\" (UID: \"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191\") " Sep 29 10:01:21 crc kubenswrapper[4991]: I0929 10:01:21.961579 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-combined-ca-bundle\") pod \"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191\" (UID: \"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191\") " Sep 29 10:01:21 crc kubenswrapper[4991]: I0929 10:01:21.961612 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-scripts\") pod \"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191\" (UID: \"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191\") " Sep 29 10:01:21 crc kubenswrapper[4991]: I0929 10:01:21.961676 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-config-data\") pod \"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191\" (UID: \"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191\") " Sep 29 10:01:21 crc kubenswrapper[4991]: I0929 10:01:21.972226 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-kube-api-access-zr2wm" (OuterVolumeSpecName: "kube-api-access-zr2wm") pod "491bdd73-b7c6-4b9b-a875-aa8f1ba5a191" (UID: "491bdd73-b7c6-4b9b-a875-aa8f1ba5a191"). InnerVolumeSpecName "kube-api-access-zr2wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:01:21 crc kubenswrapper[4991]: I0929 10:01:21.974435 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-scripts" (OuterVolumeSpecName: "scripts") pod "491bdd73-b7c6-4b9b-a875-aa8f1ba5a191" (UID: "491bdd73-b7c6-4b9b-a875-aa8f1ba5a191"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:21 crc kubenswrapper[4991]: I0929 10:01:21.999501 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "491bdd73-b7c6-4b9b-a875-aa8f1ba5a191" (UID: "491bdd73-b7c6-4b9b-a875-aa8f1ba5a191"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.012146 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-config-data" (OuterVolumeSpecName: "config-data") pod "491bdd73-b7c6-4b9b-a875-aa8f1ba5a191" (UID: "491bdd73-b7c6-4b9b-a875-aa8f1ba5a191"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.066652 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zr2wm\" (UniqueName: \"kubernetes.io/projected/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-kube-api-access-zr2wm\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.066683 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.066693 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.066701 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.412504 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-pc9tx" event={"ID":"491bdd73-b7c6-4b9b-a875-aa8f1ba5a191","Type":"ContainerDied","Data":"996c7d6399d51b5869eedee40fbf4373b3ced4cffcf7d68e9d80f618ccbd5c4e"} Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.412582 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-pc9tx" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.412582 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="996c7d6399d51b5869eedee40fbf4373b3ced4cffcf7d68e9d80f618ccbd5c4e" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.517106 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Sep 29 10:01:22 crc kubenswrapper[4991]: E0929 10:01:22.517639 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="491bdd73-b7c6-4b9b-a875-aa8f1ba5a191" containerName="nova-cell0-conductor-db-sync" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.517657 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="491bdd73-b7c6-4b9b-a875-aa8f1ba5a191" containerName="nova-cell0-conductor-db-sync" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.517942 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="491bdd73-b7c6-4b9b-a875-aa8f1ba5a191" containerName="nova-cell0-conductor-db-sync" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.518850 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.520795 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ff4ns" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.523506 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.529792 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-6b11-account-create-jpx6j"] Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.534647 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-6b11-account-create-jpx6j" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.538567 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.551020 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-6b11-account-create-jpx6j"] Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.574796 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.579305 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28e75f85-bdef-4ae6-9eca-a4bdc403c679-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"28e75f85-bdef-4ae6-9eca-a4bdc403c679\") " pod="openstack/nova-cell0-conductor-0" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.579550 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x98pn\" (UniqueName: \"kubernetes.io/projected/a2146a65-487e-4c07-b953-5dbc2c490f3b-kube-api-access-x98pn\") pod \"aodh-6b11-account-create-jpx6j\" (UID: \"a2146a65-487e-4c07-b953-5dbc2c490f3b\") " pod="openstack/aodh-6b11-account-create-jpx6j" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.579580 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e75f85-bdef-4ae6-9eca-a4bdc403c679-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"28e75f85-bdef-4ae6-9eca-a4bdc403c679\") " pod="openstack/nova-cell0-conductor-0" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.579603 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c4t2\" (UniqueName: \"kubernetes.io/projected/28e75f85-bdef-4ae6-9eca-a4bdc403c679-kube-api-access-9c4t2\") pod \"nova-cell0-conductor-0\" (UID: \"28e75f85-bdef-4ae6-9eca-a4bdc403c679\") " pod="openstack/nova-cell0-conductor-0" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.683135 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x98pn\" (UniqueName: \"kubernetes.io/projected/a2146a65-487e-4c07-b953-5dbc2c490f3b-kube-api-access-x98pn\") pod \"aodh-6b11-account-create-jpx6j\" (UID: \"a2146a65-487e-4c07-b953-5dbc2c490f3b\") " pod="openstack/aodh-6b11-account-create-jpx6j" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.683213 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e75f85-bdef-4ae6-9eca-a4bdc403c679-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"28e75f85-bdef-4ae6-9eca-a4bdc403c679\") " pod="openstack/nova-cell0-conductor-0" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.683455 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c4t2\" (UniqueName: \"kubernetes.io/projected/28e75f85-bdef-4ae6-9eca-a4bdc403c679-kube-api-access-9c4t2\") pod \"nova-cell0-conductor-0\" (UID: \"28e75f85-bdef-4ae6-9eca-a4bdc403c679\") " pod="openstack/nova-cell0-conductor-0" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.683507 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/28e75f85-bdef-4ae6-9eca-a4bdc403c679-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"28e75f85-bdef-4ae6-9eca-a4bdc403c679\") " pod="openstack/nova-cell0-conductor-0" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.713835 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28e75f85-bdef-4ae6-9eca-a4bdc403c679-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"28e75f85-bdef-4ae6-9eca-a4bdc403c679\") " pod="openstack/nova-cell0-conductor-0" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.718500 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e75f85-bdef-4ae6-9eca-a4bdc403c679-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"28e75f85-bdef-4ae6-9eca-a4bdc403c679\") " pod="openstack/nova-cell0-conductor-0" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.736935 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x98pn\" (UniqueName: \"kubernetes.io/projected/a2146a65-487e-4c07-b953-5dbc2c490f3b-kube-api-access-x98pn\") pod \"aodh-6b11-account-create-jpx6j\" (UID: \"a2146a65-487e-4c07-b953-5dbc2c490f3b\") " pod="openstack/aodh-6b11-account-create-jpx6j" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.753423 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c4t2\" (UniqueName: \"kubernetes.io/projected/28e75f85-bdef-4ae6-9eca-a4bdc403c679-kube-api-access-9c4t2\") pod \"nova-cell0-conductor-0\" (UID: \"28e75f85-bdef-4ae6-9eca-a4bdc403c679\") " pod="openstack/nova-cell0-conductor-0" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.840921 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Sep 29 10:01:22 crc kubenswrapper[4991]: I0929 10:01:22.856980 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-6b11-account-create-jpx6j" Sep 29 10:01:23 crc kubenswrapper[4991]: I0929 10:01:23.297745 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9qbzx" Sep 29 10:01:23 crc kubenswrapper[4991]: I0929 10:01:23.299630 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9qbzx" Sep 29 10:01:23 crc kubenswrapper[4991]: I0929 10:01:23.342016 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Sep 29 10:01:23 crc kubenswrapper[4991]: I0929 10:01:23.371734 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9qbzx" Sep 29 10:01:23 crc kubenswrapper[4991]: I0929 10:01:23.426116 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"28e75f85-bdef-4ae6-9eca-a4bdc403c679","Type":"ContainerStarted","Data":"96c3fc8776b12c04c5b255fe5620cbd8ef24f6d0c9b807c290e2d81603a2004c"} Sep 29 10:01:23 crc kubenswrapper[4991]: I0929 10:01:23.516006 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9qbzx" Sep 29 10:01:23 crc kubenswrapper[4991]: I0929 10:01:23.532650 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-6b11-account-create-jpx6j"] Sep 29 10:01:23 crc kubenswrapper[4991]: I0929 10:01:23.609198 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9qbzx"] Sep 29 10:01:24 crc kubenswrapper[4991]: I0929 10:01:24.438572 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"28e75f85-bdef-4ae6-9eca-a4bdc403c679","Type":"ContainerStarted","Data":"ea4dfcb3dcaee96cf87ff1617b90ccd07485880f4957813048d57c0fae840ad3"} Sep 29 10:01:24 crc kubenswrapper[4991]: I0929 10:01:24.438967 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Sep 29 10:01:24 crc kubenswrapper[4991]: I0929 10:01:24.440920 4991 generic.go:334] "Generic (PLEG): container finished" podID="a2146a65-487e-4c07-b953-5dbc2c490f3b" containerID="84d8dee0a2b95a074eed7e0a8bc40cca1ec42c37bf8aab8e4623c090b1a03b9d" exitCode=0 Sep 29 10:01:24 crc kubenswrapper[4991]: I0929 10:01:24.441002 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-6b11-account-create-jpx6j" event={"ID":"a2146a65-487e-4c07-b953-5dbc2c490f3b","Type":"ContainerDied","Data":"84d8dee0a2b95a074eed7e0a8bc40cca1ec42c37bf8aab8e4623c090b1a03b9d"} Sep 29 10:01:24 crc kubenswrapper[4991]: I0929 10:01:24.441050 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-6b11-account-create-jpx6j" event={"ID":"a2146a65-487e-4c07-b953-5dbc2c490f3b","Type":"ContainerStarted","Data":"2297c83ce4d72b799e92c869138d910f0f0d68803d637f0f0c2bbd84d110871f"} Sep 29 10:01:24 crc kubenswrapper[4991]: I0929 10:01:24.457165 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.457149407 podStartE2EDuration="2.457149407s" podCreationTimestamp="2025-09-29 10:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:01:24.455940865 +0000 UTC m=+1420.311868903" watchObservedRunningTime="2025-09-29 10:01:24.457149407 +0000 UTC 
m=+1420.313077435" Sep 29 10:01:25 crc kubenswrapper[4991]: I0929 10:01:25.455614 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9qbzx" podUID="a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a" containerName="registry-server" containerID="cri-o://26a6d40864e2da871356206e013536959106dba43afc5d73d15d5b02ba87318c" gracePeriod=2 Sep 29 10:01:26 crc kubenswrapper[4991]: I0929 10:01:26.025271 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-6b11-account-create-jpx6j" Sep 29 10:01:26 crc kubenswrapper[4991]: I0929 10:01:26.117811 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x98pn\" (UniqueName: \"kubernetes.io/projected/a2146a65-487e-4c07-b953-5dbc2c490f3b-kube-api-access-x98pn\") pod \"a2146a65-487e-4c07-b953-5dbc2c490f3b\" (UID: \"a2146a65-487e-4c07-b953-5dbc2c490f3b\") " Sep 29 10:01:26 crc kubenswrapper[4991]: I0929 10:01:26.126399 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2146a65-487e-4c07-b953-5dbc2c490f3b-kube-api-access-x98pn" (OuterVolumeSpecName: "kube-api-access-x98pn") pod "a2146a65-487e-4c07-b953-5dbc2c490f3b" (UID: "a2146a65-487e-4c07-b953-5dbc2c490f3b"). InnerVolumeSpecName "kube-api-access-x98pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:01:26 crc kubenswrapper[4991]: I0929 10:01:26.219890 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x98pn\" (UniqueName: \"kubernetes.io/projected/a2146a65-487e-4c07-b953-5dbc2c490f3b-kube-api-access-x98pn\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:26 crc kubenswrapper[4991]: I0929 10:01:26.475109 4991 generic.go:334] "Generic (PLEG): container finished" podID="a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a" containerID="26a6d40864e2da871356206e013536959106dba43afc5d73d15d5b02ba87318c" exitCode=0 Sep 29 10:01:26 crc kubenswrapper[4991]: I0929 10:01:26.475194 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qbzx" event={"ID":"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a","Type":"ContainerDied","Data":"26a6d40864e2da871356206e013536959106dba43afc5d73d15d5b02ba87318c"} Sep 29 10:01:26 crc kubenswrapper[4991]: I0929 10:01:26.477568 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-6b11-account-create-jpx6j" event={"ID":"a2146a65-487e-4c07-b953-5dbc2c490f3b","Type":"ContainerDied","Data":"2297c83ce4d72b799e92c869138d910f0f0d68803d637f0f0c2bbd84d110871f"} Sep 29 10:01:26 crc kubenswrapper[4991]: I0929 10:01:26.477600 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2297c83ce4d72b799e92c869138d910f0f0d68803d637f0f0c2bbd84d110871f" Sep 29 10:01:26 crc kubenswrapper[4991]: I0929 10:01:26.477644 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-6b11-account-create-jpx6j" Sep 29 10:01:26 crc kubenswrapper[4991]: I0929 10:01:26.727139 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9qbzx" Sep 29 10:01:26 crc kubenswrapper[4991]: I0929 10:01:26.870172 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a-catalog-content\") pod \"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a\" (UID: \"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a\") " Sep 29 10:01:26 crc kubenswrapper[4991]: I0929 10:01:26.870255 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a-utilities\") pod \"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a\" (UID: \"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a\") " Sep 29 10:01:26 crc kubenswrapper[4991]: I0929 10:01:26.870587 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk86q\" (UniqueName: \"kubernetes.io/projected/a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a-kube-api-access-bk86q\") pod \"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a\" (UID: \"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a\") " Sep 29 10:01:26 crc kubenswrapper[4991]: I0929 10:01:26.872451 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a-utilities" (OuterVolumeSpecName: "utilities") pod "a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a" (UID: "a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:01:26 crc kubenswrapper[4991]: I0929 10:01:26.878432 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a-kube-api-access-bk86q" (OuterVolumeSpecName: "kube-api-access-bk86q") pod "a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a" (UID: "a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a"). InnerVolumeSpecName "kube-api-access-bk86q". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:01:26 crc kubenswrapper[4991]: I0929 10:01:26.925981 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a" (UID: "a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:01:26 crc kubenswrapper[4991]: I0929 10:01:26.973430 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk86q\" (UniqueName: \"kubernetes.io/projected/a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a-kube-api-access-bk86q\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:26 crc kubenswrapper[4991]: I0929 10:01:26.973462 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:26 crc kubenswrapper[4991]: I0929 10:01:26.973471 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.489586 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qbzx" event={"ID":"a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a","Type":"ContainerDied","Data":"5aa9f401628d23a446ec9bdfdd8afd8c3142adc44f2b2c1b57783960d4597f79"} Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.489634 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9qbzx" Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.489875 4991 scope.go:117] "RemoveContainer" containerID="26a6d40864e2da871356206e013536959106dba43afc5d73d15d5b02ba87318c" Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.516363 4991 scope.go:117] "RemoveContainer" containerID="c907d497a86281576d2f6ae656b4432461495c5652a8d24ca7bcbc4bf2446b42" Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.522693 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9qbzx"] Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.538779 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9qbzx"] Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.545371 4991 scope.go:117] "RemoveContainer" containerID="2da31d060367e5c88ec83ad23715f798dfd2af4b2fbd5cb6de5662e6c7bfd82b" Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.843859 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-j7ntb"] Sep 29 10:01:27 crc kubenswrapper[4991]: E0929 10:01:27.844310 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a" containerName="extract-utilities" Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.844330 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a" containerName="extract-utilities" Sep 29 10:01:27 crc kubenswrapper[4991]: E0929 10:01:27.844358 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a" containerName="registry-server" Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.844365 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a" containerName="registry-server" Sep 29 10:01:27 crc kubenswrapper[4991]: E0929 10:01:27.844378 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a" containerName="extract-content" Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.844384 4991 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a" containerName="extract-content" Sep 29 10:01:27 crc kubenswrapper[4991]: E0929 10:01:27.844395 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2146a65-487e-4c07-b953-5dbc2c490f3b" containerName="mariadb-account-create" Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.844403 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2146a65-487e-4c07-b953-5dbc2c490f3b" containerName="mariadb-account-create" Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.844586 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a" containerName="registry-server" Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.844610 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2146a65-487e-4c07-b953-5dbc2c490f3b" containerName="mariadb-account-create" Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.845333 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-j7ntb" Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.852127 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-qxcgx" Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.852537 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.853370 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.863012 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-j7ntb"] Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.994788 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acacb281-5dfd-4e31-b340-a8f6a7950f9b-config-data\") pod \"aodh-db-sync-j7ntb\" (UID: \"acacb281-5dfd-4e31-b340-a8f6a7950f9b\") " pod="openstack/aodh-db-sync-j7ntb" Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.995206 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acacb281-5dfd-4e31-b340-a8f6a7950f9b-combined-ca-bundle\") pod \"aodh-db-sync-j7ntb\" (UID: \"acacb281-5dfd-4e31-b340-a8f6a7950f9b\") " pod="openstack/aodh-db-sync-j7ntb" Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.995378 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acacb281-5dfd-4e31-b340-a8f6a7950f9b-scripts\") pod \"aodh-db-sync-j7ntb\" (UID: \"acacb281-5dfd-4e31-b340-a8f6a7950f9b\") " pod="openstack/aodh-db-sync-j7ntb" Sep 29 10:01:27 crc kubenswrapper[4991]: I0929 10:01:27.995541 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74g67\" (UniqueName: \"kubernetes.io/projected/acacb281-5dfd-4e31-b340-a8f6a7950f9b-kube-api-access-74g67\") pod \"aodh-db-sync-j7ntb\" (UID: \"acacb281-5dfd-4e31-b340-a8f6a7950f9b\") " pod="openstack/aodh-db-sync-j7ntb" Sep 29 10:01:28 crc kubenswrapper[4991]: I0929 10:01:28.098082 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74g67\" (UniqueName: \"kubernetes.io/projected/acacb281-5dfd-4e31-b340-a8f6a7950f9b-kube-api-access-74g67\") pod 
\"aodh-db-sync-j7ntb\" (UID: \"acacb281-5dfd-4e31-b340-a8f6a7950f9b\") " pod="openstack/aodh-db-sync-j7ntb" Sep 29 10:01:28 crc kubenswrapper[4991]: I0929 10:01:28.098510 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acacb281-5dfd-4e31-b340-a8f6a7950f9b-config-data\") pod \"aodh-db-sync-j7ntb\" (UID: \"acacb281-5dfd-4e31-b340-a8f6a7950f9b\") " pod="openstack/aodh-db-sync-j7ntb" Sep 29 10:01:28 crc kubenswrapper[4991]: I0929 10:01:28.098591 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acacb281-5dfd-4e31-b340-a8f6a7950f9b-combined-ca-bundle\") pod \"aodh-db-sync-j7ntb\" (UID: \"acacb281-5dfd-4e31-b340-a8f6a7950f9b\") " pod="openstack/aodh-db-sync-j7ntb" Sep 29 10:01:28 crc kubenswrapper[4991]: I0929 10:01:28.098664 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acacb281-5dfd-4e31-b340-a8f6a7950f9b-scripts\") pod \"aodh-db-sync-j7ntb\" (UID: \"acacb281-5dfd-4e31-b340-a8f6a7950f9b\") " pod="openstack/aodh-db-sync-j7ntb" Sep 29 10:01:28 crc kubenswrapper[4991]: I0929 10:01:28.104250 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acacb281-5dfd-4e31-b340-a8f6a7950f9b-config-data\") pod \"aodh-db-sync-j7ntb\" (UID: \"acacb281-5dfd-4e31-b340-a8f6a7950f9b\") " pod="openstack/aodh-db-sync-j7ntb" Sep 29 10:01:28 crc kubenswrapper[4991]: I0929 10:01:28.104568 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acacb281-5dfd-4e31-b340-a8f6a7950f9b-scripts\") pod \"aodh-db-sync-j7ntb\" (UID: \"acacb281-5dfd-4e31-b340-a8f6a7950f9b\") " pod="openstack/aodh-db-sync-j7ntb" Sep 29 10:01:28 crc kubenswrapper[4991]: I0929 10:01:28.110946 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acacb281-5dfd-4e31-b340-a8f6a7950f9b-combined-ca-bundle\") pod \"aodh-db-sync-j7ntb\" (UID: \"acacb281-5dfd-4e31-b340-a8f6a7950f9b\") " pod="openstack/aodh-db-sync-j7ntb" Sep 29 10:01:28 crc kubenswrapper[4991]: I0929 10:01:28.122811 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74g67\" (UniqueName: \"kubernetes.io/projected/acacb281-5dfd-4e31-b340-a8f6a7950f9b-kube-api-access-74g67\") pod \"aodh-db-sync-j7ntb\" (UID: \"acacb281-5dfd-4e31-b340-a8f6a7950f9b\") " pod="openstack/aodh-db-sync-j7ntb" Sep 29 10:01:28 crc kubenswrapper[4991]: I0929 10:01:28.168191 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-j7ntb" Sep 29 10:01:28 crc kubenswrapper[4991]: I0929 10:01:28.671583 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-j7ntb"] Sep 29 10:01:28 crc kubenswrapper[4991]: I0929 10:01:28.943246 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a" path="/var/lib/kubelet/pods/a7c6b8cb-3b04-4954-a7b6-24f9e5a2ca8a/volumes" Sep 29 10:01:29 crc kubenswrapper[4991]: I0929 10:01:29.513701 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-j7ntb" event={"ID":"acacb281-5dfd-4e31-b340-a8f6a7950f9b","Type":"ContainerStarted","Data":"2bf6430f1ad6ce7105006291afb76431e4d483ec29bfe279917e3676d1349eaf"} Sep 29 10:01:32 crc kubenswrapper[4991]: I0929 10:01:32.877152 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.373077 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-7qnxc"] Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.374983 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7qnxc" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.377748 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.377982 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.398755 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7qnxc"] Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.480203 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25df2bc4-583e-4075-9ed2-49128b9b8d2f-config-data\") pod \"nova-cell0-cell-mapping-7qnxc\" (UID: \"25df2bc4-583e-4075-9ed2-49128b9b8d2f\") " pod="openstack/nova-cell0-cell-mapping-7qnxc" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.481050 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpwrp\" (UniqueName: \"kubernetes.io/projected/25df2bc4-583e-4075-9ed2-49128b9b8d2f-kube-api-access-tpwrp\") pod \"nova-cell0-cell-mapping-7qnxc\" (UID: \"25df2bc4-583e-4075-9ed2-49128b9b8d2f\") " pod="openstack/nova-cell0-cell-mapping-7qnxc" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.481309 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25df2bc4-583e-4075-9ed2-49128b9b8d2f-scripts\") pod \"nova-cell0-cell-mapping-7qnxc\" (UID: \"25df2bc4-583e-4075-9ed2-49128b9b8d2f\") " pod="openstack/nova-cell0-cell-mapping-7qnxc" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.481602 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25df2bc4-583e-4075-9ed2-49128b9b8d2f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7qnxc\" (UID: \"25df2bc4-583e-4075-9ed2-49128b9b8d2f\") " pod="openstack/nova-cell0-cell-mapping-7qnxc" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.545358 4991 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-api-0"] Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.547610 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.551441 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.565587 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.585384 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25df2bc4-583e-4075-9ed2-49128b9b8d2f-config-data\") pod \"nova-cell0-cell-mapping-7qnxc\" (UID: \"25df2bc4-583e-4075-9ed2-49128b9b8d2f\") " pod="openstack/nova-cell0-cell-mapping-7qnxc" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.585462 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpwrp\" (UniqueName: \"kubernetes.io/projected/25df2bc4-583e-4075-9ed2-49128b9b8d2f-kube-api-access-tpwrp\") pod \"nova-cell0-cell-mapping-7qnxc\" (UID: \"25df2bc4-583e-4075-9ed2-49128b9b8d2f\") " pod="openstack/nova-cell0-cell-mapping-7qnxc" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.585551 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25df2bc4-583e-4075-9ed2-49128b9b8d2f-scripts\") pod \"nova-cell0-cell-mapping-7qnxc\" (UID: \"25df2bc4-583e-4075-9ed2-49128b9b8d2f\") " pod="openstack/nova-cell0-cell-mapping-7qnxc" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.585629 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25df2bc4-583e-4075-9ed2-49128b9b8d2f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7qnxc\" (UID: \"25df2bc4-583e-4075-9ed2-49128b9b8d2f\") " pod="openstack/nova-cell0-cell-mapping-7qnxc" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.593058 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25df2bc4-583e-4075-9ed2-49128b9b8d2f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7qnxc\" (UID: \"25df2bc4-583e-4075-9ed2-49128b9b8d2f\") " pod="openstack/nova-cell0-cell-mapping-7qnxc" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.596436 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25df2bc4-583e-4075-9ed2-49128b9b8d2f-config-data\") pod \"nova-cell0-cell-mapping-7qnxc\" (UID: \"25df2bc4-583e-4075-9ed2-49128b9b8d2f\") " pod="openstack/nova-cell0-cell-mapping-7qnxc" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.600304 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25df2bc4-583e-4075-9ed2-49128b9b8d2f-scripts\") pod \"nova-cell0-cell-mapping-7qnxc\" (UID: \"25df2bc4-583e-4075-9ed2-49128b9b8d2f\") " pod="openstack/nova-cell0-cell-mapping-7qnxc" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.613589 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpwrp\" (UniqueName: \"kubernetes.io/projected/25df2bc4-583e-4075-9ed2-49128b9b8d2f-kube-api-access-tpwrp\") pod \"nova-cell0-cell-mapping-7qnxc\" (UID: \"25df2bc4-583e-4075-9ed2-49128b9b8d2f\") " 
pod="openstack/nova-cell0-cell-mapping-7qnxc" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.685686 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.699633 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vdp7\" (UniqueName: \"kubernetes.io/projected/0153100b-0ea2-47e5-a068-737a13cff807-kube-api-access-4vdp7\") pod \"nova-api-0\" (UID: \"0153100b-0ea2-47e5-a068-737a13cff807\") " pod="openstack/nova-api-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.699691 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0153100b-0ea2-47e5-a068-737a13cff807-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0153100b-0ea2-47e5-a068-737a13cff807\") " pod="openstack/nova-api-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.699856 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0153100b-0ea2-47e5-a068-737a13cff807-config-data\") pod \"nova-api-0\" (UID: \"0153100b-0ea2-47e5-a068-737a13cff807\") " pod="openstack/nova-api-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.699923 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0153100b-0ea2-47e5-a068-737a13cff807-logs\") pod \"nova-api-0\" (UID: \"0153100b-0ea2-47e5-a068-737a13cff807\") " pod="openstack/nova-api-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.700597 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.705357 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.705989 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7qnxc" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.735790 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.781551 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.783595 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.787722 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.804162 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vdp7\" (UniqueName: \"kubernetes.io/projected/0153100b-0ea2-47e5-a068-737a13cff807-kube-api-access-4vdp7\") pod \"nova-api-0\" (UID: \"0153100b-0ea2-47e5-a068-737a13cff807\") " pod="openstack/nova-api-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.804637 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0153100b-0ea2-47e5-a068-737a13cff807-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0153100b-0ea2-47e5-a068-737a13cff807\") " pod="openstack/nova-api-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.804842 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0153100b-0ea2-47e5-a068-737a13cff807-config-data\") pod \"nova-api-0\" (UID: \"0153100b-0ea2-47e5-a068-737a13cff807\") " pod="openstack/nova-api-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.804914 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0153100b-0ea2-47e5-a068-737a13cff807-logs\") pod \"nova-api-0\" (UID: \"0153100b-0ea2-47e5-a068-737a13cff807\") " pod="openstack/nova-api-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.805479 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0153100b-0ea2-47e5-a068-737a13cff807-logs\") pod \"nova-api-0\" (UID: \"0153100b-0ea2-47e5-a068-737a13cff807\") " pod="openstack/nova-api-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.815035 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0153100b-0ea2-47e5-a068-737a13cff807-config-data\") pod \"nova-api-0\" (UID: \"0153100b-0ea2-47e5-a068-737a13cff807\") " pod="openstack/nova-api-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.828452 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0153100b-0ea2-47e5-a068-737a13cff807-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0153100b-0ea2-47e5-a068-737a13cff807\") " pod="openstack/nova-api-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.842837 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.873476 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7877d89589-hz6lm"] Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.874239 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vdp7\" (UniqueName: \"kubernetes.io/projected/0153100b-0ea2-47e5-a068-737a13cff807-kube-api-access-4vdp7\") pod \"nova-api-0\" (UID: \"0153100b-0ea2-47e5-a068-737a13cff807\") " pod="openstack/nova-api-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.875920 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.886014 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-hz6lm"] Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.886868 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.907726 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4\") " pod="openstack/nova-scheduler-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.907820 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzb6g\" (UniqueName: \"kubernetes.io/projected/75da4b82-d9cb-4e38-a59c-938461a0ae0a-kube-api-access-tzb6g\") pod \"nova-metadata-0\" (UID: \"75da4b82-d9cb-4e38-a59c-938461a0ae0a\") " pod="openstack/nova-metadata-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.907874 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75da4b82-d9cb-4e38-a59c-938461a0ae0a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"75da4b82-d9cb-4e38-a59c-938461a0ae0a\") " pod="openstack/nova-metadata-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.907974 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wdks\" (UniqueName: \"kubernetes.io/projected/1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4-kube-api-access-5wdks\") pod \"nova-scheduler-0\" (UID: \"1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4\") " pod="openstack/nova-scheduler-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.908009 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4-config-data\") pod \"nova-scheduler-0\" (UID: \"1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4\") " pod="openstack/nova-scheduler-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.908060 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75da4b82-d9cb-4e38-a59c-938461a0ae0a-config-data\") pod \"nova-metadata-0\" (UID: \"75da4b82-d9cb-4e38-a59c-938461a0ae0a\") " pod="openstack/nova-metadata-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.908200 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75da4b82-d9cb-4e38-a59c-938461a0ae0a-logs\") pod \"nova-metadata-0\" (UID: \"75da4b82-d9cb-4e38-a59c-938461a0ae0a\") " pod="openstack/nova-metadata-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.911769 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.913466 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.939541 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Sep 29 10:01:33 crc kubenswrapper[4991]: I0929 10:01:33.964686 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.009857 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9282497b-f696-4547-8dc6-11f5a0c867ab-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9282497b-f696-4547-8dc6-11f5a0c867ab\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.009918 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w6zh\" (UniqueName: \"kubernetes.io/projected/9282497b-f696-4547-8dc6-11f5a0c867ab-kube-api-access-4w6zh\") pod \"nova-cell1-novncproxy-0\" (UID: \"9282497b-f696-4547-8dc6-11f5a0c867ab\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.009961 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4\") " pod="openstack/nova-scheduler-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.009990 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-hz6lm\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") " pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.010012 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-config\") pod \"dnsmasq-dns-7877d89589-hz6lm\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") " pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.010030 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mrh2\" (UniqueName: \"kubernetes.io/projected/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-kube-api-access-8mrh2\") pod \"dnsmasq-dns-7877d89589-hz6lm\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") " pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.010056 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzb6g\" (UniqueName: \"kubernetes.io/projected/75da4b82-d9cb-4e38-a59c-938461a0ae0a-kube-api-access-tzb6g\") pod \"nova-metadata-0\" (UID: \"75da4b82-d9cb-4e38-a59c-938461a0ae0a\") " pod="openstack/nova-metadata-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.010087 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75da4b82-d9cb-4e38-a59c-938461a0ae0a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"75da4b82-d9cb-4e38-a59c-938461a0ae0a\") " pod="openstack/nova-metadata-0" Sep 
29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.010137 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wdks\" (UniqueName: \"kubernetes.io/projected/1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4-kube-api-access-5wdks\") pod \"nova-scheduler-0\" (UID: \"1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4\") " pod="openstack/nova-scheduler-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.010155 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4-config-data\") pod \"nova-scheduler-0\" (UID: \"1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4\") " pod="openstack/nova-scheduler-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.010190 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75da4b82-d9cb-4e38-a59c-938461a0ae0a-config-data\") pod \"nova-metadata-0\" (UID: \"75da4b82-d9cb-4e38-a59c-938461a0ae0a\") " pod="openstack/nova-metadata-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.010210 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-dns-svc\") pod \"dnsmasq-dns-7877d89589-hz6lm\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") " pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.010231 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-hz6lm\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") " pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.010260 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-hz6lm\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") " pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.010314 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75da4b82-d9cb-4e38-a59c-938461a0ae0a-logs\") pod \"nova-metadata-0\" (UID: \"75da4b82-d9cb-4e38-a59c-938461a0ae0a\") " pod="openstack/nova-metadata-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.010329 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9282497b-f696-4547-8dc6-11f5a0c867ab-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9282497b-f696-4547-8dc6-11f5a0c867ab\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.014555 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75da4b82-d9cb-4e38-a59c-938461a0ae0a-logs\") pod \"nova-metadata-0\" (UID: \"75da4b82-d9cb-4e38-a59c-938461a0ae0a\") " pod="openstack/nova-metadata-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.027789 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4\") " pod="openstack/nova-scheduler-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.028764 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75da4b82-d9cb-4e38-a59c-938461a0ae0a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"75da4b82-d9cb-4e38-a59c-938461a0ae0a\") " pod="openstack/nova-metadata-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.047349 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75da4b82-d9cb-4e38-a59c-938461a0ae0a-config-data\") pod \"nova-metadata-0\" (UID: \"75da4b82-d9cb-4e38-a59c-938461a0ae0a\") " pod="openstack/nova-metadata-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.053519 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wdks\" (UniqueName: \"kubernetes.io/projected/1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4-kube-api-access-5wdks\") pod \"nova-scheduler-0\" (UID: \"1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4\") " pod="openstack/nova-scheduler-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.059831 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4-config-data\") pod \"nova-scheduler-0\" (UID: \"1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4\") " pod="openstack/nova-scheduler-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.106686 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzb6g\" (UniqueName: \"kubernetes.io/projected/75da4b82-d9cb-4e38-a59c-938461a0ae0a-kube-api-access-tzb6g\") pod \"nova-metadata-0\" (UID: \"75da4b82-d9cb-4e38-a59c-938461a0ae0a\") " pod="openstack/nova-metadata-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.128569 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-hz6lm\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") " pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.128642 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-config\") pod \"dnsmasq-dns-7877d89589-hz6lm\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") " pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.128670 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mrh2\" (UniqueName: \"kubernetes.io/projected/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-kube-api-access-8mrh2\") pod \"dnsmasq-dns-7877d89589-hz6lm\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") " pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.128822 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-dns-svc\") pod \"dnsmasq-dns-7877d89589-hz6lm\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") " pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:34 crc 
kubenswrapper[4991]: I0929 10:01:34.128854 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-hz6lm\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") " pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.128893 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-hz6lm\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") " pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.128979 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9282497b-f696-4547-8dc6-11f5a0c867ab-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9282497b-f696-4547-8dc6-11f5a0c867ab\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.129065 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9282497b-f696-4547-8dc6-11f5a0c867ab-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9282497b-f696-4547-8dc6-11f5a0c867ab\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.129125 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w6zh\" (UniqueName: \"kubernetes.io/projected/9282497b-f696-4547-8dc6-11f5a0c867ab-kube-api-access-4w6zh\") pod \"nova-cell1-novncproxy-0\" (UID: \"9282497b-f696-4547-8dc6-11f5a0c867ab\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.129581 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-hz6lm\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") " pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.130350 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-hz6lm\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") " pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.130960 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-hz6lm\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") " pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.131176 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-dns-svc\") pod \"dnsmasq-dns-7877d89589-hz6lm\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") " pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.133503 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-config\") pod \"dnsmasq-dns-7877d89589-hz6lm\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") " pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.149853 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9282497b-f696-4547-8dc6-11f5a0c867ab-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9282497b-f696-4547-8dc6-11f5a0c867ab\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.152165 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9282497b-f696-4547-8dc6-11f5a0c867ab-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9282497b-f696-4547-8dc6-11f5a0c867ab\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.164500 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w6zh\" (UniqueName: \"kubernetes.io/projected/9282497b-f696-4547-8dc6-11f5a0c867ab-kube-api-access-4w6zh\") pod \"nova-cell1-novncproxy-0\" (UID: \"9282497b-f696-4547-8dc6-11f5a0c867ab\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.167192 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mrh2\" (UniqueName: \"kubernetes.io/projected/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-kube-api-access-8mrh2\") pod \"dnsmasq-dns-7877d89589-hz6lm\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") " pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.315037 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.374501 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.379781 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.425713 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.974008 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8q4rd"] Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.977280 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8q4rd" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.981708 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.981940 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Sep 29 10:01:34 crc kubenswrapper[4991]: I0929 10:01:34.987276 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8q4rd"] Sep 29 10:01:35 crc kubenswrapper[4991]: I0929 10:01:35.155618 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f93a627-850e-4991-aa3f-82372989186d-scripts\") pod \"nova-cell1-conductor-db-sync-8q4rd\" (UID: \"9f93a627-850e-4991-aa3f-82372989186d\") " pod="openstack/nova-cell1-conductor-db-sync-8q4rd" Sep 29 10:01:35 crc kubenswrapper[4991]: I0929 10:01:35.156008 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f93a627-850e-4991-aa3f-82372989186d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8q4rd\" (UID: \"9f93a627-850e-4991-aa3f-82372989186d\") " pod="openstack/nova-cell1-conductor-db-sync-8q4rd" Sep 29 10:01:35 crc kubenswrapper[4991]: I0929 10:01:35.156223 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f93a627-850e-4991-aa3f-82372989186d-config-data\") pod \"nova-cell1-conductor-db-sync-8q4rd\" (UID: \"9f93a627-850e-4991-aa3f-82372989186d\") " pod="openstack/nova-cell1-conductor-db-sync-8q4rd" Sep 29 10:01:35 crc kubenswrapper[4991]: I0929 10:01:35.156275 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z4r4\" (UniqueName: \"kubernetes.io/projected/9f93a627-850e-4991-aa3f-82372989186d-kube-api-access-7z4r4\") pod \"nova-cell1-conductor-db-sync-8q4rd\" (UID: \"9f93a627-850e-4991-aa3f-82372989186d\") " pod="openstack/nova-cell1-conductor-db-sync-8q4rd" Sep 29 10:01:35 crc kubenswrapper[4991]: I0929 10:01:35.269118 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f93a627-850e-4991-aa3f-82372989186d-config-data\") pod \"nova-cell1-conductor-db-sync-8q4rd\" (UID: \"9f93a627-850e-4991-aa3f-82372989186d\") " pod="openstack/nova-cell1-conductor-db-sync-8q4rd" Sep 29 10:01:35 crc kubenswrapper[4991]: I0929 10:01:35.269214 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z4r4\" (UniqueName: \"kubernetes.io/projected/9f93a627-850e-4991-aa3f-82372989186d-kube-api-access-7z4r4\") pod \"nova-cell1-conductor-db-sync-8q4rd\" (UID: \"9f93a627-850e-4991-aa3f-82372989186d\") " pod="openstack/nova-cell1-conductor-db-sync-8q4rd" Sep 29 10:01:35 crc kubenswrapper[4991]: I0929 10:01:35.269457 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f93a627-850e-4991-aa3f-82372989186d-scripts\") pod \"nova-cell1-conductor-db-sync-8q4rd\" (UID: \"9f93a627-850e-4991-aa3f-82372989186d\") " pod="openstack/nova-cell1-conductor-db-sync-8q4rd" Sep 29 10:01:35 crc kubenswrapper[4991]: I0929 10:01:35.269515 4991 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f93a627-850e-4991-aa3f-82372989186d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8q4rd\" (UID: \"9f93a627-850e-4991-aa3f-82372989186d\") " pod="openstack/nova-cell1-conductor-db-sync-8q4rd" Sep 29 10:01:35 crc kubenswrapper[4991]: I0929 10:01:35.284632 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f93a627-850e-4991-aa3f-82372989186d-scripts\") pod \"nova-cell1-conductor-db-sync-8q4rd\" (UID: \"9f93a627-850e-4991-aa3f-82372989186d\") " pod="openstack/nova-cell1-conductor-db-sync-8q4rd" Sep 29 10:01:35 crc kubenswrapper[4991]: I0929 10:01:35.287761 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f93a627-850e-4991-aa3f-82372989186d-config-data\") pod \"nova-cell1-conductor-db-sync-8q4rd\" (UID: \"9f93a627-850e-4991-aa3f-82372989186d\") " pod="openstack/nova-cell1-conductor-db-sync-8q4rd" Sep 29 10:01:35 crc kubenswrapper[4991]: I0929 10:01:35.303707 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f93a627-850e-4991-aa3f-82372989186d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8q4rd\" (UID: \"9f93a627-850e-4991-aa3f-82372989186d\") " pod="openstack/nova-cell1-conductor-db-sync-8q4rd" Sep 29 10:01:35 crc kubenswrapper[4991]: I0929 10:01:35.310148 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z4r4\" (UniqueName: \"kubernetes.io/projected/9f93a627-850e-4991-aa3f-82372989186d-kube-api-access-7z4r4\") pod \"nova-cell1-conductor-db-sync-8q4rd\" (UID: \"9f93a627-850e-4991-aa3f-82372989186d\") " pod="openstack/nova-cell1-conductor-db-sync-8q4rd" Sep 29 10:01:35 crc kubenswrapper[4991]: I0929 10:01:35.333462 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8q4rd" Sep 29 10:01:35 crc kubenswrapper[4991]: I0929 10:01:35.369987 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 29 10:01:35 crc kubenswrapper[4991]: I0929 10:01:35.609977 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-j7ntb" event={"ID":"acacb281-5dfd-4e31-b340-a8f6a7950f9b","Type":"ContainerStarted","Data":"8b4ca2db53e3860779be768a0e5561a21a9ae21ce452ca34b73fc182038a5f4b"} Sep 29 10:01:35 crc kubenswrapper[4991]: I0929 10:01:35.613312 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4","Type":"ContainerStarted","Data":"ec63e14cafa0916924d673f410b79dc0809726867dabbae96dc9bcf388aa3dc2"} Sep 29 10:01:35 crc kubenswrapper[4991]: I0929 10:01:35.645996 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-j7ntb" podStartSLOduration=2.474077477 podStartE2EDuration="8.645972321s" podCreationTimestamp="2025-09-29 10:01:27 +0000 UTC" firstStartedPulling="2025-09-29 10:01:28.686340567 +0000 UTC m=+1424.542268615" lastFinishedPulling="2025-09-29 10:01:34.858235431 +0000 UTC m=+1430.714163459" observedRunningTime="2025-09-29 10:01:35.644936163 +0000 UTC m=+1431.500864191" watchObservedRunningTime="2025-09-29 10:01:35.645972321 +0000 UTC m=+1431.501900359" Sep 29 10:01:35 crc kubenswrapper[4991]: I0929 10:01:35.865051 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7qnxc"] Sep 29 10:01:35 crc kubenswrapper[4991]: W0929 10:01:35.870737 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25df2bc4_583e_4075_9ed2_49128b9b8d2f.slice/crio-fc9d6a62fbd54c1789ca5e86fd547b69f50cc923c282f37cb205bee7708ea69a WatchSource:0}: Error finding container fc9d6a62fbd54c1789ca5e86fd547b69f50cc923c282f37cb205bee7708ea69a: Status 404 returned error can't find the container with id fc9d6a62fbd54c1789ca5e86fd547b69f50cc923c282f37cb205bee7708ea69a Sep 29 10:01:35 crc kubenswrapper[4991]: I0929 10:01:35.891325 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 29 10:01:35 crc kubenswrapper[4991]: W0929 10:01:35.893396 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0153100b_0ea2_47e5_a068_737a13cff807.slice/crio-2e9046474c441c3790df0271ab48df25d0896c0b864f9d0e53c3b97f0f7945e0 WatchSource:0}: Error finding container 2e9046474c441c3790df0271ab48df25d0896c0b864f9d0e53c3b97f0f7945e0: Status 404 returned error can't find the container with id 2e9046474c441c3790df0271ab48df25d0896c0b864f9d0e53c3b97f0f7945e0 Sep 29 10:01:35 crc kubenswrapper[4991]: I0929 10:01:35.913001 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 29 10:01:36 crc kubenswrapper[4991]: I0929 10:01:36.169262 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 29 10:01:36 crc kubenswrapper[4991]: I0929 10:01:36.188868 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-hz6lm"] Sep 29 10:01:36 crc kubenswrapper[4991]: I0929 10:01:36.276462 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8q4rd"] Sep 29 10:01:36 crc kubenswrapper[4991]: I0929 10:01:36.644281 
4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9282497b-f696-4547-8dc6-11f5a0c867ab","Type":"ContainerStarted","Data":"f3ddd89d44f47e6d0c72d91076d20d49e2fc33865795edcf8e3a2ac51a9273b1"} Sep 29 10:01:36 crc kubenswrapper[4991]: I0929 10:01:36.650794 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0153100b-0ea2-47e5-a068-737a13cff807","Type":"ContainerStarted","Data":"2e9046474c441c3790df0271ab48df25d0896c0b864f9d0e53c3b97f0f7945e0"} Sep 29 10:01:36 crc kubenswrapper[4991]: I0929 10:01:36.655147 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"75da4b82-d9cb-4e38-a59c-938461a0ae0a","Type":"ContainerStarted","Data":"9d638d0d8bd528e9c55054eed1fd45b5573abffa170e92a6f225bec9e090f988"} Sep 29 10:01:36 crc kubenswrapper[4991]: I0929 10:01:36.665050 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-hz6lm" event={"ID":"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0","Type":"ContainerStarted","Data":"c9c80948b918aed7a515ae57b9ef5fbe7e8f4f04e9debba67828554afac3fcc4"} Sep 29 10:01:36 crc kubenswrapper[4991]: I0929 10:01:36.669307 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7qnxc" event={"ID":"25df2bc4-583e-4075-9ed2-49128b9b8d2f","Type":"ContainerStarted","Data":"75086cb08f12ef1d971e3d30cb2d54c8bf8cc2eb283fedd47e1b88a84c83f049"} Sep 29 10:01:36 crc kubenswrapper[4991]: I0929 10:01:36.669360 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7qnxc" event={"ID":"25df2bc4-583e-4075-9ed2-49128b9b8d2f","Type":"ContainerStarted","Data":"fc9d6a62fbd54c1789ca5e86fd547b69f50cc923c282f37cb205bee7708ea69a"} Sep 29 10:01:36 crc kubenswrapper[4991]: I0929 10:01:36.672932 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8q4rd" event={"ID":"9f93a627-850e-4991-aa3f-82372989186d","Type":"ContainerStarted","Data":"8bb4752652670b2b4f9c97254ce57df632c8be9ce3fe3f02a19157684cb98f23"} Sep 29 10:01:36 crc kubenswrapper[4991]: I0929 10:01:36.672980 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8q4rd" event={"ID":"9f93a627-850e-4991-aa3f-82372989186d","Type":"ContainerStarted","Data":"a97c59be76a7f218296806c6581e6efd3b9a54d4b51473455023cd6a6f90a0a9"} Sep 29 10:01:36 crc kubenswrapper[4991]: I0929 10:01:36.700444 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-7qnxc" podStartSLOduration=3.700420956 podStartE2EDuration="3.700420956s" podCreationTimestamp="2025-09-29 10:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:01:36.692737194 +0000 UTC m=+1432.548665222" watchObservedRunningTime="2025-09-29 10:01:36.700420956 +0000 UTC m=+1432.556348984" Sep 29 10:01:36 crc kubenswrapper[4991]: I0929 10:01:36.760785 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-8q4rd" podStartSLOduration=2.760763121 podStartE2EDuration="2.760763121s" podCreationTimestamp="2025-09-29 10:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:01:36.713715775 +0000 UTC m=+1432.569643803" watchObservedRunningTime="2025-09-29 10:01:36.760763121 
+0000 UTC m=+1432.616691149" Sep 29 10:01:37 crc kubenswrapper[4991]: I0929 10:01:37.692375 4991 generic.go:334] "Generic (PLEG): container finished" podID="8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0" containerID="8bfd82ba0a59c654ada289b94c740fe9e31c0f28fb3c6aac1407dc74ec5554a2" exitCode=0 Sep 29 10:01:37 crc kubenswrapper[4991]: I0929 10:01:37.692447 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-hz6lm" event={"ID":"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0","Type":"ContainerDied","Data":"8bfd82ba0a59c654ada289b94c740fe9e31c0f28fb3c6aac1407dc74ec5554a2"} Sep 29 10:01:37 crc kubenswrapper[4991]: I0929 10:01:37.729344 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Sep 29 10:01:37 crc kubenswrapper[4991]: I0929 10:01:37.790196 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 29 10:01:37 crc kubenswrapper[4991]: I0929 10:01:37.946909 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:01:37 crc kubenswrapper[4991]: I0929 10:01:37.947330 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:01:38 crc kubenswrapper[4991]: I0929 10:01:38.721606 4991 generic.go:334] "Generic (PLEG): container finished" podID="acacb281-5dfd-4e31-b340-a8f6a7950f9b" containerID="8b4ca2db53e3860779be768a0e5561a21a9ae21ce452ca34b73fc182038a5f4b" exitCode=0 Sep 29 10:01:38 crc kubenswrapper[4991]: I0929 10:01:38.721858 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-j7ntb" event={"ID":"acacb281-5dfd-4e31-b340-a8f6a7950f9b","Type":"ContainerDied","Data":"8b4ca2db53e3860779be768a0e5561a21a9ae21ce452ca34b73fc182038a5f4b"} Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.170412 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-j7ntb" Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.320309 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74g67\" (UniqueName: \"kubernetes.io/projected/acacb281-5dfd-4e31-b340-a8f6a7950f9b-kube-api-access-74g67\") pod \"acacb281-5dfd-4e31-b340-a8f6a7950f9b\" (UID: \"acacb281-5dfd-4e31-b340-a8f6a7950f9b\") " Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.320402 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acacb281-5dfd-4e31-b340-a8f6a7950f9b-scripts\") pod \"acacb281-5dfd-4e31-b340-a8f6a7950f9b\" (UID: \"acacb281-5dfd-4e31-b340-a8f6a7950f9b\") " Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.320454 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acacb281-5dfd-4e31-b340-a8f6a7950f9b-combined-ca-bundle\") pod \"acacb281-5dfd-4e31-b340-a8f6a7950f9b\" (UID: \"acacb281-5dfd-4e31-b340-a8f6a7950f9b\") " Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.320527 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acacb281-5dfd-4e31-b340-a8f6a7950f9b-config-data\") pod \"acacb281-5dfd-4e31-b340-a8f6a7950f9b\" (UID: \"acacb281-5dfd-4e31-b340-a8f6a7950f9b\") " Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.325304 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acacb281-5dfd-4e31-b340-a8f6a7950f9b-scripts" (OuterVolumeSpecName: "scripts") pod "acacb281-5dfd-4e31-b340-a8f6a7950f9b" (UID: "acacb281-5dfd-4e31-b340-a8f6a7950f9b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.325907 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acacb281-5dfd-4e31-b340-a8f6a7950f9b-kube-api-access-74g67" (OuterVolumeSpecName: "kube-api-access-74g67") pod "acacb281-5dfd-4e31-b340-a8f6a7950f9b" (UID: "acacb281-5dfd-4e31-b340-a8f6a7950f9b"). InnerVolumeSpecName "kube-api-access-74g67". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.355054 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acacb281-5dfd-4e31-b340-a8f6a7950f9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acacb281-5dfd-4e31-b340-a8f6a7950f9b" (UID: "acacb281-5dfd-4e31-b340-a8f6a7950f9b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.363411 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acacb281-5dfd-4e31-b340-a8f6a7950f9b-config-data" (OuterVolumeSpecName: "config-data") pod "acacb281-5dfd-4e31-b340-a8f6a7950f9b" (UID: "acacb281-5dfd-4e31-b340-a8f6a7950f9b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.422926 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74g67\" (UniqueName: \"kubernetes.io/projected/acacb281-5dfd-4e31-b340-a8f6a7950f9b-kube-api-access-74g67\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.423176 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acacb281-5dfd-4e31-b340-a8f6a7950f9b-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.423236 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acacb281-5dfd-4e31-b340-a8f6a7950f9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.423300 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acacb281-5dfd-4e31-b340-a8f6a7950f9b-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.744385 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4","Type":"ContainerStarted","Data":"6619ff28d4494d87e92597ba8a24e71d4b33db744d1f7a13f571758dce0e9d4c"} Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.749696 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"75da4b82-d9cb-4e38-a59c-938461a0ae0a","Type":"ContainerStarted","Data":"095a6198421e954758d70eed6e6426de05030f85c0ef4a31a85eb5600bb3f525"} Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.749746 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"75da4b82-d9cb-4e38-a59c-938461a0ae0a","Type":"ContainerStarted","Data":"6cf22130815496c7514a3260eeaeb592aefa8efbbc57e50e0a7f36697416efea"} Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.749877 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="75da4b82-d9cb-4e38-a59c-938461a0ae0a" containerName="nova-metadata-log" containerID="cri-o://6cf22130815496c7514a3260eeaeb592aefa8efbbc57e50e0a7f36697416efea" gracePeriod=30 Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.750183 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="75da4b82-d9cb-4e38-a59c-938461a0ae0a" containerName="nova-metadata-metadata" containerID="cri-o://095a6198421e954758d70eed6e6426de05030f85c0ef4a31a85eb5600bb3f525" gracePeriod=30 Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.753794 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-hz6lm" event={"ID":"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0","Type":"ContainerStarted","Data":"e331abf6d6040bd342ea69da8808cc724b420128a2423add4567e36145835da1"} Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.754825 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.756699 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9282497b-f696-4547-8dc6-11f5a0c867ab","Type":"ContainerStarted","Data":"f4b894a48caf5b2e180b37a0c45e45d382aa44b58d536b2b83a57a14562d1ccb"} Sep 29 10:01:40 
crc kubenswrapper[4991]: I0929 10:01:40.756828 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="9282497b-f696-4547-8dc6-11f5a0c867ab" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f4b894a48caf5b2e180b37a0c45e45d382aa44b58d536b2b83a57a14562d1ccb" gracePeriod=30 Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.776860 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.754445475 podStartE2EDuration="7.776837672s" podCreationTimestamp="2025-09-29 10:01:33 +0000 UTC" firstStartedPulling="2025-09-29 10:01:35.402153777 +0000 UTC m=+1431.258081805" lastFinishedPulling="2025-09-29 10:01:39.424545974 +0000 UTC m=+1435.280474002" observedRunningTime="2025-09-29 10:01:40.764094408 +0000 UTC m=+1436.620022466" watchObservedRunningTime="2025-09-29 10:01:40.776837672 +0000 UTC m=+1436.632765710" Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.780118 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0153100b-0ea2-47e5-a068-737a13cff807","Type":"ContainerStarted","Data":"4ba5f6a2709c35db22f25f02c77f300ad42e71bb86d725d4f3eecd4e5690ad0b"} Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.780194 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0153100b-0ea2-47e5-a068-737a13cff807","Type":"ContainerStarted","Data":"25a5235b77becca96b23f33742d51f5a827a35860ed01ae1c608e3448d06a462"} Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.803619 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-j7ntb" event={"ID":"acacb281-5dfd-4e31-b340-a8f6a7950f9b","Type":"ContainerDied","Data":"2bf6430f1ad6ce7105006291afb76431e4d483ec29bfe279917e3676d1349eaf"} Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.803672 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bf6430f1ad6ce7105006291afb76431e4d483ec29bfe279917e3676d1349eaf" Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.803762 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-j7ntb" Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.817748 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.575937681 podStartE2EDuration="7.817732306s" podCreationTimestamp="2025-09-29 10:01:33 +0000 UTC" firstStartedPulling="2025-09-29 10:01:36.190364349 +0000 UTC m=+1432.046292387" lastFinishedPulling="2025-09-29 10:01:39.432158984 +0000 UTC m=+1435.288087012" observedRunningTime="2025-09-29 10:01:40.792155255 +0000 UTC m=+1436.648083283" watchObservedRunningTime="2025-09-29 10:01:40.817732306 +0000 UTC m=+1436.673660334" Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.840694 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7877d89589-hz6lm" podStartSLOduration=7.840676909 podStartE2EDuration="7.840676909s" podCreationTimestamp="2025-09-29 10:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:01:40.809385167 +0000 UTC m=+1436.665313205" watchObservedRunningTime="2025-09-29 10:01:40.840676909 +0000 UTC m=+1436.696604937" Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.845117 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.309603395 podStartE2EDuration="7.845100775s" podCreationTimestamp="2025-09-29 10:01:33 +0000 UTC" firstStartedPulling="2025-09-29 10:01:35.889591769 +0000 UTC m=+1431.745519797" lastFinishedPulling="2025-09-29 10:01:39.425089139 +0000 UTC m=+1435.281017177" observedRunningTime="2025-09-29 10:01:40.823640012 +0000 UTC m=+1436.679568060" watchObservedRunningTime="2025-09-29 10:01:40.845100775 +0000 UTC m=+1436.701028803" Sep 29 10:01:40 crc kubenswrapper[4991]: I0929 10:01:40.869711 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.33298183 podStartE2EDuration="7.869690101s" podCreationTimestamp="2025-09-29 10:01:33 +0000 UTC" firstStartedPulling="2025-09-29 10:01:35.901200814 +0000 UTC m=+1431.757128842" lastFinishedPulling="2025-09-29 10:01:39.437909085 +0000 UTC m=+1435.293837113" observedRunningTime="2025-09-29 10:01:40.854569374 +0000 UTC m=+1436.710497422" watchObservedRunningTime="2025-09-29 10:01:40.869690101 +0000 UTC m=+1436.725618129" Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.701880 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.775877 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.815415 4991 generic.go:334] "Generic (PLEG): container finished" podID="75da4b82-d9cb-4e38-a59c-938461a0ae0a" containerID="095a6198421e954758d70eed6e6426de05030f85c0ef4a31a85eb5600bb3f525" exitCode=0 Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.815459 4991 generic.go:334] "Generic (PLEG): container finished" podID="75da4b82-d9cb-4e38-a59c-938461a0ae0a" containerID="6cf22130815496c7514a3260eeaeb592aefa8efbbc57e50e0a7f36697416efea" exitCode=143 Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.816624 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.817089 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"75da4b82-d9cb-4e38-a59c-938461a0ae0a","Type":"ContainerDied","Data":"095a6198421e954758d70eed6e6426de05030f85c0ef4a31a85eb5600bb3f525"} Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.817135 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"75da4b82-d9cb-4e38-a59c-938461a0ae0a","Type":"ContainerDied","Data":"6cf22130815496c7514a3260eeaeb592aefa8efbbc57e50e0a7f36697416efea"} Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.817149 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"75da4b82-d9cb-4e38-a59c-938461a0ae0a","Type":"ContainerDied","Data":"9d638d0d8bd528e9c55054eed1fd45b5573abffa170e92a6f225bec9e090f988"} Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.817165 4991 scope.go:117] "RemoveContainer" containerID="095a6198421e954758d70eed6e6426de05030f85c0ef4a31a85eb5600bb3f525" Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.865010 4991 scope.go:117] "RemoveContainer" containerID="6cf22130815496c7514a3260eeaeb592aefa8efbbc57e50e0a7f36697416efea" Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.873178 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75da4b82-d9cb-4e38-a59c-938461a0ae0a-combined-ca-bundle\") pod \"75da4b82-d9cb-4e38-a59c-938461a0ae0a\" (UID: \"75da4b82-d9cb-4e38-a59c-938461a0ae0a\") " Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.873305 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75da4b82-d9cb-4e38-a59c-938461a0ae0a-logs\") pod \"75da4b82-d9cb-4e38-a59c-938461a0ae0a\" (UID: \"75da4b82-d9cb-4e38-a59c-938461a0ae0a\") " Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.873391 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzb6g\" (UniqueName: \"kubernetes.io/projected/75da4b82-d9cb-4e38-a59c-938461a0ae0a-kube-api-access-tzb6g\") pod \"75da4b82-d9cb-4e38-a59c-938461a0ae0a\" (UID: \"75da4b82-d9cb-4e38-a59c-938461a0ae0a\") " Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.873424 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75da4b82-d9cb-4e38-a59c-938461a0ae0a-config-data\") pod \"75da4b82-d9cb-4e38-a59c-938461a0ae0a\" (UID: \"75da4b82-d9cb-4e38-a59c-938461a0ae0a\") " Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.874387 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75da4b82-d9cb-4e38-a59c-938461a0ae0a-logs" (OuterVolumeSpecName: "logs") pod "75da4b82-d9cb-4e38-a59c-938461a0ae0a" (UID: "75da4b82-d9cb-4e38-a59c-938461a0ae0a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.875381 4991 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75da4b82-d9cb-4e38-a59c-938461a0ae0a-logs\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.879605 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75da4b82-d9cb-4e38-a59c-938461a0ae0a-kube-api-access-tzb6g" (OuterVolumeSpecName: "kube-api-access-tzb6g") pod "75da4b82-d9cb-4e38-a59c-938461a0ae0a" (UID: "75da4b82-d9cb-4e38-a59c-938461a0ae0a"). InnerVolumeSpecName "kube-api-access-tzb6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.887226 4991 scope.go:117] "RemoveContainer" containerID="095a6198421e954758d70eed6e6426de05030f85c0ef4a31a85eb5600bb3f525" Sep 29 10:01:41 crc kubenswrapper[4991]: E0929 10:01:41.889385 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"095a6198421e954758d70eed6e6426de05030f85c0ef4a31a85eb5600bb3f525\": container with ID starting with 095a6198421e954758d70eed6e6426de05030f85c0ef4a31a85eb5600bb3f525 not found: ID does not exist" containerID="095a6198421e954758d70eed6e6426de05030f85c0ef4a31a85eb5600bb3f525" Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.889439 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"095a6198421e954758d70eed6e6426de05030f85c0ef4a31a85eb5600bb3f525"} err="failed to get container status \"095a6198421e954758d70eed6e6426de05030f85c0ef4a31a85eb5600bb3f525\": rpc error: code = NotFound desc = could not find container \"095a6198421e954758d70eed6e6426de05030f85c0ef4a31a85eb5600bb3f525\": container with ID starting with 095a6198421e954758d70eed6e6426de05030f85c0ef4a31a85eb5600bb3f525 not found: ID does not exist" Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.889600 4991 scope.go:117] "RemoveContainer" containerID="6cf22130815496c7514a3260eeaeb592aefa8efbbc57e50e0a7f36697416efea" Sep 29 10:01:41 crc kubenswrapper[4991]: E0929 10:01:41.892270 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cf22130815496c7514a3260eeaeb592aefa8efbbc57e50e0a7f36697416efea\": container with ID starting with 6cf22130815496c7514a3260eeaeb592aefa8efbbc57e50e0a7f36697416efea not found: ID does not exist" containerID="6cf22130815496c7514a3260eeaeb592aefa8efbbc57e50e0a7f36697416efea" Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.892336 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cf22130815496c7514a3260eeaeb592aefa8efbbc57e50e0a7f36697416efea"} err="failed to get container status \"6cf22130815496c7514a3260eeaeb592aefa8efbbc57e50e0a7f36697416efea\": rpc error: code = NotFound desc = could not find container \"6cf22130815496c7514a3260eeaeb592aefa8efbbc57e50e0a7f36697416efea\": container with ID starting with 6cf22130815496c7514a3260eeaeb592aefa8efbbc57e50e0a7f36697416efea not found: ID does not exist" Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.892363 4991 scope.go:117] "RemoveContainer" containerID="095a6198421e954758d70eed6e6426de05030f85c0ef4a31a85eb5600bb3f525" Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.893165 4991 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"095a6198421e954758d70eed6e6426de05030f85c0ef4a31a85eb5600bb3f525"} err="failed to get container status \"095a6198421e954758d70eed6e6426de05030f85c0ef4a31a85eb5600bb3f525\": rpc error: code = NotFound desc = could not find container \"095a6198421e954758d70eed6e6426de05030f85c0ef4a31a85eb5600bb3f525\": container with ID starting with 095a6198421e954758d70eed6e6426de05030f85c0ef4a31a85eb5600bb3f525 not found: ID does not exist" Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.893185 4991 scope.go:117] "RemoveContainer" containerID="6cf22130815496c7514a3260eeaeb592aefa8efbbc57e50e0a7f36697416efea" Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.896153 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cf22130815496c7514a3260eeaeb592aefa8efbbc57e50e0a7f36697416efea"} err="failed to get container status \"6cf22130815496c7514a3260eeaeb592aefa8efbbc57e50e0a7f36697416efea\": rpc error: code = NotFound desc = could not find container \"6cf22130815496c7514a3260eeaeb592aefa8efbbc57e50e0a7f36697416efea\": container with ID starting with 6cf22130815496c7514a3260eeaeb592aefa8efbbc57e50e0a7f36697416efea not found: ID does not exist" Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.913629 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75da4b82-d9cb-4e38-a59c-938461a0ae0a-config-data" (OuterVolumeSpecName: "config-data") pod "75da4b82-d9cb-4e38-a59c-938461a0ae0a" (UID: "75da4b82-d9cb-4e38-a59c-938461a0ae0a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.913763 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75da4b82-d9cb-4e38-a59c-938461a0ae0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75da4b82-d9cb-4e38-a59c-938461a0ae0a" (UID: "75da4b82-d9cb-4e38-a59c-938461a0ae0a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.977975 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75da4b82-d9cb-4e38-a59c-938461a0ae0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.978014 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzb6g\" (UniqueName: \"kubernetes.io/projected/75da4b82-d9cb-4e38-a59c-938461a0ae0a-kube-api-access-tzb6g\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:41 crc kubenswrapper[4991]: I0929 10:01:41.978025 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75da4b82-d9cb-4e38-a59c-938461a0ae0a-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.189538 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.202121 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.214134 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Sep 29 10:01:42 crc kubenswrapper[4991]: E0929 10:01:42.214822 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acacb281-5dfd-4e31-b340-a8f6a7950f9b" containerName="aodh-db-sync" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.214847 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="acacb281-5dfd-4e31-b340-a8f6a7950f9b" containerName="aodh-db-sync" Sep 29 10:01:42 crc kubenswrapper[4991]: E0929 10:01:42.214871 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75da4b82-d9cb-4e38-a59c-938461a0ae0a" containerName="nova-metadata-metadata" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.214880 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="75da4b82-d9cb-4e38-a59c-938461a0ae0a" containerName="nova-metadata-metadata" Sep 29 10:01:42 crc kubenswrapper[4991]: E0929 10:01:42.214901 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75da4b82-d9cb-4e38-a59c-938461a0ae0a" containerName="nova-metadata-log" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.214909 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="75da4b82-d9cb-4e38-a59c-938461a0ae0a" containerName="nova-metadata-log" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.215249 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="75da4b82-d9cb-4e38-a59c-938461a0ae0a" containerName="nova-metadata-metadata" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.215275 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="75da4b82-d9cb-4e38-a59c-938461a0ae0a" containerName="nova-metadata-log" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.215298 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="acacb281-5dfd-4e31-b340-a8f6a7950f9b" containerName="aodh-db-sync" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.216913 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.223648 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.223923 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.235056 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.285155 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/894baba8-a990-4689-99b1-3d690c50b974-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"894baba8-a990-4689-99b1-3d690c50b974\") " pod="openstack/nova-metadata-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.285252 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/894baba8-a990-4689-99b1-3d690c50b974-logs\") pod \"nova-metadata-0\" (UID: \"894baba8-a990-4689-99b1-3d690c50b974\") " pod="openstack/nova-metadata-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.285316 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/894baba8-a990-4689-99b1-3d690c50b974-config-data\") pod \"nova-metadata-0\" (UID: \"894baba8-a990-4689-99b1-3d690c50b974\") " pod="openstack/nova-metadata-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.285429 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzfqq\" (UniqueName: \"kubernetes.io/projected/894baba8-a990-4689-99b1-3d690c50b974-kube-api-access-pzfqq\") pod \"nova-metadata-0\" (UID: \"894baba8-a990-4689-99b1-3d690c50b974\") " pod="openstack/nova-metadata-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.285456 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/894baba8-a990-4689-99b1-3d690c50b974-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"894baba8-a990-4689-99b1-3d690c50b974\") " pod="openstack/nova-metadata-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.387126 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/894baba8-a990-4689-99b1-3d690c50b974-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"894baba8-a990-4689-99b1-3d690c50b974\") " pod="openstack/nova-metadata-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.387478 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/894baba8-a990-4689-99b1-3d690c50b974-logs\") pod \"nova-metadata-0\" (UID: \"894baba8-a990-4689-99b1-3d690c50b974\") " pod="openstack/nova-metadata-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.387685 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/894baba8-a990-4689-99b1-3d690c50b974-config-data\") pod \"nova-metadata-0\" (UID: \"894baba8-a990-4689-99b1-3d690c50b974\") " pod="openstack/nova-metadata-0" Sep 29 10:01:42 crc 
kubenswrapper[4991]: I0929 10:01:42.387889 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/894baba8-a990-4689-99b1-3d690c50b974-logs\") pod \"nova-metadata-0\" (UID: \"894baba8-a990-4689-99b1-3d690c50b974\") " pod="openstack/nova-metadata-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.388018 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzfqq\" (UniqueName: \"kubernetes.io/projected/894baba8-a990-4689-99b1-3d690c50b974-kube-api-access-pzfqq\") pod \"nova-metadata-0\" (UID: \"894baba8-a990-4689-99b1-3d690c50b974\") " pod="openstack/nova-metadata-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.388138 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/894baba8-a990-4689-99b1-3d690c50b974-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"894baba8-a990-4689-99b1-3d690c50b974\") " pod="openstack/nova-metadata-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.392008 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/894baba8-a990-4689-99b1-3d690c50b974-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"894baba8-a990-4689-99b1-3d690c50b974\") " pod="openstack/nova-metadata-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.392115 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/894baba8-a990-4689-99b1-3d690c50b974-config-data\") pod \"nova-metadata-0\" (UID: \"894baba8-a990-4689-99b1-3d690c50b974\") " pod="openstack/nova-metadata-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.400640 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/894baba8-a990-4689-99b1-3d690c50b974-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"894baba8-a990-4689-99b1-3d690c50b974\") " pod="openstack/nova-metadata-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.414179 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzfqq\" (UniqueName: \"kubernetes.io/projected/894baba8-a990-4689-99b1-3d690c50b974-kube-api-access-pzfqq\") pod \"nova-metadata-0\" (UID: \"894baba8-a990-4689-99b1-3d690c50b974\") " pod="openstack/nova-metadata-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.515855 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.528486 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.533163 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.533209 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-qxcgx" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.533456 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.536536 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.537103 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.593048 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rr65\" (UniqueName: \"kubernetes.io/projected/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-kube-api-access-7rr65\") pod \"aodh-0\" (UID: \"ae1f4235-0ad5-4d8c-b84e-ebcbea326284\") " pod="openstack/aodh-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.593343 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-scripts\") pod \"aodh-0\" (UID: \"ae1f4235-0ad5-4d8c-b84e-ebcbea326284\") " pod="openstack/aodh-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.593509 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-combined-ca-bundle\") pod \"aodh-0\" (UID: \"ae1f4235-0ad5-4d8c-b84e-ebcbea326284\") " pod="openstack/aodh-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.593778 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-config-data\") pod \"aodh-0\" (UID: \"ae1f4235-0ad5-4d8c-b84e-ebcbea326284\") " pod="openstack/aodh-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.697546 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rr65\" (UniqueName: \"kubernetes.io/projected/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-kube-api-access-7rr65\") pod \"aodh-0\" (UID: \"ae1f4235-0ad5-4d8c-b84e-ebcbea326284\") " pod="openstack/aodh-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.697862 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-scripts\") pod \"aodh-0\" (UID: \"ae1f4235-0ad5-4d8c-b84e-ebcbea326284\") " pod="openstack/aodh-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.698052 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-combined-ca-bundle\") pod \"aodh-0\" (UID: \"ae1f4235-0ad5-4d8c-b84e-ebcbea326284\") " pod="openstack/aodh-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.699174 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-config-data\") pod \"aodh-0\" (UID: \"ae1f4235-0ad5-4d8c-b84e-ebcbea326284\") " pod="openstack/aodh-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.704198 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-config-data\") pod \"aodh-0\" (UID: \"ae1f4235-0ad5-4d8c-b84e-ebcbea326284\") " pod="openstack/aodh-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.704203 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-scripts\") pod \"aodh-0\" (UID: \"ae1f4235-0ad5-4d8c-b84e-ebcbea326284\") " pod="openstack/aodh-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.704209 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-combined-ca-bundle\") pod \"aodh-0\" (UID: \"ae1f4235-0ad5-4d8c-b84e-ebcbea326284\") " pod="openstack/aodh-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.720079 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rr65\" (UniqueName: \"kubernetes.io/projected/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-kube-api-access-7rr65\") pod \"aodh-0\" (UID: \"ae1f4235-0ad5-4d8c-b84e-ebcbea326284\") " pod="openstack/aodh-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.864030 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Sep 29 10:01:42 crc kubenswrapper[4991]: I0929 10:01:42.981739 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75da4b82-d9cb-4e38-a59c-938461a0ae0a" path="/var/lib/kubelet/pods/75da4b82-d9cb-4e38-a59c-938461a0ae0a/volumes" Sep 29 10:01:43 crc kubenswrapper[4991]: I0929 10:01:43.225900 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 29 10:01:43 crc kubenswrapper[4991]: W0929 10:01:43.230138 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod894baba8_a990_4689_99b1_3d690c50b974.slice/crio-cd5405477101b5780ffab9e0e2c0eb5631069ca5560ddc52e9f5b8460e994065 WatchSource:0}: Error finding container cd5405477101b5780ffab9e0e2c0eb5631069ca5560ddc52e9f5b8460e994065: Status 404 returned error can't find the container with id cd5405477101b5780ffab9e0e2c0eb5631069ca5560ddc52e9f5b8460e994065 Sep 29 10:01:43 crc kubenswrapper[4991]: W0929 10:01:43.789182 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae1f4235_0ad5_4d8c_b84e_ebcbea326284.slice/crio-293bc4a537d829532d3da2fc06dc95dd49606c8745650c6ed21ff1630fda1114 WatchSource:0}: Error finding container 293bc4a537d829532d3da2fc06dc95dd49606c8745650c6ed21ff1630fda1114: Status 404 returned error can't find the container with id 293bc4a537d829532d3da2fc06dc95dd49606c8745650c6ed21ff1630fda1114 Sep 29 10:01:43 crc kubenswrapper[4991]: I0929 10:01:43.790657 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Sep 29 10:01:43 crc kubenswrapper[4991]: I0929 10:01:43.887211 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 29 10:01:43 crc kubenswrapper[4991]: I0929 10:01:43.887516 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"894baba8-a990-4689-99b1-3d690c50b974","Type":"ContainerStarted","Data":"9e465b91d5dec1651d06fa8995f79e69ea3fbc9074d44c03acad2249069b7649"} Sep 29 10:01:43 crc kubenswrapper[4991]: I0929 10:01:43.887542 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"894baba8-a990-4689-99b1-3d690c50b974","Type":"ContainerStarted","Data":"cd5405477101b5780ffab9e0e2c0eb5631069ca5560ddc52e9f5b8460e994065"} Sep 29 10:01:43 crc kubenswrapper[4991]: I0929 10:01:43.887559 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Sep 29 10:01:43 crc kubenswrapper[4991]: I0929 10:01:43.889339 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ae1f4235-0ad5-4d8c-b84e-ebcbea326284","Type":"ContainerStarted","Data":"293bc4a537d829532d3da2fc06dc95dd49606c8745650c6ed21ff1630fda1114"} Sep 29 10:01:44 crc kubenswrapper[4991]: I0929 10:01:44.315399 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Sep 29 10:01:44 crc kubenswrapper[4991]: I0929 10:01:44.315435 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Sep 29 10:01:44 crc kubenswrapper[4991]: I0929 10:01:44.379165 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7877d89589-hz6lm" Sep 29 10:01:44 crc kubenswrapper[4991]: I0929 10:01:44.426682 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:01:44 crc kubenswrapper[4991]: I0929 10:01:44.439067 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Sep 29 10:01:44 crc kubenswrapper[4991]: I0929 10:01:44.527876 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-ssx5j"] Sep 29 10:01:44 crc kubenswrapper[4991]: I0929 10:01:44.528283 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" podUID="650279b3-4add-4845-b440-bde55c40efb7" containerName="dnsmasq-dns" containerID="cri-o://06e50afa93d73c3d2c825326d8901994bf8885edfa459c9a32c5fbb8e47cbc84" gracePeriod=10 Sep 29 10:01:44 crc kubenswrapper[4991]: I0929 10:01:44.973405 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0153100b-0ea2-47e5-a068-737a13cff807" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.240:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 29 10:01:44 crc kubenswrapper[4991]: I0929 10:01:44.973879 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0153100b-0ea2-47e5-a068-737a13cff807" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.240:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 29 10:01:44 crc kubenswrapper[4991]: I0929 10:01:44.987802 4991 generic.go:334] "Generic (PLEG): container finished" podID="650279b3-4add-4845-b440-bde55c40efb7" containerID="06e50afa93d73c3d2c825326d8901994bf8885edfa459c9a32c5fbb8e47cbc84" exitCode=0 Sep 29 10:01:44 crc kubenswrapper[4991]: I0929 10:01:44.997054 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"894baba8-a990-4689-99b1-3d690c50b974","Type":"ContainerStarted","Data":"697b7e3ee917657d7635e5d96733748144d1e13be56b531454c369586aa0dd0e"} Sep 29 10:01:44 crc kubenswrapper[4991]: I0929 10:01:44.997105 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" event={"ID":"650279b3-4add-4845-b440-bde55c40efb7","Type":"ContainerDied","Data":"06e50afa93d73c3d2c825326d8901994bf8885edfa459c9a32c5fbb8e47cbc84"} Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.116216 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.127042 4991 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.127422 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2fe1cd08-f374-4e04-b3d0-1135997d4b8f" containerName="ceilometer-central-agent" containerID="cri-o://fe17d62e3fd04d57d31901b12f0f1dec5273df5afd3e8bda91531bd16b93c31d" gracePeriod=30 Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.127562 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2fe1cd08-f374-4e04-b3d0-1135997d4b8f" containerName="sg-core" containerID="cri-o://ff57d18c828d0c5bc8679efc2c6b494eebaad0e41d9336593017ce51e349f37c" gracePeriod=30 Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.127600 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2fe1cd08-f374-4e04-b3d0-1135997d4b8f" containerName="ceilometer-notification-agent" containerID="cri-o://60cd1d9edcf76bd57ef8988422573553f0e89817a2ba0309b734032a6db16b39" gracePeriod=30 Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.127590 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2fe1cd08-f374-4e04-b3d0-1135997d4b8f" containerName="proxy-httpd" containerID="cri-o://da27c877354493c6fb6deb9bc693d4dcfb3fbf37a95d0f4f845cec07bcb6be80" gracePeriod=30 Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.201651 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.201634199 podStartE2EDuration="3.201634199s" podCreationTimestamp="2025-09-29 10:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:01:45.117417127 +0000 UTC m=+1440.973345155" watchObservedRunningTime="2025-09-29 10:01:45.201634199 +0000 UTC m=+1441.057562227" Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.354642 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.464005 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-ovsdbserver-nb\") pod \"650279b3-4add-4845-b440-bde55c40efb7\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.464083 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-ovsdbserver-sb\") pod \"650279b3-4add-4845-b440-bde55c40efb7\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.464199 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdswz\" (UniqueName: \"kubernetes.io/projected/650279b3-4add-4845-b440-bde55c40efb7-kube-api-access-xdswz\") pod \"650279b3-4add-4845-b440-bde55c40efb7\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.464221 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-dns-swift-storage-0\") pod \"650279b3-4add-4845-b440-bde55c40efb7\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.464293 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-dns-svc\") pod \"650279b3-4add-4845-b440-bde55c40efb7\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.464402 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-config\") pod \"650279b3-4add-4845-b440-bde55c40efb7\" (UID: \"650279b3-4add-4845-b440-bde55c40efb7\") " Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.473573 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/650279b3-4add-4845-b440-bde55c40efb7-kube-api-access-xdswz" (OuterVolumeSpecName: "kube-api-access-xdswz") pod "650279b3-4add-4845-b440-bde55c40efb7" (UID: "650279b3-4add-4845-b440-bde55c40efb7"). InnerVolumeSpecName "kube-api-access-xdswz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.548716 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "650279b3-4add-4845-b440-bde55c40efb7" (UID: "650279b3-4add-4845-b440-bde55c40efb7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.569877 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.569915 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdswz\" (UniqueName: \"kubernetes.io/projected/650279b3-4add-4845-b440-bde55c40efb7-kube-api-access-xdswz\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.570921 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-config" (OuterVolumeSpecName: "config") pod "650279b3-4add-4845-b440-bde55c40efb7" (UID: "650279b3-4add-4845-b440-bde55c40efb7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.573067 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "650279b3-4add-4845-b440-bde55c40efb7" (UID: "650279b3-4add-4845-b440-bde55c40efb7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.573395 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "650279b3-4add-4845-b440-bde55c40efb7" (UID: "650279b3-4add-4845-b440-bde55c40efb7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.585907 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "650279b3-4add-4845-b440-bde55c40efb7" (UID: "650279b3-4add-4845-b440-bde55c40efb7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.675917 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.675963 4991 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.675975 4991 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.675983 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650279b3-4add-4845-b440-bde55c40efb7-config\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:45 crc kubenswrapper[4991]: I0929 10:01:45.812634 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Sep 29 10:01:46 crc kubenswrapper[4991]: I0929 10:01:46.027095 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ae1f4235-0ad5-4d8c-b84e-ebcbea326284","Type":"ContainerStarted","Data":"dd81b1665a1d83895a217256b023e2464a027cd105b7977ad9336b0d21718920"} Sep 29 10:01:46 crc kubenswrapper[4991]: I0929 10:01:46.033111 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fe1cd08-f374-4e04-b3d0-1135997d4b8f","Type":"ContainerDied","Data":"da27c877354493c6fb6deb9bc693d4dcfb3fbf37a95d0f4f845cec07bcb6be80"} Sep 29 10:01:46 crc kubenswrapper[4991]: I0929 10:01:46.032812 4991 generic.go:334] "Generic (PLEG): container finished" podID="2fe1cd08-f374-4e04-b3d0-1135997d4b8f" containerID="da27c877354493c6fb6deb9bc693d4dcfb3fbf37a95d0f4f845cec07bcb6be80" exitCode=0 Sep 29 10:01:46 crc kubenswrapper[4991]: I0929 10:01:46.034398 4991 generic.go:334] "Generic (PLEG): container finished" podID="2fe1cd08-f374-4e04-b3d0-1135997d4b8f" containerID="ff57d18c828d0c5bc8679efc2c6b494eebaad0e41d9336593017ce51e349f37c" exitCode=2 Sep 29 10:01:46 crc kubenswrapper[4991]: I0929 10:01:46.034409 4991 generic.go:334] "Generic (PLEG): container finished" podID="2fe1cd08-f374-4e04-b3d0-1135997d4b8f" containerID="fe17d62e3fd04d57d31901b12f0f1dec5273df5afd3e8bda91531bd16b93c31d" exitCode=0 Sep 29 10:01:46 crc kubenswrapper[4991]: I0929 10:01:46.034460 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fe1cd08-f374-4e04-b3d0-1135997d4b8f","Type":"ContainerDied","Data":"ff57d18c828d0c5bc8679efc2c6b494eebaad0e41d9336593017ce51e349f37c"} Sep 29 10:01:46 crc kubenswrapper[4991]: I0929 10:01:46.034488 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fe1cd08-f374-4e04-b3d0-1135997d4b8f","Type":"ContainerDied","Data":"fe17d62e3fd04d57d31901b12f0f1dec5273df5afd3e8bda91531bd16b93c31d"} Sep 29 10:01:46 crc kubenswrapper[4991]: I0929 10:01:46.040552 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" Sep 29 10:01:46 crc kubenswrapper[4991]: I0929 10:01:46.041751 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-ssx5j" event={"ID":"650279b3-4add-4845-b440-bde55c40efb7","Type":"ContainerDied","Data":"354a0ca2f087e75ea43e149926067806e8402b62197ec2c5f1865adbd7119092"} Sep 29 10:01:46 crc kubenswrapper[4991]: I0929 10:01:46.041815 4991 scope.go:117] "RemoveContainer" containerID="06e50afa93d73c3d2c825326d8901994bf8885edfa459c9a32c5fbb8e47cbc84" Sep 29 10:01:46 crc kubenswrapper[4991]: I0929 10:01:46.100750 4991 scope.go:117] "RemoveContainer" containerID="9955714aeb643fff4485e8cef89bcdcd6fe095b0e49eaa737387657db4a5ba7c" Sep 29 10:01:46 crc kubenswrapper[4991]: I0929 10:01:46.107396 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-ssx5j"] Sep 29 10:01:46 crc kubenswrapper[4991]: I0929 10:01:46.122024 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-ssx5j"] Sep 29 10:01:46 crc kubenswrapper[4991]: I0929 10:01:46.970597 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="650279b3-4add-4845-b440-bde55c40efb7" path="/var/lib/kubelet/pods/650279b3-4add-4845-b440-bde55c40efb7/volumes" Sep 29 10:01:47 crc kubenswrapper[4991]: I0929 10:01:47.058842 4991 generic.go:334] "Generic (PLEG): container finished" podID="25df2bc4-583e-4075-9ed2-49128b9b8d2f" containerID="75086cb08f12ef1d971e3d30cb2d54c8bf8cc2eb283fedd47e1b88a84c83f049" exitCode=0 Sep 29 10:01:47 crc kubenswrapper[4991]: I0929 10:01:47.059072 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7qnxc" event={"ID":"25df2bc4-583e-4075-9ed2-49128b9b8d2f","Type":"ContainerDied","Data":"75086cb08f12ef1d971e3d30cb2d54c8bf8cc2eb283fedd47e1b88a84c83f049"} Sep 29 10:01:47 crc kubenswrapper[4991]: I0929 10:01:47.537270 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 29 10:01:47 crc kubenswrapper[4991]: I0929 10:01:47.537681 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 29 10:01:48 crc kubenswrapper[4991]: I0929 10:01:48.076705 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ae1f4235-0ad5-4d8c-b84e-ebcbea326284","Type":"ContainerStarted","Data":"9ec98a89ffd1cb8ca3baecf7840870d164ec0e67f37ca94396e47c85c0028ddf"} Sep 29 10:01:48 crc kubenswrapper[4991]: I0929 10:01:48.947770 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7qnxc" Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.071934 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25df2bc4-583e-4075-9ed2-49128b9b8d2f-config-data\") pod \"25df2bc4-583e-4075-9ed2-49128b9b8d2f\" (UID: \"25df2bc4-583e-4075-9ed2-49128b9b8d2f\") " Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.072369 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25df2bc4-583e-4075-9ed2-49128b9b8d2f-combined-ca-bundle\") pod \"25df2bc4-583e-4075-9ed2-49128b9b8d2f\" (UID: \"25df2bc4-583e-4075-9ed2-49128b9b8d2f\") " Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.072453 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpwrp\" (UniqueName: \"kubernetes.io/projected/25df2bc4-583e-4075-9ed2-49128b9b8d2f-kube-api-access-tpwrp\") pod \"25df2bc4-583e-4075-9ed2-49128b9b8d2f\" (UID: \"25df2bc4-583e-4075-9ed2-49128b9b8d2f\") " Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.072567 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25df2bc4-583e-4075-9ed2-49128b9b8d2f-scripts\") pod \"25df2bc4-583e-4075-9ed2-49128b9b8d2f\" (UID: \"25df2bc4-583e-4075-9ed2-49128b9b8d2f\") " Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.089013 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25df2bc4-583e-4075-9ed2-49128b9b8d2f-scripts" (OuterVolumeSpecName: "scripts") pod "25df2bc4-583e-4075-9ed2-49128b9b8d2f" (UID: "25df2bc4-583e-4075-9ed2-49128b9b8d2f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.100623 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25df2bc4-583e-4075-9ed2-49128b9b8d2f-kube-api-access-tpwrp" (OuterVolumeSpecName: "kube-api-access-tpwrp") pod "25df2bc4-583e-4075-9ed2-49128b9b8d2f" (UID: "25df2bc4-583e-4075-9ed2-49128b9b8d2f"). InnerVolumeSpecName "kube-api-access-tpwrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.113977 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ae1f4235-0ad5-4d8c-b84e-ebcbea326284","Type":"ContainerStarted","Data":"50e270cd455603b317af0e3357c24d0a0d996e9892604b84a21b10df081c6011"} Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.115927 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7qnxc" event={"ID":"25df2bc4-583e-4075-9ed2-49128b9b8d2f","Type":"ContainerDied","Data":"fc9d6a62fbd54c1789ca5e86fd547b69f50cc923c282f37cb205bee7708ea69a"} Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.115975 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc9d6a62fbd54c1789ca5e86fd547b69f50cc923c282f37cb205bee7708ea69a" Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.116038 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7qnxc" Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.123327 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25df2bc4-583e-4075-9ed2-49128b9b8d2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25df2bc4-583e-4075-9ed2-49128b9b8d2f" (UID: "25df2bc4-583e-4075-9ed2-49128b9b8d2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.152502 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25df2bc4-583e-4075-9ed2-49128b9b8d2f-config-data" (OuterVolumeSpecName: "config-data") pod "25df2bc4-583e-4075-9ed2-49128b9b8d2f" (UID: "25df2bc4-583e-4075-9ed2-49128b9b8d2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.176704 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25df2bc4-583e-4075-9ed2-49128b9b8d2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.176752 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpwrp\" (UniqueName: \"kubernetes.io/projected/25df2bc4-583e-4075-9ed2-49128b9b8d2f-kube-api-access-tpwrp\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.176763 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25df2bc4-583e-4075-9ed2-49128b9b8d2f-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.176771 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25df2bc4-583e-4075-9ed2-49128b9b8d2f-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.268296 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.269670 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0153100b-0ea2-47e5-a068-737a13cff807" containerName="nova-api-api" containerID="cri-o://4ba5f6a2709c35db22f25f02c77f300ad42e71bb86d725d4f3eecd4e5690ad0b" gracePeriod=30 Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.270778 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0153100b-0ea2-47e5-a068-737a13cff807" containerName="nova-api-log" containerID="cri-o://25a5235b77becca96b23f33742d51f5a827a35860ed01ae1c608e3448d06a462" gracePeriod=30 Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.293180 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.293479 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4" containerName="nova-scheduler-scheduler" containerID="cri-o://6619ff28d4494d87e92597ba8a24e71d4b33db744d1f7a13f571758dce0e9d4c" gracePeriod=30 Sep 29 10:01:49 crc kubenswrapper[4991]: E0929 10:01:49.320064 4991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec 
PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6619ff28d4494d87e92597ba8a24e71d4b33db744d1f7a13f571758dce0e9d4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.322505 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.322738 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="894baba8-a990-4689-99b1-3d690c50b974" containerName="nova-metadata-log" containerID="cri-o://9e465b91d5dec1651d06fa8995f79e69ea3fbc9074d44c03acad2249069b7649" gracePeriod=30 Sep 29 10:01:49 crc kubenswrapper[4991]: I0929 10:01:49.322915 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="894baba8-a990-4689-99b1-3d690c50b974" containerName="nova-metadata-metadata" containerID="cri-o://697b7e3ee917657d7635e5d96733748144d1e13be56b531454c369586aa0dd0e" gracePeriod=30 Sep 29 10:01:49 crc kubenswrapper[4991]: E0929 10:01:49.323763 4991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6619ff28d4494d87e92597ba8a24e71d4b33db744d1f7a13f571758dce0e9d4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Sep 29 10:01:49 crc kubenswrapper[4991]: E0929 10:01:49.325343 4991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6619ff28d4494d87e92597ba8a24e71d4b33db744d1f7a13f571758dce0e9d4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Sep 29 10:01:49 crc kubenswrapper[4991]: E0929 10:01:49.325407 4991 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4" containerName="nova-scheduler-scheduler" Sep 29 10:01:50 crc kubenswrapper[4991]: E0929 10:01:50.094832 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25df2bc4_583e_4075_9ed2_49128b9b8d2f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod650279b3_4add_4845_b440_bde55c40efb7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0153100b_0ea2_47e5_a068_737a13cff807.slice/crio-25a5235b77becca96b23f33742d51f5a827a35860ed01ae1c608e3448d06a462.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25df2bc4_583e_4075_9ed2_49128b9b8d2f.slice/crio-fc9d6a62fbd54c1789ca5e86fd547b69f50cc923c282f37cb205bee7708ea69a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod894baba8_a990_4689_99b1_3d690c50b974.slice/crio-conmon-9e465b91d5dec1651d06fa8995f79e69ea3fbc9074d44c03acad2249069b7649.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fe1cd08_f374_4e04_b3d0_1135997d4b8f.slice/crio-fe17d62e3fd04d57d31901b12f0f1dec5273df5afd3e8bda91531bd16b93c31d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fe1cd08_f374_4e04_b3d0_1135997d4b8f.slice/crio-conmon-da27c877354493c6fb6deb9bc693d4dcfb3fbf37a95d0f4f845cec07bcb6be80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod650279b3_4add_4845_b440_bde55c40efb7.slice/crio-354a0ca2f087e75ea43e149926067806e8402b62197ec2c5f1865adbd7119092\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod894baba8_a990_4689_99b1_3d690c50b974.slice/crio-697b7e3ee917657d7635e5d96733748144d1e13be56b531454c369586aa0dd0e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod894baba8_a990_4689_99b1_3d690c50b974.slice/crio-conmon-697b7e3ee917657d7635e5d96733748144d1e13be56b531454c369586aa0dd0e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fe1cd08_f374_4e04_b3d0_1135997d4b8f.slice/crio-conmon-60cd1d9edcf76bd57ef8988422573553f0e89817a2ba0309b734032a6db16b39.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f93a627_850e_4991_aa3f_82372989186d.slice/crio-conmon-8bb4752652670b2b4f9c97254ce57df632c8be9ce3fe3f02a19157684cb98f23.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25df2bc4_583e_4075_9ed2_49128b9b8d2f.slice/crio-conmon-75086cb08f12ef1d971e3d30cb2d54c8bf8cc2eb283fedd47e1b88a84c83f049.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25df2bc4_583e_4075_9ed2_49128b9b8d2f.slice/crio-75086cb08f12ef1d971e3d30cb2d54c8bf8cc2eb283fedd47e1b88a84c83f049.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fe1cd08_f374_4e04_b3d0_1135997d4b8f.slice/crio-60cd1d9edcf76bd57ef8988422573553f0e89817a2ba0309b734032a6db16b39.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fe1cd08_f374_4e04_b3d0_1135997d4b8f.slice/crio-da27c877354493c6fb6deb9bc693d4dcfb3fbf37a95d0f4f845cec07bcb6be80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f93a627_850e_4991_aa3f_82372989186d.slice/crio-8bb4752652670b2b4f9c97254ce57df632c8be9ce3fe3f02a19157684cb98f23.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fe1cd08_f374_4e04_b3d0_1135997d4b8f.slice/crio-conmon-fe17d62e3fd04d57d31901b12f0f1dec5273df5afd3e8bda91531bd16b93c31d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod894baba8_a990_4689_99b1_3d690c50b974.slice/crio-9e465b91d5dec1651d06fa8995f79e69ea3fbc9074d44c03acad2249069b7649.scope\": RecentStats: unable to find data in memory cache]" Sep 29 10:01:50 crc kubenswrapper[4991]: E0929 10:01:50.114760 4991 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fe1cd08_f374_4e04_b3d0_1135997d4b8f.slice/crio-60cd1d9edcf76bd57ef8988422573553f0e89817a2ba0309b734032a6db16b39.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fe1cd08_f374_4e04_b3d0_1135997d4b8f.slice/crio-da27c877354493c6fb6deb9bc693d4dcfb3fbf37a95d0f4f845cec07bcb6be80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fe1cd08_f374_4e04_b3d0_1135997d4b8f.slice/crio-conmon-da27c877354493c6fb6deb9bc693d4dcfb3fbf37a95d0f4f845cec07bcb6be80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fe1cd08_f374_4e04_b3d0_1135997d4b8f.slice/crio-fe17d62e3fd04d57d31901b12f0f1dec5273df5afd3e8bda91531bd16b93c31d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod894baba8_a990_4689_99b1_3d690c50b974.slice/crio-697b7e3ee917657d7635e5d96733748144d1e13be56b531454c369586aa0dd0e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25df2bc4_583e_4075_9ed2_49128b9b8d2f.slice/crio-conmon-75086cb08f12ef1d971e3d30cb2d54c8bf8cc2eb283fedd47e1b88a84c83f049.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25df2bc4_583e_4075_9ed2_49128b9b8d2f.slice/crio-75086cb08f12ef1d971e3d30cb2d54c8bf8cc2eb283fedd47e1b88a84c83f049.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0153100b_0ea2_47e5_a068_737a13cff807.slice/crio-25a5235b77becca96b23f33742d51f5a827a35860ed01ae1c608e3448d06a462.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0153100b_0ea2_47e5_a068_737a13cff807.slice/crio-conmon-25a5235b77becca96b23f33742d51f5a827a35860ed01ae1c608e3448d06a462.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fe1cd08_f374_4e04_b3d0_1135997d4b8f.slice/crio-conmon-fe17d62e3fd04d57d31901b12f0f1dec5273df5afd3e8bda91531bd16b93c31d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod650279b3_4add_4845_b440_bde55c40efb7.slice/crio-354a0ca2f087e75ea43e149926067806e8402b62197ec2c5f1865adbd7119092\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod650279b3_4add_4845_b440_bde55c40efb7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod894baba8_a990_4689_99b1_3d690c50b974.slice/crio-conmon-9e465b91d5dec1651d06fa8995f79e69ea3fbc9074d44c03acad2249069b7649.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fe1cd08_f374_4e04_b3d0_1135997d4b8f.slice/crio-conmon-60cd1d9edcf76bd57ef8988422573553f0e89817a2ba0309b734032a6db16b39.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod894baba8_a990_4689_99b1_3d690c50b974.slice/crio-conmon-697b7e3ee917657d7635e5d96733748144d1e13be56b531454c369586aa0dd0e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f93a627_850e_4991_aa3f_82372989186d.slice/crio-8bb4752652670b2b4f9c97254ce57df632c8be9ce3fe3f02a19157684cb98f23.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod894baba8_a990_4689_99b1_3d690c50b974.slice/crio-9e465b91d5dec1651d06fa8995f79e69ea3fbc9074d44c03acad2249069b7649.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25df2bc4_583e_4075_9ed2_49128b9b8d2f.slice/crio-fc9d6a62fbd54c1789ca5e86fd547b69f50cc923c282f37cb205bee7708ea69a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25df2bc4_583e_4075_9ed2_49128b9b8d2f.slice\": RecentStats: unable to find data in memory cache]" Sep 29 10:01:50 crc kubenswrapper[4991]: E0929 10:01:50.113760 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f93a627_850e_4991_aa3f_82372989186d.slice/crio-8bb4752652670b2b4f9c97254ce57df632c8be9ce3fe3f02a19157684cb98f23.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fe1cd08_f374_4e04_b3d0_1135997d4b8f.slice/crio-conmon-60cd1d9edcf76bd57ef8988422573553f0e89817a2ba0309b734032a6db16b39.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25df2bc4_583e_4075_9ed2_49128b9b8d2f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod894baba8_a990_4689_99b1_3d690c50b974.slice/crio-9e465b91d5dec1651d06fa8995f79e69ea3fbc9074d44c03acad2249069b7649.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f93a627_850e_4991_aa3f_82372989186d.slice/crio-conmon-8bb4752652670b2b4f9c97254ce57df632c8be9ce3fe3f02a19157684cb98f23.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod894baba8_a990_4689_99b1_3d690c50b974.slice/crio-conmon-9e465b91d5dec1651d06fa8995f79e69ea3fbc9074d44c03acad2249069b7649.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod894baba8_a990_4689_99b1_3d690c50b974.slice/crio-697b7e3ee917657d7635e5d96733748144d1e13be56b531454c369586aa0dd0e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod894baba8_a990_4689_99b1_3d690c50b974.slice/crio-conmon-697b7e3ee917657d7635e5d96733748144d1e13be56b531454c369586aa0dd0e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0153100b_0ea2_47e5_a068_737a13cff807.slice/crio-25a5235b77becca96b23f33742d51f5a827a35860ed01ae1c608e3448d06a462.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fe1cd08_f374_4e04_b3d0_1135997d4b8f.slice/crio-60cd1d9edcf76bd57ef8988422573553f0e89817a2ba0309b734032a6db16b39.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0153100b_0ea2_47e5_a068_737a13cff807.slice/crio-conmon-25a5235b77becca96b23f33742d51f5a827a35860ed01ae1c608e3448d06a462.scope\": RecentStats: unable to find data in memory cache]" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.171737 4991 generic.go:334] "Generic (PLEG): container finished" podID="894baba8-a990-4689-99b1-3d690c50b974" containerID="697b7e3ee917657d7635e5d96733748144d1e13be56b531454c369586aa0dd0e" exitCode=0 Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.171779 4991 generic.go:334] "Generic (PLEG): container finished" podID="894baba8-a990-4689-99b1-3d690c50b974" containerID="9e465b91d5dec1651d06fa8995f79e69ea3fbc9074d44c03acad2249069b7649" exitCode=143 Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.171837 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"894baba8-a990-4689-99b1-3d690c50b974","Type":"ContainerDied","Data":"697b7e3ee917657d7635e5d96733748144d1e13be56b531454c369586aa0dd0e"} Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.171889 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"894baba8-a990-4689-99b1-3d690c50b974","Type":"ContainerDied","Data":"9e465b91d5dec1651d06fa8995f79e69ea3fbc9074d44c03acad2249069b7649"} Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.179037 4991 generic.go:334] "Generic (PLEG): container finished" podID="0153100b-0ea2-47e5-a068-737a13cff807" containerID="25a5235b77becca96b23f33742d51f5a827a35860ed01ae1c608e3448d06a462" exitCode=143 Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.179114 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0153100b-0ea2-47e5-a068-737a13cff807","Type":"ContainerDied","Data":"25a5235b77becca96b23f33742d51f5a827a35860ed01ae1c608e3448d06a462"} Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.184112 4991 generic.go:334] "Generic (PLEG): container finished" podID="9f93a627-850e-4991-aa3f-82372989186d" containerID="8bb4752652670b2b4f9c97254ce57df632c8be9ce3fe3f02a19157684cb98f23" exitCode=0 Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.184187 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8q4rd" event={"ID":"9f93a627-850e-4991-aa3f-82372989186d","Type":"ContainerDied","Data":"8bb4752652670b2b4f9c97254ce57df632c8be9ce3fe3f02a19157684cb98f23"} Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.209355 4991 generic.go:334] "Generic (PLEG): container finished" podID="2fe1cd08-f374-4e04-b3d0-1135997d4b8f" containerID="60cd1d9edcf76bd57ef8988422573553f0e89817a2ba0309b734032a6db16b39" exitCode=0 Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.209417 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fe1cd08-f374-4e04-b3d0-1135997d4b8f","Type":"ContainerDied","Data":"60cd1d9edcf76bd57ef8988422573553f0e89817a2ba0309b734032a6db16b39"} Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.284929 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.434400 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/894baba8-a990-4689-99b1-3d690c50b974-combined-ca-bundle\") pod \"894baba8-a990-4689-99b1-3d690c50b974\" (UID: \"894baba8-a990-4689-99b1-3d690c50b974\") " Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.434527 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/894baba8-a990-4689-99b1-3d690c50b974-nova-metadata-tls-certs\") pod \"894baba8-a990-4689-99b1-3d690c50b974\" (UID: \"894baba8-a990-4689-99b1-3d690c50b974\") " Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.434615 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/894baba8-a990-4689-99b1-3d690c50b974-logs\") pod \"894baba8-a990-4689-99b1-3d690c50b974\" (UID: \"894baba8-a990-4689-99b1-3d690c50b974\") " Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.434718 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzfqq\" (UniqueName: \"kubernetes.io/projected/894baba8-a990-4689-99b1-3d690c50b974-kube-api-access-pzfqq\") pod \"894baba8-a990-4689-99b1-3d690c50b974\" (UID: \"894baba8-a990-4689-99b1-3d690c50b974\") " Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.434797 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/894baba8-a990-4689-99b1-3d690c50b974-config-data\") pod \"894baba8-a990-4689-99b1-3d690c50b974\" (UID: \"894baba8-a990-4689-99b1-3d690c50b974\") " Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.437510 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/894baba8-a990-4689-99b1-3d690c50b974-logs" (OuterVolumeSpecName: "logs") pod "894baba8-a990-4689-99b1-3d690c50b974" (UID: "894baba8-a990-4689-99b1-3d690c50b974"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.445332 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/894baba8-a990-4689-99b1-3d690c50b974-kube-api-access-pzfqq" (OuterVolumeSpecName: "kube-api-access-pzfqq") pod "894baba8-a990-4689-99b1-3d690c50b974" (UID: "894baba8-a990-4689-99b1-3d690c50b974"). InnerVolumeSpecName "kube-api-access-pzfqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.483905 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/894baba8-a990-4689-99b1-3d690c50b974-config-data" (OuterVolumeSpecName: "config-data") pod "894baba8-a990-4689-99b1-3d690c50b974" (UID: "894baba8-a990-4689-99b1-3d690c50b974"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.485180 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/894baba8-a990-4689-99b1-3d690c50b974-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "894baba8-a990-4689-99b1-3d690c50b974" (UID: "894baba8-a990-4689-99b1-3d690c50b974"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.537895 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/894baba8-a990-4689-99b1-3d690c50b974-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.537935 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/894baba8-a990-4689-99b1-3d690c50b974-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.537971 4991 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/894baba8-a990-4689-99b1-3d690c50b974-logs\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.537984 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzfqq\" (UniqueName: \"kubernetes.io/projected/894baba8-a990-4689-99b1-3d690c50b974-kube-api-access-pzfqq\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.540982 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/894baba8-a990-4689-99b1-3d690c50b974-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "894baba8-a990-4689-99b1-3d690c50b974" (UID: "894baba8-a990-4689-99b1-3d690c50b974"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.639634 4991 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/894baba8-a990-4689-99b1-3d690c50b974-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.688531 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.846749 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-combined-ca-bundle\") pod \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.846860 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lr2l\" (UniqueName: \"kubernetes.io/projected/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-kube-api-access-7lr2l\") pod \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.846894 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-scripts\") pod \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.846926 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-config-data\") pod \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.846976 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-log-httpd\") pod \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.847193 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-run-httpd\") pod \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.847242 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-sg-core-conf-yaml\") pod \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\" (UID: \"2fe1cd08-f374-4e04-b3d0-1135997d4b8f\") " Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.851488 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2fe1cd08-f374-4e04-b3d0-1135997d4b8f" (UID: "2fe1cd08-f374-4e04-b3d0-1135997d4b8f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.852300 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2fe1cd08-f374-4e04-b3d0-1135997d4b8f" (UID: "2fe1cd08-f374-4e04-b3d0-1135997d4b8f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.855583 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-scripts" (OuterVolumeSpecName: "scripts") pod "2fe1cd08-f374-4e04-b3d0-1135997d4b8f" (UID: "2fe1cd08-f374-4e04-b3d0-1135997d4b8f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.857414 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-kube-api-access-7lr2l" (OuterVolumeSpecName: "kube-api-access-7lr2l") pod "2fe1cd08-f374-4e04-b3d0-1135997d4b8f" (UID: "2fe1cd08-f374-4e04-b3d0-1135997d4b8f"). InnerVolumeSpecName "kube-api-access-7lr2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.902407 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2fe1cd08-f374-4e04-b3d0-1135997d4b8f" (UID: "2fe1cd08-f374-4e04-b3d0-1135997d4b8f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.952153 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lr2l\" (UniqueName: \"kubernetes.io/projected/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-kube-api-access-7lr2l\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.952181 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.952192 4991 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-log-httpd\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.952201 4991 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-run-httpd\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.952208 4991 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:50 crc kubenswrapper[4991]: I0929 10:01:50.952840 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fe1cd08-f374-4e04-b3d0-1135997d4b8f" (UID: "2fe1cd08-f374-4e04-b3d0-1135997d4b8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.015468 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-config-data" (OuterVolumeSpecName: "config-data") pod "2fe1cd08-f374-4e04-b3d0-1135997d4b8f" (UID: "2fe1cd08-f374-4e04-b3d0-1135997d4b8f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.054508 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.054538 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe1cd08-f374-4e04-b3d0-1135997d4b8f-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.224151 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fe1cd08-f374-4e04-b3d0-1135997d4b8f","Type":"ContainerDied","Data":"82f2ae3bc2c95eb8bb956ea4f9ff26ef084a8f802ea375acaf035aab31a8fc6e"} Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.224439 4991 scope.go:117] "RemoveContainer" containerID="da27c877354493c6fb6deb9bc693d4dcfb3fbf37a95d0f4f845cec07bcb6be80" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.224583 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.236481 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"894baba8-a990-4689-99b1-3d690c50b974","Type":"ContainerDied","Data":"cd5405477101b5780ffab9e0e2c0eb5631069ca5560ddc52e9f5b8460e994065"} Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.236592 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.245000 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ae1f4235-0ad5-4d8c-b84e-ebcbea326284","Type":"ContainerStarted","Data":"7ea8e47d058619473798c907e805b0a2ab96ae5d85f5fae755db18b636ff0f00"} Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.245268 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="ae1f4235-0ad5-4d8c-b84e-ebcbea326284" containerName="aodh-notifier" containerID="cri-o://50e270cd455603b317af0e3357c24d0a0d996e9892604b84a21b10df081c6011" gracePeriod=30 Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.245296 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="ae1f4235-0ad5-4d8c-b84e-ebcbea326284" containerName="aodh-listener" containerID="cri-o://7ea8e47d058619473798c907e805b0a2ab96ae5d85f5fae755db18b636ff0f00" gracePeriod=30 Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.245294 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="ae1f4235-0ad5-4d8c-b84e-ebcbea326284" containerName="aodh-api" containerID="cri-o://dd81b1665a1d83895a217256b023e2464a027cd105b7977ad9336b0d21718920" gracePeriod=30 Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.245275 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="ae1f4235-0ad5-4d8c-b84e-ebcbea326284" containerName="aodh-evaluator" containerID="cri-o://9ec98a89ffd1cb8ca3baecf7840870d164ec0e67f37ca94396e47c85c0028ddf" gracePeriod=30 Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.275936 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 
10:01:51.302363 4991 scope.go:117] "RemoveContainer" containerID="ff57d18c828d0c5bc8679efc2c6b494eebaad0e41d9336593017ce51e349f37c" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.302537 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.346651 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.369318 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.380512 4991 scope.go:117] "RemoveContainer" containerID="60cd1d9edcf76bd57ef8988422573553f0e89817a2ba0309b734032a6db16b39" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.384069 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Sep 29 10:01:51 crc kubenswrapper[4991]: E0929 10:01:51.384650 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe1cd08-f374-4e04-b3d0-1135997d4b8f" containerName="sg-core" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.384677 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe1cd08-f374-4e04-b3d0-1135997d4b8f" containerName="sg-core" Sep 29 10:01:51 crc kubenswrapper[4991]: E0929 10:01:51.384691 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="894baba8-a990-4689-99b1-3d690c50b974" containerName="nova-metadata-log" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.384700 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="894baba8-a990-4689-99b1-3d690c50b974" containerName="nova-metadata-log" Sep 29 10:01:51 crc kubenswrapper[4991]: E0929 10:01:51.384728 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe1cd08-f374-4e04-b3d0-1135997d4b8f" containerName="ceilometer-central-agent" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.384735 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe1cd08-f374-4e04-b3d0-1135997d4b8f" containerName="ceilometer-central-agent" Sep 29 10:01:51 crc kubenswrapper[4991]: E0929 10:01:51.384754 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe1cd08-f374-4e04-b3d0-1135997d4b8f" containerName="ceilometer-notification-agent" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.384763 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe1cd08-f374-4e04-b3d0-1135997d4b8f" containerName="ceilometer-notification-agent" Sep 29 10:01:51 crc kubenswrapper[4991]: E0929 10:01:51.384774 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25df2bc4-583e-4075-9ed2-49128b9b8d2f" containerName="nova-manage" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.384782 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="25df2bc4-583e-4075-9ed2-49128b9b8d2f" containerName="nova-manage" Sep 29 10:01:51 crc kubenswrapper[4991]: E0929 10:01:51.384813 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="894baba8-a990-4689-99b1-3d690c50b974" containerName="nova-metadata-metadata" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.384823 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="894baba8-a990-4689-99b1-3d690c50b974" containerName="nova-metadata-metadata" Sep 29 10:01:51 crc kubenswrapper[4991]: E0929 10:01:51.384836 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe1cd08-f374-4e04-b3d0-1135997d4b8f" containerName="proxy-httpd" Sep 29 10:01:51 crc 
kubenswrapper[4991]: I0929 10:01:51.384844 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe1cd08-f374-4e04-b3d0-1135997d4b8f" containerName="proxy-httpd" Sep 29 10:01:51 crc kubenswrapper[4991]: E0929 10:01:51.384878 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650279b3-4add-4845-b440-bde55c40efb7" containerName="dnsmasq-dns" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.384887 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="650279b3-4add-4845-b440-bde55c40efb7" containerName="dnsmasq-dns" Sep 29 10:01:51 crc kubenswrapper[4991]: E0929 10:01:51.384906 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650279b3-4add-4845-b440-bde55c40efb7" containerName="init" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.384913 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="650279b3-4add-4845-b440-bde55c40efb7" containerName="init" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.385198 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe1cd08-f374-4e04-b3d0-1135997d4b8f" containerName="ceilometer-central-agent" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.385216 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="894baba8-a990-4689-99b1-3d690c50b974" containerName="nova-metadata-log" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.385229 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe1cd08-f374-4e04-b3d0-1135997d4b8f" containerName="proxy-httpd" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.385246 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="894baba8-a990-4689-99b1-3d690c50b974" containerName="nova-metadata-metadata" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.385267 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe1cd08-f374-4e04-b3d0-1135997d4b8f" containerName="sg-core" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.385277 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe1cd08-f374-4e04-b3d0-1135997d4b8f" containerName="ceilometer-notification-agent" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.385294 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="25df2bc4-583e-4075-9ed2-49128b9b8d2f" containerName="nova-manage" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.385311 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="650279b3-4add-4845-b440-bde55c40efb7" containerName="dnsmasq-dns" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.386880 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.393400 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.393628 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.406817 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.428772 4991 scope.go:117] "RemoveContainer" containerID="fe17d62e3fd04d57d31901b12f0f1dec5273df5afd3e8bda91531bd16b93c31d" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.433916 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.437662 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.440259 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.440320 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.446020 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.494534978 podStartE2EDuration="9.446001327s" podCreationTimestamp="2025-09-29 10:01:42 +0000 UTC" firstStartedPulling="2025-09-29 10:01:43.792029736 +0000 UTC m=+1439.647957764" lastFinishedPulling="2025-09-29 10:01:50.743496085 +0000 UTC m=+1446.599424113" observedRunningTime="2025-09-29 10:01:51.315306134 +0000 UTC m=+1447.171234172" watchObservedRunningTime="2025-09-29 10:01:51.446001327 +0000 UTC m=+1447.301929355" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.464082 4991 scope.go:117] "RemoveContainer" containerID="697b7e3ee917657d7635e5d96733748144d1e13be56b531454c369586aa0dd0e" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.465541 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddae0d54-721f-4bb5-9229-bb912c75d735-config-data\") pod \"nova-metadata-0\" (UID: \"ddae0d54-721f-4bb5-9229-bb912c75d735\") " pod="openstack/nova-metadata-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.465662 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddae0d54-721f-4bb5-9229-bb912c75d735-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ddae0d54-721f-4bb5-9229-bb912c75d735\") " pod="openstack/nova-metadata-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.465772 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9f4h\" (UniqueName: \"kubernetes.io/projected/ddae0d54-721f-4bb5-9229-bb912c75d735-kube-api-access-h9f4h\") pod \"nova-metadata-0\" (UID: \"ddae0d54-721f-4bb5-9229-bb912c75d735\") " pod="openstack/nova-metadata-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.465810 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ddae0d54-721f-4bb5-9229-bb912c75d735-logs\") pod \"nova-metadata-0\" (UID: \"ddae0d54-721f-4bb5-9229-bb912c75d735\") " pod="openstack/nova-metadata-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.465835 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddae0d54-721f-4bb5-9229-bb912c75d735-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ddae0d54-721f-4bb5-9229-bb912c75d735\") " pod="openstack/nova-metadata-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.471002 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.525254 4991 scope.go:117] "RemoveContainer" containerID="9e465b91d5dec1651d06fa8995f79e69ea3fbc9074d44c03acad2249069b7649" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.568553 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22c4e277-a117-4217-8f8b-63ef72a8ae42-log-httpd\") pod \"ceilometer-0\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.568659 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddae0d54-721f-4bb5-9229-bb912c75d735-config-data\") pod \"nova-metadata-0\" (UID: \"ddae0d54-721f-4bb5-9229-bb912c75d735\") " pod="openstack/nova-metadata-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.568702 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22c4e277-a117-4217-8f8b-63ef72a8ae42-run-httpd\") pod \"ceilometer-0\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.568790 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-scripts\") pod \"ceilometer-0\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.568824 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc9bq\" (UniqueName: \"kubernetes.io/projected/22c4e277-a117-4217-8f8b-63ef72a8ae42-kube-api-access-rc9bq\") pod \"ceilometer-0\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.568913 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddae0d54-721f-4bb5-9229-bb912c75d735-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ddae0d54-721f-4bb5-9229-bb912c75d735\") " pod="openstack/nova-metadata-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.568974 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.569171 4991 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9f4h\" (UniqueName: \"kubernetes.io/projected/ddae0d54-721f-4bb5-9229-bb912c75d735-kube-api-access-h9f4h\") pod \"nova-metadata-0\" (UID: \"ddae0d54-721f-4bb5-9229-bb912c75d735\") " pod="openstack/nova-metadata-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.569219 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ddae0d54-721f-4bb5-9229-bb912c75d735-logs\") pod \"nova-metadata-0\" (UID: \"ddae0d54-721f-4bb5-9229-bb912c75d735\") " pod="openstack/nova-metadata-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.569248 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddae0d54-721f-4bb5-9229-bb912c75d735-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ddae0d54-721f-4bb5-9229-bb912c75d735\") " pod="openstack/nova-metadata-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.569271 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-config-data\") pod \"ceilometer-0\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.569306 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.570898 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ddae0d54-721f-4bb5-9229-bb912c75d735-logs\") pod \"nova-metadata-0\" (UID: \"ddae0d54-721f-4bb5-9229-bb912c75d735\") " pod="openstack/nova-metadata-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.581051 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddae0d54-721f-4bb5-9229-bb912c75d735-config-data\") pod \"nova-metadata-0\" (UID: \"ddae0d54-721f-4bb5-9229-bb912c75d735\") " pod="openstack/nova-metadata-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.581601 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddae0d54-721f-4bb5-9229-bb912c75d735-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ddae0d54-721f-4bb5-9229-bb912c75d735\") " pod="openstack/nova-metadata-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.590563 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9f4h\" (UniqueName: \"kubernetes.io/projected/ddae0d54-721f-4bb5-9229-bb912c75d735-kube-api-access-h9f4h\") pod \"nova-metadata-0\" (UID: \"ddae0d54-721f-4bb5-9229-bb912c75d735\") " pod="openstack/nova-metadata-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.592348 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddae0d54-721f-4bb5-9229-bb912c75d735-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ddae0d54-721f-4bb5-9229-bb912c75d735\") " 
pod="openstack/nova-metadata-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.672349 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22c4e277-a117-4217-8f8b-63ef72a8ae42-run-httpd\") pod \"ceilometer-0\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.681085 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-scripts\") pod \"ceilometer-0\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.681349 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc9bq\" (UniqueName: \"kubernetes.io/projected/22c4e277-a117-4217-8f8b-63ef72a8ae42-kube-api-access-rc9bq\") pod \"ceilometer-0\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.681648 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.682121 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-config-data\") pod \"ceilometer-0\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.682244 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.682455 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22c4e277-a117-4217-8f8b-63ef72a8ae42-log-httpd\") pod \"ceilometer-0\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.673830 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22c4e277-a117-4217-8f8b-63ef72a8ae42-run-httpd\") pod \"ceilometer-0\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.696600 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22c4e277-a117-4217-8f8b-63ef72a8ae42-log-httpd\") pod \"ceilometer-0\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.698379 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-scripts\") pod \"ceilometer-0\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " pod="openstack/ceilometer-0" Sep 29 10:01:51 crc 
kubenswrapper[4991]: I0929 10:01:51.698895 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-config-data\") pod \"ceilometer-0\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.703072 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.719742 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.723757 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.730895 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc9bq\" (UniqueName: \"kubernetes.io/projected/22c4e277-a117-4217-8f8b-63ef72a8ae42-kube-api-access-rc9bq\") pod \"ceilometer-0\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " pod="openstack/ceilometer-0" Sep 29 10:01:51 crc kubenswrapper[4991]: I0929 10:01:51.796633 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.131715 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8q4rd" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.241516 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f93a627-850e-4991-aa3f-82372989186d-config-data\") pod \"9f93a627-850e-4991-aa3f-82372989186d\" (UID: \"9f93a627-850e-4991-aa3f-82372989186d\") " Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.241562 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z4r4\" (UniqueName: \"kubernetes.io/projected/9f93a627-850e-4991-aa3f-82372989186d-kube-api-access-7z4r4\") pod \"9f93a627-850e-4991-aa3f-82372989186d\" (UID: \"9f93a627-850e-4991-aa3f-82372989186d\") " Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.241802 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f93a627-850e-4991-aa3f-82372989186d-combined-ca-bundle\") pod \"9f93a627-850e-4991-aa3f-82372989186d\" (UID: \"9f93a627-850e-4991-aa3f-82372989186d\") " Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.241938 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f93a627-850e-4991-aa3f-82372989186d-scripts\") pod \"9f93a627-850e-4991-aa3f-82372989186d\" (UID: \"9f93a627-850e-4991-aa3f-82372989186d\") " Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.276576 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f93a627-850e-4991-aa3f-82372989186d-scripts" (OuterVolumeSpecName: "scripts") pod "9f93a627-850e-4991-aa3f-82372989186d" (UID: "9f93a627-850e-4991-aa3f-82372989186d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.278800 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f93a627-850e-4991-aa3f-82372989186d-kube-api-access-7z4r4" (OuterVolumeSpecName: "kube-api-access-7z4r4") pod "9f93a627-850e-4991-aa3f-82372989186d" (UID: "9f93a627-850e-4991-aa3f-82372989186d"). InnerVolumeSpecName "kube-api-access-7z4r4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.316476 4991 generic.go:334] "Generic (PLEG): container finished" podID="ae1f4235-0ad5-4d8c-b84e-ebcbea326284" containerID="9ec98a89ffd1cb8ca3baecf7840870d164ec0e67f37ca94396e47c85c0028ddf" exitCode=0 Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.316522 4991 generic.go:334] "Generic (PLEG): container finished" podID="ae1f4235-0ad5-4d8c-b84e-ebcbea326284" containerID="dd81b1665a1d83895a217256b023e2464a027cd105b7977ad9336b0d21718920" exitCode=0 Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.316574 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ae1f4235-0ad5-4d8c-b84e-ebcbea326284","Type":"ContainerDied","Data":"9ec98a89ffd1cb8ca3baecf7840870d164ec0e67f37ca94396e47c85c0028ddf"} Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.316600 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ae1f4235-0ad5-4d8c-b84e-ebcbea326284","Type":"ContainerDied","Data":"dd81b1665a1d83895a217256b023e2464a027cd105b7977ad9336b0d21718920"} Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.325989 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f93a627-850e-4991-aa3f-82372989186d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f93a627-850e-4991-aa3f-82372989186d" (UID: "9f93a627-850e-4991-aa3f-82372989186d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.347179 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f93a627-850e-4991-aa3f-82372989186d-config-data" (OuterVolumeSpecName: "config-data") pod "9f93a627-850e-4991-aa3f-82372989186d" (UID: "9f93a627-850e-4991-aa3f-82372989186d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.349120 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f93a627-850e-4991-aa3f-82372989186d-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.349155 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z4r4\" (UniqueName: \"kubernetes.io/projected/9f93a627-850e-4991-aa3f-82372989186d-kube-api-access-7z4r4\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.349189 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f93a627-850e-4991-aa3f-82372989186d-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.349201 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f93a627-850e-4991-aa3f-82372989186d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.355589 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8q4rd" event={"ID":"9f93a627-850e-4991-aa3f-82372989186d","Type":"ContainerDied","Data":"a97c59be76a7f218296806c6581e6efd3b9a54d4b51473455023cd6a6f90a0a9"} Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.355633 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a97c59be76a7f218296806c6581e6efd3b9a54d4b51473455023cd6a6f90a0a9" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.355727 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8q4rd" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.369431 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Sep 29 10:01:52 crc kubenswrapper[4991]: E0929 10:01:52.370206 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f93a627-850e-4991-aa3f-82372989186d" containerName="nova-cell1-conductor-db-sync" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.370235 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f93a627-850e-4991-aa3f-82372989186d" containerName="nova-cell1-conductor-db-sync" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.370553 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f93a627-850e-4991-aa3f-82372989186d" containerName="nova-cell1-conductor-db-sync" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.372107 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.378402 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.381290 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.451613 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7v2f\" (UniqueName: \"kubernetes.io/projected/83ac6e99-3bdd-44a3-8012-20dc4d70740e-kube-api-access-w7v2f\") pod \"nova-cell1-conductor-0\" (UID: \"83ac6e99-3bdd-44a3-8012-20dc4d70740e\") " pod="openstack/nova-cell1-conductor-0" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.451673 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ac6e99-3bdd-44a3-8012-20dc4d70740e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"83ac6e99-3bdd-44a3-8012-20dc4d70740e\") " pod="openstack/nova-cell1-conductor-0" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.451887 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ac6e99-3bdd-44a3-8012-20dc4d70740e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"83ac6e99-3bdd-44a3-8012-20dc4d70740e\") " pod="openstack/nova-cell1-conductor-0" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.553510 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ac6e99-3bdd-44a3-8012-20dc4d70740e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"83ac6e99-3bdd-44a3-8012-20dc4d70740e\") " pod="openstack/nova-cell1-conductor-0" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.553838 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7v2f\" (UniqueName: \"kubernetes.io/projected/83ac6e99-3bdd-44a3-8012-20dc4d70740e-kube-api-access-w7v2f\") pod \"nova-cell1-conductor-0\" (UID: \"83ac6e99-3bdd-44a3-8012-20dc4d70740e\") " pod="openstack/nova-cell1-conductor-0" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.553870 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ac6e99-3bdd-44a3-8012-20dc4d70740e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"83ac6e99-3bdd-44a3-8012-20dc4d70740e\") " pod="openstack/nova-cell1-conductor-0" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.560963 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ac6e99-3bdd-44a3-8012-20dc4d70740e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"83ac6e99-3bdd-44a3-8012-20dc4d70740e\") " pod="openstack/nova-cell1-conductor-0" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.567777 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ac6e99-3bdd-44a3-8012-20dc4d70740e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"83ac6e99-3bdd-44a3-8012-20dc4d70740e\") " pod="openstack/nova-cell1-conductor-0" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.573475 4991 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7v2f\" (UniqueName: \"kubernetes.io/projected/83ac6e99-3bdd-44a3-8012-20dc4d70740e-kube-api-access-w7v2f\") pod \"nova-cell1-conductor-0\" (UID: \"83ac6e99-3bdd-44a3-8012-20dc4d70740e\") " pod="openstack/nova-cell1-conductor-0" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.590218 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.708062 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.816031 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:01:52 crc kubenswrapper[4991]: W0929 10:01:52.843619 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22c4e277_a117_4217_8f8b_63ef72a8ae42.slice/crio-2f61b709c9b437898c250b7248dabb96ecb5194e41dcea4ea435e5167d772971 WatchSource:0}: Error finding container 2f61b709c9b437898c250b7248dabb96ecb5194e41dcea4ea435e5167d772971: Status 404 returned error can't find the container with id 2f61b709c9b437898c250b7248dabb96ecb5194e41dcea4ea435e5167d772971 Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.955990 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fe1cd08-f374-4e04-b3d0-1135997d4b8f" path="/var/lib/kubelet/pods/2fe1cd08-f374-4e04-b3d0-1135997d4b8f/volumes" Sep 29 10:01:52 crc kubenswrapper[4991]: I0929 10:01:52.957006 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="894baba8-a990-4689-99b1-3d690c50b974" path="/var/lib/kubelet/pods/894baba8-a990-4689-99b1-3d690c50b974/volumes" Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.288197 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.382688 4991 generic.go:334] "Generic (PLEG): container finished" podID="1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4" containerID="6619ff28d4494d87e92597ba8a24e71d4b33db744d1f7a13f571758dce0e9d4c" exitCode=0 Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.382781 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4","Type":"ContainerDied","Data":"6619ff28d4494d87e92597ba8a24e71d4b33db744d1f7a13f571758dce0e9d4c"} Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.382816 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4","Type":"ContainerDied","Data":"ec63e14cafa0916924d673f410b79dc0809726867dabbae96dc9bcf388aa3dc2"} Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.382854 4991 scope.go:117] "RemoveContainer" containerID="6619ff28d4494d87e92597ba8a24e71d4b33db744d1f7a13f571758dce0e9d4c" Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.383034 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.393313 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ddae0d54-721f-4bb5-9229-bb912c75d735","Type":"ContainerStarted","Data":"c04646eb6e874e93faae8ff311310f0ec845f589e77c3fd590631264070c2c75"} Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.393357 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ddae0d54-721f-4bb5-9229-bb912c75d735","Type":"ContainerStarted","Data":"f41e5a73834ac7701790c0a880207a52e33c80f37837bf604f7171b0ee2cd0e9"} Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.395860 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22c4e277-a117-4217-8f8b-63ef72a8ae42","Type":"ContainerStarted","Data":"2f61b709c9b437898c250b7248dabb96ecb5194e41dcea4ea435e5167d772971"} Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.399677 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4-combined-ca-bundle\") pod \"1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4\" (UID: \"1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4\") " Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.399818 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4-config-data\") pod \"1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4\" (UID: \"1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4\") " Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.400236 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wdks\" (UniqueName: \"kubernetes.io/projected/1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4-kube-api-access-5wdks\") pod \"1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4\" (UID: \"1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4\") " Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.403818 4991 generic.go:334] "Generic (PLEG): container finished" podID="0153100b-0ea2-47e5-a068-737a13cff807" containerID="4ba5f6a2709c35db22f25f02c77f300ad42e71bb86d725d4f3eecd4e5690ad0b" exitCode=0 Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.403898 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0153100b-0ea2-47e5-a068-737a13cff807","Type":"ContainerDied","Data":"4ba5f6a2709c35db22f25f02c77f300ad42e71bb86d725d4f3eecd4e5690ad0b"} Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.405407 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4-kube-api-access-5wdks" (OuterVolumeSpecName: "kube-api-access-5wdks") pod "1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4" (UID: "1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4"). InnerVolumeSpecName "kube-api-access-5wdks". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.439765 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4" (UID: "1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.451989 4991 scope.go:117] "RemoveContainer" containerID="6619ff28d4494d87e92597ba8a24e71d4b33db744d1f7a13f571758dce0e9d4c" Sep 29 10:01:53 crc kubenswrapper[4991]: E0929 10:01:53.454047 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6619ff28d4494d87e92597ba8a24e71d4b33db744d1f7a13f571758dce0e9d4c\": container with ID starting with 6619ff28d4494d87e92597ba8a24e71d4b33db744d1f7a13f571758dce0e9d4c not found: ID does not exist" containerID="6619ff28d4494d87e92597ba8a24e71d4b33db744d1f7a13f571758dce0e9d4c" Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.454089 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6619ff28d4494d87e92597ba8a24e71d4b33db744d1f7a13f571758dce0e9d4c"} err="failed to get container status \"6619ff28d4494d87e92597ba8a24e71d4b33db744d1f7a13f571758dce0e9d4c\": rpc error: code = NotFound desc = could not find container \"6619ff28d4494d87e92597ba8a24e71d4b33db744d1f7a13f571758dce0e9d4c\": container with ID starting with 6619ff28d4494d87e92597ba8a24e71d4b33db744d1f7a13f571758dce0e9d4c not found: ID does not exist" Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.473871 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4-config-data" (OuterVolumeSpecName: "config-data") pod "1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4" (UID: "1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.503642 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wdks\" (UniqueName: \"kubernetes.io/projected/1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4-kube-api-access-5wdks\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.505329 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:53 crc kubenswrapper[4991]: I0929 10:01:53.505504 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.595305 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.708912 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0153100b-0ea2-47e5-a068-737a13cff807-combined-ca-bundle\") pod \"0153100b-0ea2-47e5-a068-737a13cff807\" (UID: \"0153100b-0ea2-47e5-a068-737a13cff807\") " Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.709136 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0153100b-0ea2-47e5-a068-737a13cff807-config-data\") pod \"0153100b-0ea2-47e5-a068-737a13cff807\" (UID: \"0153100b-0ea2-47e5-a068-737a13cff807\") " Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.709722 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0153100b-0ea2-47e5-a068-737a13cff807-logs\") pod \"0153100b-0ea2-47e5-a068-737a13cff807\" (UID: \"0153100b-0ea2-47e5-a068-737a13cff807\") " Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.709762 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vdp7\" (UniqueName: \"kubernetes.io/projected/0153100b-0ea2-47e5-a068-737a13cff807-kube-api-access-4vdp7\") pod \"0153100b-0ea2-47e5-a068-737a13cff807\" (UID: \"0153100b-0ea2-47e5-a068-737a13cff807\") " Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.712377 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0153100b-0ea2-47e5-a068-737a13cff807-logs" (OuterVolumeSpecName: "logs") pod "0153100b-0ea2-47e5-a068-737a13cff807" (UID: "0153100b-0ea2-47e5-a068-737a13cff807"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.738201 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0153100b-0ea2-47e5-a068-737a13cff807-kube-api-access-4vdp7" (OuterVolumeSpecName: "kube-api-access-4vdp7") pod "0153100b-0ea2-47e5-a068-737a13cff807" (UID: "0153100b-0ea2-47e5-a068-737a13cff807"). InnerVolumeSpecName "kube-api-access-4vdp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.756914 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0153100b-0ea2-47e5-a068-737a13cff807-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0153100b-0ea2-47e5-a068-737a13cff807" (UID: "0153100b-0ea2-47e5-a068-737a13cff807"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.779338 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0153100b-0ea2-47e5-a068-737a13cff807-config-data" (OuterVolumeSpecName: "config-data") pod "0153100b-0ea2-47e5-a068-737a13cff807" (UID: "0153100b-0ea2-47e5-a068-737a13cff807"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.791377 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.808119 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.813626 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0153100b-0ea2-47e5-a068-737a13cff807-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.813650 4991 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0153100b-0ea2-47e5-a068-737a13cff807-logs\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.813659 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vdp7\" (UniqueName: \"kubernetes.io/projected/0153100b-0ea2-47e5-a068-737a13cff807-kube-api-access-4vdp7\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.813669 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0153100b-0ea2-47e5-a068-737a13cff807-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.825704 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Sep 29 10:01:54 crc kubenswrapper[4991]: E0929 10:01:53.826627 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4" containerName="nova-scheduler-scheduler" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.826642 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4" containerName="nova-scheduler-scheduler" Sep 29 10:01:54 crc kubenswrapper[4991]: E0929 10:01:53.826685 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0153100b-0ea2-47e5-a068-737a13cff807" containerName="nova-api-log" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.826692 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="0153100b-0ea2-47e5-a068-737a13cff807" containerName="nova-api-log" Sep 29 10:01:54 crc kubenswrapper[4991]: E0929 10:01:53.826709 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0153100b-0ea2-47e5-a068-737a13cff807" containerName="nova-api-api" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.826714 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="0153100b-0ea2-47e5-a068-737a13cff807" containerName="nova-api-api" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.826960 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="0153100b-0ea2-47e5-a068-737a13cff807" containerName="nova-api-api" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.826973 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="0153100b-0ea2-47e5-a068-737a13cff807" containerName="nova-api-log" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.826995 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4" containerName="nova-scheduler-scheduler" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.827766 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.829671 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.847487 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.861469 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.915680 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dfc8022-4980-4498-b5dd-26c32bf05ea1-config-data\") pod \"nova-scheduler-0\" (UID: \"1dfc8022-4980-4498-b5dd-26c32bf05ea1\") " pod="openstack/nova-scheduler-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.916078 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg5gg\" (UniqueName: \"kubernetes.io/projected/1dfc8022-4980-4498-b5dd-26c32bf05ea1-kube-api-access-gg5gg\") pod \"nova-scheduler-0\" (UID: \"1dfc8022-4980-4498-b5dd-26c32bf05ea1\") " pod="openstack/nova-scheduler-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:53.916118 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfc8022-4980-4498-b5dd-26c32bf05ea1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1dfc8022-4980-4498-b5dd-26c32bf05ea1\") " pod="openstack/nova-scheduler-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.019683 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dfc8022-4980-4498-b5dd-26c32bf05ea1-config-data\") pod \"nova-scheduler-0\" (UID: \"1dfc8022-4980-4498-b5dd-26c32bf05ea1\") " pod="openstack/nova-scheduler-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.019792 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg5gg\" (UniqueName: \"kubernetes.io/projected/1dfc8022-4980-4498-b5dd-26c32bf05ea1-kube-api-access-gg5gg\") pod \"nova-scheduler-0\" (UID: \"1dfc8022-4980-4498-b5dd-26c32bf05ea1\") " pod="openstack/nova-scheduler-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.019850 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfc8022-4980-4498-b5dd-26c32bf05ea1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1dfc8022-4980-4498-b5dd-26c32bf05ea1\") " pod="openstack/nova-scheduler-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.024574 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dfc8022-4980-4498-b5dd-26c32bf05ea1-config-data\") pod \"nova-scheduler-0\" (UID: \"1dfc8022-4980-4498-b5dd-26c32bf05ea1\") " pod="openstack/nova-scheduler-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.028622 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfc8022-4980-4498-b5dd-26c32bf05ea1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1dfc8022-4980-4498-b5dd-26c32bf05ea1\") " pod="openstack/nova-scheduler-0" Sep 29 10:01:54 crc 
kubenswrapper[4991]: I0929 10:01:54.046876 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg5gg\" (UniqueName: \"kubernetes.io/projected/1dfc8022-4980-4498-b5dd-26c32bf05ea1-kube-api-access-gg5gg\") pod \"nova-scheduler-0\" (UID: \"1dfc8022-4980-4498-b5dd-26c32bf05ea1\") " pod="openstack/nova-scheduler-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.300623 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.457250 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22c4e277-a117-4217-8f8b-63ef72a8ae42","Type":"ContainerStarted","Data":"a4c58eac7e0941518332e1b0df2fa8001df714dcb019be248e23e3eaf7e1c6aa"} Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.458252 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22c4e277-a117-4217-8f8b-63ef72a8ae42","Type":"ContainerStarted","Data":"b8d2cf4bfb7604ac5298853b3d7e3a70d3acba6d9d5d68291d83b80cc718ea45"} Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.469754 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0153100b-0ea2-47e5-a068-737a13cff807","Type":"ContainerDied","Data":"2e9046474c441c3790df0271ab48df25d0896c0b864f9d0e53c3b97f0f7945e0"} Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.469812 4991 scope.go:117] "RemoveContainer" containerID="4ba5f6a2709c35db22f25f02c77f300ad42e71bb86d725d4f3eecd4e5690ad0b" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.469934 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.487103 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"83ac6e99-3bdd-44a3-8012-20dc4d70740e","Type":"ContainerStarted","Data":"afd0d64a652d2c31f443b4fac74c37c8813ce35ed674c721598618d81baab47d"} Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.487147 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"83ac6e99-3bdd-44a3-8012-20dc4d70740e","Type":"ContainerStarted","Data":"842f5ccdd062e16c9239ec71930c8662a7d445ccfb15f73b43b520cdc3f28ff8"} Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.488124 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.501587 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ddae0d54-721f-4bb5-9229-bb912c75d735","Type":"ContainerStarted","Data":"a3ecec40ec0684b7d18b0d6e1e1a6f271abe9a95e9ef150a4ff89260e74689ff"} Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.525713 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.525690734 podStartE2EDuration="2.525690734s" podCreationTimestamp="2025-09-29 10:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:01:54.518440554 +0000 UTC m=+1450.374368582" watchObservedRunningTime="2025-09-29 10:01:54.525690734 +0000 UTC m=+1450.381618762" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.610764 4991 scope.go:117] "RemoveContainer" 
containerID="25a5235b77becca96b23f33742d51f5a827a35860ed01ae1c608e3448d06a462" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.670321 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.683038 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.741439 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.744231 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.750339 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.777642 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.792742 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.792719858 podStartE2EDuration="3.792719858s" podCreationTimestamp="2025-09-29 10:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:01:54.605251164 +0000 UTC m=+1450.461179212" watchObservedRunningTime="2025-09-29 10:01:54.792719858 +0000 UTC m=+1450.648647886" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.861320 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4bzc\" (UniqueName: \"kubernetes.io/projected/4fb98437-2f55-42d9-af85-9c89223c7279-kube-api-access-z4bzc\") pod \"nova-api-0\" (UID: \"4fb98437-2f55-42d9-af85-9c89223c7279\") " pod="openstack/nova-api-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.861402 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fb98437-2f55-42d9-af85-9c89223c7279-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4fb98437-2f55-42d9-af85-9c89223c7279\") " pod="openstack/nova-api-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.861468 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fb98437-2f55-42d9-af85-9c89223c7279-config-data\") pod \"nova-api-0\" (UID: \"4fb98437-2f55-42d9-af85-9c89223c7279\") " pod="openstack/nova-api-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.861494 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fb98437-2f55-42d9-af85-9c89223c7279-logs\") pod \"nova-api-0\" (UID: \"4fb98437-2f55-42d9-af85-9c89223c7279\") " pod="openstack/nova-api-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.963570 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4bzc\" (UniqueName: \"kubernetes.io/projected/4fb98437-2f55-42d9-af85-9c89223c7279-kube-api-access-z4bzc\") pod \"nova-api-0\" (UID: \"4fb98437-2f55-42d9-af85-9c89223c7279\") " pod="openstack/nova-api-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.963905 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fb98437-2f55-42d9-af85-9c89223c7279-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4fb98437-2f55-42d9-af85-9c89223c7279\") " pod="openstack/nova-api-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.964055 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fb98437-2f55-42d9-af85-9c89223c7279-config-data\") pod \"nova-api-0\" (UID: \"4fb98437-2f55-42d9-af85-9c89223c7279\") " pod="openstack/nova-api-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.964110 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fb98437-2f55-42d9-af85-9c89223c7279-logs\") pod \"nova-api-0\" (UID: \"4fb98437-2f55-42d9-af85-9c89223c7279\") " pod="openstack/nova-api-0" Sep 29 10:01:54 crc kubenswrapper[4991]: I0929 10:01:54.964620 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fb98437-2f55-42d9-af85-9c89223c7279-logs\") pod \"nova-api-0\" (UID: \"4fb98437-2f55-42d9-af85-9c89223c7279\") " pod="openstack/nova-api-0" Sep 29 10:01:55 crc kubenswrapper[4991]: I0929 10:01:55.010313 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4bzc\" (UniqueName: \"kubernetes.io/projected/4fb98437-2f55-42d9-af85-9c89223c7279-kube-api-access-z4bzc\") pod \"nova-api-0\" (UID: \"4fb98437-2f55-42d9-af85-9c89223c7279\") " pod="openstack/nova-api-0" Sep 29 10:01:55 crc kubenswrapper[4991]: I0929 10:01:55.011347 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fb98437-2f55-42d9-af85-9c89223c7279-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4fb98437-2f55-42d9-af85-9c89223c7279\") " pod="openstack/nova-api-0" Sep 29 10:01:55 crc kubenswrapper[4991]: I0929 10:01:55.011979 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fb98437-2f55-42d9-af85-9c89223c7279-config-data\") pod \"nova-api-0\" (UID: \"4fb98437-2f55-42d9-af85-9c89223c7279\") " pod="openstack/nova-api-0" Sep 29 10:01:55 crc kubenswrapper[4991]: W0929 10:01:55.024557 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1dfc8022_4980_4498_b5dd_26c32bf05ea1.slice/crio-c16490b4910476680a2d563424849ff30a0defb6fb29c3bcd67e95f5e10474ad WatchSource:0}: Error finding container c16490b4910476680a2d563424849ff30a0defb6fb29c3bcd67e95f5e10474ad: Status 404 returned error can't find the container with id c16490b4910476680a2d563424849ff30a0defb6fb29c3bcd67e95f5e10474ad Sep 29 10:01:55 crc kubenswrapper[4991]: I0929 10:01:55.026170 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0153100b-0ea2-47e5-a068-737a13cff807" path="/var/lib/kubelet/pods/0153100b-0ea2-47e5-a068-737a13cff807/volumes" Sep 29 10:01:55 crc kubenswrapper[4991]: I0929 10:01:55.027461 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4" path="/var/lib/kubelet/pods/1e6b4163-e6b8-47d4-8a74-bea7c6dcc6a4/volumes" Sep 29 10:01:55 crc kubenswrapper[4991]: I0929 10:01:55.028284 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 29 10:01:55 crc kubenswrapper[4991]: I0929 10:01:55.109715 4991 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 29 10:01:55 crc kubenswrapper[4991]: I0929 10:01:55.513850 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1dfc8022-4980-4498-b5dd-26c32bf05ea1","Type":"ContainerStarted","Data":"e377af5808a972d33a04e88d5df48da2dad15a0c1ecfb0c65923be885b50ce66"} Sep 29 10:01:55 crc kubenswrapper[4991]: I0929 10:01:55.514182 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1dfc8022-4980-4498-b5dd-26c32bf05ea1","Type":"ContainerStarted","Data":"c16490b4910476680a2d563424849ff30a0defb6fb29c3bcd67e95f5e10474ad"} Sep 29 10:01:55 crc kubenswrapper[4991]: I0929 10:01:55.517217 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22c4e277-a117-4217-8f8b-63ef72a8ae42","Type":"ContainerStarted","Data":"3f390480f410a76572176ef757b5c0e7a1b951469445f6805a0a5fbeffa03d1d"} Sep 29 10:01:55 crc kubenswrapper[4991]: I0929 10:01:55.541676 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.541655419 podStartE2EDuration="2.541655419s" podCreationTimestamp="2025-09-29 10:01:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:01:55.531384259 +0000 UTC m=+1451.387312277" watchObservedRunningTime="2025-09-29 10:01:55.541655419 +0000 UTC m=+1451.397583437" Sep 29 10:01:55 crc kubenswrapper[4991]: I0929 10:01:55.604113 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 29 10:01:55 crc kubenswrapper[4991]: W0929 10:01:55.604777 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fb98437_2f55_42d9_af85_9c89223c7279.slice/crio-4166415c629b53c08fb47c68f93854bc491f1d6e5df2d6e040777a76dcbf5d36 WatchSource:0}: Error finding container 4166415c629b53c08fb47c68f93854bc491f1d6e5df2d6e040777a76dcbf5d36: Status 404 returned error can't find the container with id 4166415c629b53c08fb47c68f93854bc491f1d6e5df2d6e040777a76dcbf5d36 Sep 29 10:01:56 crc kubenswrapper[4991]: I0929 10:01:56.538883 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4fb98437-2f55-42d9-af85-9c89223c7279","Type":"ContainerStarted","Data":"e571fb8b4f97c7bd08cedb017dd7541ee74fcfc27bce345b68167be21960d9b4"} Sep 29 10:01:56 crc kubenswrapper[4991]: I0929 10:01:56.539327 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4fb98437-2f55-42d9-af85-9c89223c7279","Type":"ContainerStarted","Data":"d5be25f24dca61dbe15333502752e76cb420f6ac8dd46cc761f2afc3b6b70f69"} Sep 29 10:01:56 crc kubenswrapper[4991]: I0929 10:01:56.539340 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4fb98437-2f55-42d9-af85-9c89223c7279","Type":"ContainerStarted","Data":"4166415c629b53c08fb47c68f93854bc491f1d6e5df2d6e040777a76dcbf5d36"} Sep 29 10:01:56 crc kubenswrapper[4991]: I0929 10:01:56.574127 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.574109685 podStartE2EDuration="2.574109685s" podCreationTimestamp="2025-09-29 10:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:01:56.56667296 +0000 UTC 
m=+1452.422600998" watchObservedRunningTime="2025-09-29 10:01:56.574109685 +0000 UTC m=+1452.430037713" Sep 29 10:01:56 crc kubenswrapper[4991]: I0929 10:01:56.724568 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 29 10:01:56 crc kubenswrapper[4991]: I0929 10:01:56.725088 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 29 10:01:57 crc kubenswrapper[4991]: I0929 10:01:57.553463 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22c4e277-a117-4217-8f8b-63ef72a8ae42","Type":"ContainerStarted","Data":"3a2ae225196caa9fa4202ffd87295f0df0bd9423f0799197434d8c8a59f8f0fc"} Sep 29 10:01:57 crc kubenswrapper[4991]: I0929 10:01:57.553984 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Sep 29 10:01:57 crc kubenswrapper[4991]: I0929 10:01:57.582822 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.923775946 podStartE2EDuration="6.582801899s" podCreationTimestamp="2025-09-29 10:01:51 +0000 UTC" firstStartedPulling="2025-09-29 10:01:52.880524445 +0000 UTC m=+1448.736452483" lastFinishedPulling="2025-09-29 10:01:56.539550398 +0000 UTC m=+1452.395478436" observedRunningTime="2025-09-29 10:01:57.577565612 +0000 UTC m=+1453.433493650" watchObservedRunningTime="2025-09-29 10:01:57.582801899 +0000 UTC m=+1453.438729937" Sep 29 10:01:59 crc kubenswrapper[4991]: I0929 10:01:59.302321 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Sep 29 10:02:01 crc kubenswrapper[4991]: I0929 10:02:01.724472 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Sep 29 10:02:01 crc kubenswrapper[4991]: I0929 10:02:01.725076 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Sep 29 10:02:02 crc kubenswrapper[4991]: I0929 10:02:02.775196 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Sep 29 10:02:02 crc kubenswrapper[4991]: I0929 10:02:02.776558 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ddae0d54-721f-4bb5-9229-bb912c75d735" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.248:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 29 10:02:02 crc kubenswrapper[4991]: I0929 10:02:02.776699 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ddae0d54-721f-4bb5-9229-bb912c75d735" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.248:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 29 10:02:04 crc kubenswrapper[4991]: I0929 10:02:04.301449 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Sep 29 10:02:04 crc kubenswrapper[4991]: I0929 10:02:04.345688 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Sep 29 10:02:04 crc kubenswrapper[4991]: I0929 10:02:04.680690 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Sep 29 10:02:05 crc kubenswrapper[4991]: I0929 10:02:05.110552 4991 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 29 10:02:05 crc kubenswrapper[4991]: I0929 10:02:05.110784 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 29 10:02:06 crc kubenswrapper[4991]: I0929 10:02:06.193297 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4fb98437-2f55-42d9-af85-9c89223c7279" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.252:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 29 10:02:06 crc kubenswrapper[4991]: I0929 10:02:06.193319 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4fb98437-2f55-42d9-af85-9c89223c7279" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.252:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 29 10:02:07 crc kubenswrapper[4991]: I0929 10:02:07.947168 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:02:07 crc kubenswrapper[4991]: I0929 10:02:07.947262 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:02:07 crc kubenswrapper[4991]: I0929 10:02:07.947314 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 10:02:07 crc kubenswrapper[4991]: I0929 10:02:07.948267 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ad3178b322df7438724bf1bb291e672d09c64bed7c25651a01ec7bd03f3b6c1"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 10:02:07 crc kubenswrapper[4991]: I0929 10:02:07.948342 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://3ad3178b322df7438724bf1bb291e672d09c64bed7c25651a01ec7bd03f3b6c1" gracePeriod=600 Sep 29 10:02:08 crc kubenswrapper[4991]: I0929 10:02:08.711334 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="3ad3178b322df7438724bf1bb291e672d09c64bed7c25651a01ec7bd03f3b6c1" exitCode=0 Sep 29 10:02:08 crc kubenswrapper[4991]: I0929 10:02:08.711403 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"3ad3178b322df7438724bf1bb291e672d09c64bed7c25651a01ec7bd03f3b6c1"} Sep 29 10:02:08 crc kubenswrapper[4991]: I0929 10:02:08.711700 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" 
event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357"} Sep 29 10:02:08 crc kubenswrapper[4991]: I0929 10:02:08.711740 4991 scope.go:117] "RemoveContainer" containerID="dd958863ece520e3b95f59e822742012518d2caf2d8e6d1053c23d5cf887fc5a" Sep 29 10:02:10 crc kubenswrapper[4991]: E0929 10:02:10.889930 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9282497b_f696_4547_8dc6_11f5a0c867ab.slice/crio-conmon-f4b894a48caf5b2e180b37a0c45e45d382aa44b58d536b2b83a57a14562d1ccb.scope\": RecentStats: unable to find data in memory cache]" Sep 29 10:02:11 crc kubenswrapper[4991]: I0929 10:02:11.598981 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:11 crc kubenswrapper[4991]: I0929 10:02:11.647211 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9282497b-f696-4547-8dc6-11f5a0c867ab-combined-ca-bundle\") pod \"9282497b-f696-4547-8dc6-11f5a0c867ab\" (UID: \"9282497b-f696-4547-8dc6-11f5a0c867ab\") " Sep 29 10:02:11 crc kubenswrapper[4991]: I0929 10:02:11.647542 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w6zh\" (UniqueName: \"kubernetes.io/projected/9282497b-f696-4547-8dc6-11f5a0c867ab-kube-api-access-4w6zh\") pod \"9282497b-f696-4547-8dc6-11f5a0c867ab\" (UID: \"9282497b-f696-4547-8dc6-11f5a0c867ab\") " Sep 29 10:02:11 crc kubenswrapper[4991]: I0929 10:02:11.652784 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9282497b-f696-4547-8dc6-11f5a0c867ab-kube-api-access-4w6zh" (OuterVolumeSpecName: "kube-api-access-4w6zh") pod "9282497b-f696-4547-8dc6-11f5a0c867ab" (UID: "9282497b-f696-4547-8dc6-11f5a0c867ab"). InnerVolumeSpecName "kube-api-access-4w6zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:02:11 crc kubenswrapper[4991]: I0929 10:02:11.682812 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9282497b-f696-4547-8dc6-11f5a0c867ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9282497b-f696-4547-8dc6-11f5a0c867ab" (UID: "9282497b-f696-4547-8dc6-11f5a0c867ab"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:02:11 crc kubenswrapper[4991]: I0929 10:02:11.730244 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Sep 29 10:02:11 crc kubenswrapper[4991]: I0929 10:02:11.734564 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Sep 29 10:02:11 crc kubenswrapper[4991]: I0929 10:02:11.734766 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Sep 29 10:02:11 crc kubenswrapper[4991]: I0929 10:02:11.750601 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9282497b-f696-4547-8dc6-11f5a0c867ab-config-data\") pod \"9282497b-f696-4547-8dc6-11f5a0c867ab\" (UID: \"9282497b-f696-4547-8dc6-11f5a0c867ab\") " Sep 29 10:02:11 crc kubenswrapper[4991]: I0929 10:02:11.751134 4991 generic.go:334] "Generic (PLEG): container finished" podID="9282497b-f696-4547-8dc6-11f5a0c867ab" containerID="f4b894a48caf5b2e180b37a0c45e45d382aa44b58d536b2b83a57a14562d1ccb" exitCode=137 Sep 29 10:02:11 crc kubenswrapper[4991]: I0929 10:02:11.751347 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:11 crc kubenswrapper[4991]: I0929 10:02:11.751473 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9282497b-f696-4547-8dc6-11f5a0c867ab","Type":"ContainerDied","Data":"f4b894a48caf5b2e180b37a0c45e45d382aa44b58d536b2b83a57a14562d1ccb"} Sep 29 10:02:11 crc kubenswrapper[4991]: I0929 10:02:11.751513 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9282497b-f696-4547-8dc6-11f5a0c867ab","Type":"ContainerDied","Data":"f3ddd89d44f47e6d0c72d91076d20d49e2fc33865795edcf8e3a2ac51a9273b1"} Sep 29 10:02:11 crc kubenswrapper[4991]: I0929 10:02:11.751549 4991 scope.go:117] "RemoveContainer" containerID="f4b894a48caf5b2e180b37a0c45e45d382aa44b58d536b2b83a57a14562d1ccb" Sep 29 10:02:11 crc kubenswrapper[4991]: I0929 10:02:11.751931 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9282497b-f696-4547-8dc6-11f5a0c867ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:11 crc kubenswrapper[4991]: I0929 10:02:11.751974 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w6zh\" (UniqueName: \"kubernetes.io/projected/9282497b-f696-4547-8dc6-11f5a0c867ab-kube-api-access-4w6zh\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:11 crc kubenswrapper[4991]: I0929 10:02:11.774428 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Sep 29 10:02:11 crc kubenswrapper[4991]: I0929 10:02:11.780289 4991 scope.go:117] "RemoveContainer" containerID="f4b894a48caf5b2e180b37a0c45e45d382aa44b58d536b2b83a57a14562d1ccb" Sep 29 10:02:11 crc kubenswrapper[4991]: E0929 10:02:11.781894 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4b894a48caf5b2e180b37a0c45e45d382aa44b58d536b2b83a57a14562d1ccb\": container with ID starting with f4b894a48caf5b2e180b37a0c45e45d382aa44b58d536b2b83a57a14562d1ccb not found: ID does not exist" containerID="f4b894a48caf5b2e180b37a0c45e45d382aa44b58d536b2b83a57a14562d1ccb" Sep 29 10:02:11 crc kubenswrapper[4991]: 
I0929 10:02:11.781938 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4b894a48caf5b2e180b37a0c45e45d382aa44b58d536b2b83a57a14562d1ccb"} err="failed to get container status \"f4b894a48caf5b2e180b37a0c45e45d382aa44b58d536b2b83a57a14562d1ccb\": rpc error: code = NotFound desc = could not find container \"f4b894a48caf5b2e180b37a0c45e45d382aa44b58d536b2b83a57a14562d1ccb\": container with ID starting with f4b894a48caf5b2e180b37a0c45e45d382aa44b58d536b2b83a57a14562d1ccb not found: ID does not exist" Sep 29 10:02:11 crc kubenswrapper[4991]: I0929 10:02:11.813074 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9282497b-f696-4547-8dc6-11f5a0c867ab-config-data" (OuterVolumeSpecName: "config-data") pod "9282497b-f696-4547-8dc6-11f5a0c867ab" (UID: "9282497b-f696-4547-8dc6-11f5a0c867ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:02:11 crc kubenswrapper[4991]: I0929 10:02:11.853818 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9282497b-f696-4547-8dc6-11f5a0c867ab-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.089728 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.101514 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.114534 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 29 10:02:12 crc kubenswrapper[4991]: E0929 10:02:12.115090 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9282497b-f696-4547-8dc6-11f5a0c867ab" containerName="nova-cell1-novncproxy-novncproxy" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.115109 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="9282497b-f696-4547-8dc6-11f5a0c867ab" containerName="nova-cell1-novncproxy-novncproxy" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.115308 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="9282497b-f696-4547-8dc6-11f5a0c867ab" containerName="nova-cell1-novncproxy-novncproxy" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.116144 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.118532 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.118721 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.118837 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.128111 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.265331 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m29pj\" (UniqueName: \"kubernetes.io/projected/b4e6fdce-98a4-4477-b438-b93af1ab5e5b-kube-api-access-m29pj\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4e6fdce-98a4-4477-b438-b93af1ab5e5b\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.265413 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e6fdce-98a4-4477-b438-b93af1ab5e5b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4e6fdce-98a4-4477-b438-b93af1ab5e5b\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.265517 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e6fdce-98a4-4477-b438-b93af1ab5e5b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4e6fdce-98a4-4477-b438-b93af1ab5e5b\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.265652 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e6fdce-98a4-4477-b438-b93af1ab5e5b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4e6fdce-98a4-4477-b438-b93af1ab5e5b\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.265765 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e6fdce-98a4-4477-b438-b93af1ab5e5b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4e6fdce-98a4-4477-b438-b93af1ab5e5b\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.367509 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e6fdce-98a4-4477-b438-b93af1ab5e5b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4e6fdce-98a4-4477-b438-b93af1ab5e5b\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.367601 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e6fdce-98a4-4477-b438-b93af1ab5e5b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4e6fdce-98a4-4477-b438-b93af1ab5e5b\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 
10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.367706 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e6fdce-98a4-4477-b438-b93af1ab5e5b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4e6fdce-98a4-4477-b438-b93af1ab5e5b\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.367792 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m29pj\" (UniqueName: \"kubernetes.io/projected/b4e6fdce-98a4-4477-b438-b93af1ab5e5b-kube-api-access-m29pj\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4e6fdce-98a4-4477-b438-b93af1ab5e5b\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.367864 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e6fdce-98a4-4477-b438-b93af1ab5e5b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4e6fdce-98a4-4477-b438-b93af1ab5e5b\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.373635 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e6fdce-98a4-4477-b438-b93af1ab5e5b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4e6fdce-98a4-4477-b438-b93af1ab5e5b\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.373805 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4e6fdce-98a4-4477-b438-b93af1ab5e5b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4e6fdce-98a4-4477-b438-b93af1ab5e5b\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.374184 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4e6fdce-98a4-4477-b438-b93af1ab5e5b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4e6fdce-98a4-4477-b438-b93af1ab5e5b\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.374510 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4e6fdce-98a4-4477-b438-b93af1ab5e5b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4e6fdce-98a4-4477-b438-b93af1ab5e5b\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.392562 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m29pj\" (UniqueName: \"kubernetes.io/projected/b4e6fdce-98a4-4477-b438-b93af1ab5e5b-kube-api-access-m29pj\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4e6fdce-98a4-4477-b438-b93af1ab5e5b\") " pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.441332 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.955067 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9282497b-f696-4547-8dc6-11f5a0c867ab" path="/var/lib/kubelet/pods/9282497b-f696-4547-8dc6-11f5a0c867ab/volumes" Sep 29 10:02:12 crc kubenswrapper[4991]: I0929 10:02:12.956161 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 29 10:02:12 crc kubenswrapper[4991]: W0929 10:02:12.956711 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4e6fdce_98a4_4477_b438_b93af1ab5e5b.slice/crio-3f2f0e2072a78352bfe48c9aaecd4ecc07a315b04a02465945945e16d7ad90bf WatchSource:0}: Error finding container 3f2f0e2072a78352bfe48c9aaecd4ecc07a315b04a02465945945e16d7ad90bf: Status 404 returned error can't find the container with id 3f2f0e2072a78352bfe48c9aaecd4ecc07a315b04a02465945945e16d7ad90bf Sep 29 10:02:13 crc kubenswrapper[4991]: I0929 10:02:13.779981 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b4e6fdce-98a4-4477-b438-b93af1ab5e5b","Type":"ContainerStarted","Data":"d32162d927381bef81a772a0e7c7ea9e14e2afbe9840b74802323b1cb33581aa"} Sep 29 10:02:13 crc kubenswrapper[4991]: I0929 10:02:13.780404 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b4e6fdce-98a4-4477-b438-b93af1ab5e5b","Type":"ContainerStarted","Data":"3f2f0e2072a78352bfe48c9aaecd4ecc07a315b04a02465945945e16d7ad90bf"} Sep 29 10:02:13 crc kubenswrapper[4991]: I0929 10:02:13.819917 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.819894606 podStartE2EDuration="1.819894606s" podCreationTimestamp="2025-09-29 10:02:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:02:13.805302003 +0000 UTC m=+1469.661230051" watchObservedRunningTime="2025-09-29 10:02:13.819894606 +0000 UTC m=+1469.675822644" Sep 29 10:02:15 crc kubenswrapper[4991]: I0929 10:02:15.115482 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Sep 29 10:02:15 crc kubenswrapper[4991]: I0929 10:02:15.116331 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Sep 29 10:02:15 crc kubenswrapper[4991]: I0929 10:02:15.119140 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Sep 29 10:02:15 crc kubenswrapper[4991]: I0929 10:02:15.119326 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Sep 29 10:02:15 crc kubenswrapper[4991]: I0929 10:02:15.807972 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Sep 29 10:02:15 crc kubenswrapper[4991]: I0929 10:02:15.812361 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.064020 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-fj7w9"] Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.066628 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.105264 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-fj7w9"] Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.179445 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-fj7w9\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.179504 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-fj7w9\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.179778 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbzrc\" (UniqueName: \"kubernetes.io/projected/1e665248-de57-45c0-8b0e-5ebc858626aa-kube-api-access-dbzrc\") pod \"dnsmasq-dns-6d99f6bc7f-fj7w9\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.179893 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-config\") pod \"dnsmasq-dns-6d99f6bc7f-fj7w9\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.179975 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-fj7w9\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.180126 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-fj7w9\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.282842 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbzrc\" (UniqueName: \"kubernetes.io/projected/1e665248-de57-45c0-8b0e-5ebc858626aa-kube-api-access-dbzrc\") pod \"dnsmasq-dns-6d99f6bc7f-fj7w9\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.282921 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-config\") pod \"dnsmasq-dns-6d99f6bc7f-fj7w9\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.282969 4991 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-fj7w9\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.283051 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-fj7w9\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.283206 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-fj7w9\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.283224 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-fj7w9\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.284397 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-fj7w9\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.284435 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-fj7w9\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.284449 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-fj7w9\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.285249 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-config\") pod \"dnsmasq-dns-6d99f6bc7f-fj7w9\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.285254 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-fj7w9\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.309528 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbzrc\" (UniqueName: 
\"kubernetes.io/projected/1e665248-de57-45c0-8b0e-5ebc858626aa-kube-api-access-dbzrc\") pod \"dnsmasq-dns-6d99f6bc7f-fj7w9\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.407728 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:02:16 crc kubenswrapper[4991]: I0929 10:02:16.950040 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-fj7w9"] Sep 29 10:02:17 crc kubenswrapper[4991]: I0929 10:02:17.441603 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:17 crc kubenswrapper[4991]: I0929 10:02:17.835504 4991 generic.go:334] "Generic (PLEG): container finished" podID="1e665248-de57-45c0-8b0e-5ebc858626aa" containerID="fe268abd38322fc4aaf5c7b3ac4d15b335b9033fb1be6dc4c7fb49b8e010e539" exitCode=0 Sep 29 10:02:17 crc kubenswrapper[4991]: I0929 10:02:17.835765 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" event={"ID":"1e665248-de57-45c0-8b0e-5ebc858626aa","Type":"ContainerDied","Data":"fe268abd38322fc4aaf5c7b3ac4d15b335b9033fb1be6dc4c7fb49b8e010e539"} Sep 29 10:02:17 crc kubenswrapper[4991]: I0929 10:02:17.835856 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" event={"ID":"1e665248-de57-45c0-8b0e-5ebc858626aa","Type":"ContainerStarted","Data":"110f285f49c4c20ca0c85f5f8ebe1c43d9e7f8c3c3402abe482ec4d8d4d237a2"} Sep 29 10:02:18 crc kubenswrapper[4991]: I0929 10:02:18.543256 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Sep 29 10:02:18 crc kubenswrapper[4991]: I0929 10:02:18.849405 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" event={"ID":"1e665248-de57-45c0-8b0e-5ebc858626aa","Type":"ContainerStarted","Data":"786c8af6ca8b957cdcc9cf7e0a745774fa7bf0e571ff0db36144bdffab6cd14a"} Sep 29 10:02:18 crc kubenswrapper[4991]: I0929 10:02:18.849537 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4fb98437-2f55-42d9-af85-9c89223c7279" containerName="nova-api-log" containerID="cri-o://d5be25f24dca61dbe15333502752e76cb420f6ac8dd46cc761f2afc3b6b70f69" gracePeriod=30 Sep 29 10:02:18 crc kubenswrapper[4991]: I0929 10:02:18.849753 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4fb98437-2f55-42d9-af85-9c89223c7279" containerName="nova-api-api" containerID="cri-o://e571fb8b4f97c7bd08cedb017dd7541ee74fcfc27bce345b68167be21960d9b4" gracePeriod=30 Sep 29 10:02:18 crc kubenswrapper[4991]: I0929 10:02:18.880842 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" podStartSLOduration=2.88081852 podStartE2EDuration="2.88081852s" podCreationTimestamp="2025-09-29 10:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:02:18.876601579 +0000 UTC m=+1474.732529617" watchObservedRunningTime="2025-09-29 10:02:18.88081852 +0000 UTC m=+1474.736746548" Sep 29 10:02:19 crc kubenswrapper[4991]: I0929 10:02:19.406781 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:02:19 crc kubenswrapper[4991]: I0929 10:02:19.407126 4991 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22c4e277-a117-4217-8f8b-63ef72a8ae42" containerName="ceilometer-central-agent" containerID="cri-o://b8d2cf4bfb7604ac5298853b3d7e3a70d3acba6d9d5d68291d83b80cc718ea45" gracePeriod=30 Sep 29 10:02:19 crc kubenswrapper[4991]: I0929 10:02:19.407223 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22c4e277-a117-4217-8f8b-63ef72a8ae42" containerName="ceilometer-notification-agent" containerID="cri-o://a4c58eac7e0941518332e1b0df2fa8001df714dcb019be248e23e3eaf7e1c6aa" gracePeriod=30 Sep 29 10:02:19 crc kubenswrapper[4991]: I0929 10:02:19.407225 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22c4e277-a117-4217-8f8b-63ef72a8ae42" containerName="sg-core" containerID="cri-o://3f390480f410a76572176ef757b5c0e7a1b951469445f6805a0a5fbeffa03d1d" gracePeriod=30 Sep 29 10:02:19 crc kubenswrapper[4991]: I0929 10:02:19.407334 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="22c4e277-a117-4217-8f8b-63ef72a8ae42" containerName="proxy-httpd" containerID="cri-o://3a2ae225196caa9fa4202ffd87295f0df0bd9423f0799197434d8c8a59f8f0fc" gracePeriod=30 Sep 29 10:02:19 crc kubenswrapper[4991]: I0929 10:02:19.420638 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="22c4e277-a117-4217-8f8b-63ef72a8ae42" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Sep 29 10:02:19 crc kubenswrapper[4991]: I0929 10:02:19.874939 4991 generic.go:334] "Generic (PLEG): container finished" podID="22c4e277-a117-4217-8f8b-63ef72a8ae42" containerID="3a2ae225196caa9fa4202ffd87295f0df0bd9423f0799197434d8c8a59f8f0fc" exitCode=0 Sep 29 10:02:19 crc kubenswrapper[4991]: I0929 10:02:19.876923 4991 generic.go:334] "Generic (PLEG): container finished" podID="22c4e277-a117-4217-8f8b-63ef72a8ae42" containerID="3f390480f410a76572176ef757b5c0e7a1b951469445f6805a0a5fbeffa03d1d" exitCode=2 Sep 29 10:02:19 crc kubenswrapper[4991]: I0929 10:02:19.877048 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22c4e277-a117-4217-8f8b-63ef72a8ae42","Type":"ContainerDied","Data":"3a2ae225196caa9fa4202ffd87295f0df0bd9423f0799197434d8c8a59f8f0fc"} Sep 29 10:02:19 crc kubenswrapper[4991]: I0929 10:02:19.877088 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22c4e277-a117-4217-8f8b-63ef72a8ae42","Type":"ContainerDied","Data":"3f390480f410a76572176ef757b5c0e7a1b951469445f6805a0a5fbeffa03d1d"} Sep 29 10:02:19 crc kubenswrapper[4991]: I0929 10:02:19.888741 4991 generic.go:334] "Generic (PLEG): container finished" podID="4fb98437-2f55-42d9-af85-9c89223c7279" containerID="d5be25f24dca61dbe15333502752e76cb420f6ac8dd46cc761f2afc3b6b70f69" exitCode=143 Sep 29 10:02:19 crc kubenswrapper[4991]: I0929 10:02:19.890202 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4fb98437-2f55-42d9-af85-9c89223c7279","Type":"ContainerDied","Data":"d5be25f24dca61dbe15333502752e76cb420f6ac8dd46cc761f2afc3b6b70f69"} Sep 29 10:02:19 crc kubenswrapper[4991]: I0929 10:02:19.890283 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:02:20 crc kubenswrapper[4991]: I0929 10:02:20.902142 4991 generic.go:334] "Generic 
(PLEG): container finished" podID="22c4e277-a117-4217-8f8b-63ef72a8ae42" containerID="a4c58eac7e0941518332e1b0df2fa8001df714dcb019be248e23e3eaf7e1c6aa" exitCode=0 Sep 29 10:02:20 crc kubenswrapper[4991]: I0929 10:02:20.902171 4991 generic.go:334] "Generic (PLEG): container finished" podID="22c4e277-a117-4217-8f8b-63ef72a8ae42" containerID="b8d2cf4bfb7604ac5298853b3d7e3a70d3acba6d9d5d68291d83b80cc718ea45" exitCode=0 Sep 29 10:02:20 crc kubenswrapper[4991]: I0929 10:02:20.902230 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22c4e277-a117-4217-8f8b-63ef72a8ae42","Type":"ContainerDied","Data":"a4c58eac7e0941518332e1b0df2fa8001df714dcb019be248e23e3eaf7e1c6aa"} Sep 29 10:02:20 crc kubenswrapper[4991]: I0929 10:02:20.902276 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22c4e277-a117-4217-8f8b-63ef72a8ae42","Type":"ContainerDied","Data":"b8d2cf4bfb7604ac5298853b3d7e3a70d3acba6d9d5d68291d83b80cc718ea45"} Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.542636 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.641265 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-combined-ca-bundle\") pod \"22c4e277-a117-4217-8f8b-63ef72a8ae42\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.641634 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22c4e277-a117-4217-8f8b-63ef72a8ae42-log-httpd\") pod \"22c4e277-a117-4217-8f8b-63ef72a8ae42\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.641677 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-scripts\") pod \"22c4e277-a117-4217-8f8b-63ef72a8ae42\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.641710 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc9bq\" (UniqueName: \"kubernetes.io/projected/22c4e277-a117-4217-8f8b-63ef72a8ae42-kube-api-access-rc9bq\") pod \"22c4e277-a117-4217-8f8b-63ef72a8ae42\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.641804 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-sg-core-conf-yaml\") pod \"22c4e277-a117-4217-8f8b-63ef72a8ae42\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.641848 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-config-data\") pod \"22c4e277-a117-4217-8f8b-63ef72a8ae42\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.641914 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22c4e277-a117-4217-8f8b-63ef72a8ae42-run-httpd\") pod 
\"22c4e277-a117-4217-8f8b-63ef72a8ae42\" (UID: \"22c4e277-a117-4217-8f8b-63ef72a8ae42\") " Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.642878 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22c4e277-a117-4217-8f8b-63ef72a8ae42-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "22c4e277-a117-4217-8f8b-63ef72a8ae42" (UID: "22c4e277-a117-4217-8f8b-63ef72a8ae42"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.643365 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22c4e277-a117-4217-8f8b-63ef72a8ae42-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "22c4e277-a117-4217-8f8b-63ef72a8ae42" (UID: "22c4e277-a117-4217-8f8b-63ef72a8ae42"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.649665 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-scripts" (OuterVolumeSpecName: "scripts") pod "22c4e277-a117-4217-8f8b-63ef72a8ae42" (UID: "22c4e277-a117-4217-8f8b-63ef72a8ae42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.652106 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c4e277-a117-4217-8f8b-63ef72a8ae42-kube-api-access-rc9bq" (OuterVolumeSpecName: "kube-api-access-rc9bq") pod "22c4e277-a117-4217-8f8b-63ef72a8ae42" (UID: "22c4e277-a117-4217-8f8b-63ef72a8ae42"). InnerVolumeSpecName "kube-api-access-rc9bq". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.681014 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "22c4e277-a117-4217-8f8b-63ef72a8ae42" (UID: "22c4e277-a117-4217-8f8b-63ef72a8ae42"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.728751 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.744190 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-scripts\") pod \"ae1f4235-0ad5-4d8c-b84e-ebcbea326284\" (UID: \"ae1f4235-0ad5-4d8c-b84e-ebcbea326284\") " Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.744323 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-combined-ca-bundle\") pod \"ae1f4235-0ad5-4d8c-b84e-ebcbea326284\" (UID: \"ae1f4235-0ad5-4d8c-b84e-ebcbea326284\") " Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.744353 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rr65\" (UniqueName: \"kubernetes.io/projected/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-kube-api-access-7rr65\") pod \"ae1f4235-0ad5-4d8c-b84e-ebcbea326284\" (UID: \"ae1f4235-0ad5-4d8c-b84e-ebcbea326284\") " Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.744459 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-config-data\") pod \"ae1f4235-0ad5-4d8c-b84e-ebcbea326284\" (UID: \"ae1f4235-0ad5-4d8c-b84e-ebcbea326284\") " Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.744904 4991 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.744922 4991 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22c4e277-a117-4217-8f8b-63ef72a8ae42-run-httpd\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.744930 4991 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/22c4e277-a117-4217-8f8b-63ef72a8ae42-log-httpd\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.744941 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.744964 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc9bq\" (UniqueName: \"kubernetes.io/projected/22c4e277-a117-4217-8f8b-63ef72a8ae42-kube-api-access-rc9bq\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.749140 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-kube-api-access-7rr65" (OuterVolumeSpecName: "kube-api-access-7rr65") pod "ae1f4235-0ad5-4d8c-b84e-ebcbea326284" (UID: "ae1f4235-0ad5-4d8c-b84e-ebcbea326284"). InnerVolumeSpecName "kube-api-access-7rr65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.772934 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22c4e277-a117-4217-8f8b-63ef72a8ae42" (UID: "22c4e277-a117-4217-8f8b-63ef72a8ae42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.778153 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-scripts" (OuterVolumeSpecName: "scripts") pod "ae1f4235-0ad5-4d8c-b84e-ebcbea326284" (UID: "ae1f4235-0ad5-4d8c-b84e-ebcbea326284"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.807086 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-config-data" (OuterVolumeSpecName: "config-data") pod "22c4e277-a117-4217-8f8b-63ef72a8ae42" (UID: "22c4e277-a117-4217-8f8b-63ef72a8ae42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.846641 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.846698 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.846714 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rr65\" (UniqueName: \"kubernetes.io/projected/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-kube-api-access-7rr65\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.846746 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22c4e277-a117-4217-8f8b-63ef72a8ae42-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.898061 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-config-data" (OuterVolumeSpecName: "config-data") pod "ae1f4235-0ad5-4d8c-b84e-ebcbea326284" (UID: "ae1f4235-0ad5-4d8c-b84e-ebcbea326284"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.917510 4991 generic.go:334] "Generic (PLEG): container finished" podID="ae1f4235-0ad5-4d8c-b84e-ebcbea326284" containerID="7ea8e47d058619473798c907e805b0a2ab96ae5d85f5fae755db18b636ff0f00" exitCode=137 Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.918787 4991 generic.go:334] "Generic (PLEG): container finished" podID="ae1f4235-0ad5-4d8c-b84e-ebcbea326284" containerID="50e270cd455603b317af0e3357c24d0a0d996e9892604b84a21b10df081c6011" exitCode=137 Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.918988 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ae1f4235-0ad5-4d8c-b84e-ebcbea326284","Type":"ContainerDied","Data":"7ea8e47d058619473798c907e805b0a2ab96ae5d85f5fae755db18b636ff0f00"} Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.919141 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ae1f4235-0ad5-4d8c-b84e-ebcbea326284","Type":"ContainerDied","Data":"50e270cd455603b317af0e3357c24d0a0d996e9892604b84a21b10df081c6011"} Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.919221 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ae1f4235-0ad5-4d8c-b84e-ebcbea326284","Type":"ContainerDied","Data":"293bc4a537d829532d3da2fc06dc95dd49606c8745650c6ed21ff1630fda1114"} Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.919299 4991 scope.go:117] "RemoveContainer" containerID="7ea8e47d058619473798c907e805b0a2ab96ae5d85f5fae755db18b636ff0f00" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.920240 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.921162 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae1f4235-0ad5-4d8c-b84e-ebcbea326284" (UID: "ae1f4235-0ad5-4d8c-b84e-ebcbea326284"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.926763 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"22c4e277-a117-4217-8f8b-63ef72a8ae42","Type":"ContainerDied","Data":"2f61b709c9b437898c250b7248dabb96ecb5194e41dcea4ea435e5167d772971"} Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.927150 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.948624 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:21 crc kubenswrapper[4991]: I0929 10:02:21.948674 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae1f4235-0ad5-4d8c-b84e-ebcbea326284-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.177468 4991 scope.go:117] "RemoveContainer" containerID="50e270cd455603b317af0e3357c24d0a0d996e9892604b84a21b10df081c6011" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.210802 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.241034 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.250234 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:02:22 crc kubenswrapper[4991]: E0929 10:02:22.250792 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22c4e277-a117-4217-8f8b-63ef72a8ae42" containerName="proxy-httpd" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.250824 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="22c4e277-a117-4217-8f8b-63ef72a8ae42" containerName="proxy-httpd" Sep 29 10:02:22 crc kubenswrapper[4991]: E0929 10:02:22.250838 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae1f4235-0ad5-4d8c-b84e-ebcbea326284" containerName="aodh-notifier" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.250846 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae1f4235-0ad5-4d8c-b84e-ebcbea326284" containerName="aodh-notifier" Sep 29 10:02:22 crc kubenswrapper[4991]: E0929 10:02:22.250875 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22c4e277-a117-4217-8f8b-63ef72a8ae42" containerName="sg-core" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.250881 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="22c4e277-a117-4217-8f8b-63ef72a8ae42" containerName="sg-core" Sep 29 10:02:22 crc kubenswrapper[4991]: E0929 10:02:22.250901 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae1f4235-0ad5-4d8c-b84e-ebcbea326284" containerName="aodh-listener" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.250907 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae1f4235-0ad5-4d8c-b84e-ebcbea326284" containerName="aodh-listener" Sep 29 10:02:22 crc kubenswrapper[4991]: E0929 10:02:22.250923 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae1f4235-0ad5-4d8c-b84e-ebcbea326284" containerName="aodh-evaluator" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.250929 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae1f4235-0ad5-4d8c-b84e-ebcbea326284" containerName="aodh-evaluator" Sep 29 10:02:22 crc kubenswrapper[4991]: E0929 10:02:22.250941 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae1f4235-0ad5-4d8c-b84e-ebcbea326284" containerName="aodh-api" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.250960 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae1f4235-0ad5-4d8c-b84e-ebcbea326284" containerName="aodh-api" Sep 29 
10:02:22 crc kubenswrapper[4991]: E0929 10:02:22.250973 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22c4e277-a117-4217-8f8b-63ef72a8ae42" containerName="ceilometer-central-agent" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.250979 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="22c4e277-a117-4217-8f8b-63ef72a8ae42" containerName="ceilometer-central-agent" Sep 29 10:02:22 crc kubenswrapper[4991]: E0929 10:02:22.250992 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22c4e277-a117-4217-8f8b-63ef72a8ae42" containerName="ceilometer-notification-agent" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.250997 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="22c4e277-a117-4217-8f8b-63ef72a8ae42" containerName="ceilometer-notification-agent" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.251251 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="22c4e277-a117-4217-8f8b-63ef72a8ae42" containerName="proxy-httpd" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.251265 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="22c4e277-a117-4217-8f8b-63ef72a8ae42" containerName="ceilometer-notification-agent" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.251272 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae1f4235-0ad5-4d8c-b84e-ebcbea326284" containerName="aodh-notifier" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.251285 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="22c4e277-a117-4217-8f8b-63ef72a8ae42" containerName="ceilometer-central-agent" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.251303 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae1f4235-0ad5-4d8c-b84e-ebcbea326284" containerName="aodh-api" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.251308 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae1f4235-0ad5-4d8c-b84e-ebcbea326284" containerName="aodh-evaluator" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.251321 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae1f4235-0ad5-4d8c-b84e-ebcbea326284" containerName="aodh-listener" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.251333 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="22c4e277-a117-4217-8f8b-63ef72a8ae42" containerName="sg-core" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.253641 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.255238 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmxkn\" (UniqueName: \"kubernetes.io/projected/5ad208c7-0fbb-48f9-9275-912bf20bea5a-kube-api-access-jmxkn\") pod \"ceilometer-0\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.255692 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-scripts\") pod \"ceilometer-0\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.255761 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ad208c7-0fbb-48f9-9275-912bf20bea5a-run-httpd\") pod \"ceilometer-0\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.255785 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-config-data\") pod \"ceilometer-0\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.255804 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.255832 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.255868 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ad208c7-0fbb-48f9-9275-912bf20bea5a-log-httpd\") pod \"ceilometer-0\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.257483 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.257685 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.262537 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.270510 4991 scope.go:117] "RemoveContainer" containerID="9ec98a89ffd1cb8ca3baecf7840870d164ec0e67f37ca94396e47c85c0028ddf" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.317227 4991 scope.go:117] "RemoveContainer" containerID="dd81b1665a1d83895a217256b023e2464a027cd105b7977ad9336b0d21718920" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 
10:02:22.340494 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.359332 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmxkn\" (UniqueName: \"kubernetes.io/projected/5ad208c7-0fbb-48f9-9275-912bf20bea5a-kube-api-access-jmxkn\") pod \"ceilometer-0\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.359462 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-scripts\") pod \"ceilometer-0\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.359501 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ad208c7-0fbb-48f9-9275-912bf20bea5a-run-httpd\") pod \"ceilometer-0\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.359523 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-config-data\") pod \"ceilometer-0\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.359544 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.359566 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.359592 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ad208c7-0fbb-48f9-9275-912bf20bea5a-log-httpd\") pod \"ceilometer-0\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.360153 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ad208c7-0fbb-48f9-9275-912bf20bea5a-log-httpd\") pod \"ceilometer-0\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.361517 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ad208c7-0fbb-48f9-9275-912bf20bea5a-run-httpd\") pod \"ceilometer-0\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.362515 4991 scope.go:117] "RemoveContainer" containerID="7ea8e47d058619473798c907e805b0a2ab96ae5d85f5fae755db18b636ff0f00" Sep 29 10:02:22 crc kubenswrapper[4991]: E0929 10:02:22.362887 4991 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"7ea8e47d058619473798c907e805b0a2ab96ae5d85f5fae755db18b636ff0f00\": container with ID starting with 7ea8e47d058619473798c907e805b0a2ab96ae5d85f5fae755db18b636ff0f00 not found: ID does not exist" containerID="7ea8e47d058619473798c907e805b0a2ab96ae5d85f5fae755db18b636ff0f00" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.362926 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ea8e47d058619473798c907e805b0a2ab96ae5d85f5fae755db18b636ff0f00"} err="failed to get container status \"7ea8e47d058619473798c907e805b0a2ab96ae5d85f5fae755db18b636ff0f00\": rpc error: code = NotFound desc = could not find container \"7ea8e47d058619473798c907e805b0a2ab96ae5d85f5fae755db18b636ff0f00\": container with ID starting with 7ea8e47d058619473798c907e805b0a2ab96ae5d85f5fae755db18b636ff0f00 not found: ID does not exist" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.362983 4991 scope.go:117] "RemoveContainer" containerID="50e270cd455603b317af0e3357c24d0a0d996e9892604b84a21b10df081c6011" Sep 29 10:02:22 crc kubenswrapper[4991]: E0929 10:02:22.363349 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50e270cd455603b317af0e3357c24d0a0d996e9892604b84a21b10df081c6011\": container with ID starting with 50e270cd455603b317af0e3357c24d0a0d996e9892604b84a21b10df081c6011 not found: ID does not exist" containerID="50e270cd455603b317af0e3357c24d0a0d996e9892604b84a21b10df081c6011" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.363374 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50e270cd455603b317af0e3357c24d0a0d996e9892604b84a21b10df081c6011"} err="failed to get container status \"50e270cd455603b317af0e3357c24d0a0d996e9892604b84a21b10df081c6011\": rpc error: code = NotFound desc = could not find container \"50e270cd455603b317af0e3357c24d0a0d996e9892604b84a21b10df081c6011\": container with ID starting with 50e270cd455603b317af0e3357c24d0a0d996e9892604b84a21b10df081c6011 not found: ID does not exist" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.363391 4991 scope.go:117] "RemoveContainer" containerID="9ec98a89ffd1cb8ca3baecf7840870d164ec0e67f37ca94396e47c85c0028ddf" Sep 29 10:02:22 crc kubenswrapper[4991]: E0929 10:02:22.364119 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ec98a89ffd1cb8ca3baecf7840870d164ec0e67f37ca94396e47c85c0028ddf\": container with ID starting with 9ec98a89ffd1cb8ca3baecf7840870d164ec0e67f37ca94396e47c85c0028ddf not found: ID does not exist" containerID="9ec98a89ffd1cb8ca3baecf7840870d164ec0e67f37ca94396e47c85c0028ddf" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.364147 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ec98a89ffd1cb8ca3baecf7840870d164ec0e67f37ca94396e47c85c0028ddf"} err="failed to get container status \"9ec98a89ffd1cb8ca3baecf7840870d164ec0e67f37ca94396e47c85c0028ddf\": rpc error: code = NotFound desc = could not find container \"9ec98a89ffd1cb8ca3baecf7840870d164ec0e67f37ca94396e47c85c0028ddf\": container with ID starting with 9ec98a89ffd1cb8ca3baecf7840870d164ec0e67f37ca94396e47c85c0028ddf not found: ID does not exist" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.364165 4991 scope.go:117] "RemoveContainer" 
containerID="dd81b1665a1d83895a217256b023e2464a027cd105b7977ad9336b0d21718920" Sep 29 10:02:22 crc kubenswrapper[4991]: E0929 10:02:22.364392 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd81b1665a1d83895a217256b023e2464a027cd105b7977ad9336b0d21718920\": container with ID starting with dd81b1665a1d83895a217256b023e2464a027cd105b7977ad9336b0d21718920 not found: ID does not exist" containerID="dd81b1665a1d83895a217256b023e2464a027cd105b7977ad9336b0d21718920" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.364415 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd81b1665a1d83895a217256b023e2464a027cd105b7977ad9336b0d21718920"} err="failed to get container status \"dd81b1665a1d83895a217256b023e2464a027cd105b7977ad9336b0d21718920\": rpc error: code = NotFound desc = could not find container \"dd81b1665a1d83895a217256b023e2464a027cd105b7977ad9336b0d21718920\": container with ID starting with dd81b1665a1d83895a217256b023e2464a027cd105b7977ad9336b0d21718920 not found: ID does not exist" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.364433 4991 scope.go:117] "RemoveContainer" containerID="7ea8e47d058619473798c907e805b0a2ab96ae5d85f5fae755db18b636ff0f00" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.364741 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ea8e47d058619473798c907e805b0a2ab96ae5d85f5fae755db18b636ff0f00"} err="failed to get container status \"7ea8e47d058619473798c907e805b0a2ab96ae5d85f5fae755db18b636ff0f00\": rpc error: code = NotFound desc = could not find container \"7ea8e47d058619473798c907e805b0a2ab96ae5d85f5fae755db18b636ff0f00\": container with ID starting with 7ea8e47d058619473798c907e805b0a2ab96ae5d85f5fae755db18b636ff0f00 not found: ID does not exist" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.364756 4991 scope.go:117] "RemoveContainer" containerID="50e270cd455603b317af0e3357c24d0a0d996e9892604b84a21b10df081c6011" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.365027 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50e270cd455603b317af0e3357c24d0a0d996e9892604b84a21b10df081c6011"} err="failed to get container status \"50e270cd455603b317af0e3357c24d0a0d996e9892604b84a21b10df081c6011\": rpc error: code = NotFound desc = could not find container \"50e270cd455603b317af0e3357c24d0a0d996e9892604b84a21b10df081c6011\": container with ID starting with 50e270cd455603b317af0e3357c24d0a0d996e9892604b84a21b10df081c6011 not found: ID does not exist" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.365046 4991 scope.go:117] "RemoveContainer" containerID="9ec98a89ffd1cb8ca3baecf7840870d164ec0e67f37ca94396e47c85c0028ddf" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.366033 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-config-data\") pod \"ceilometer-0\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.366150 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ec98a89ffd1cb8ca3baecf7840870d164ec0e67f37ca94396e47c85c0028ddf"} err="failed to get container status \"9ec98a89ffd1cb8ca3baecf7840870d164ec0e67f37ca94396e47c85c0028ddf\": rpc error: code = 
NotFound desc = could not find container \"9ec98a89ffd1cb8ca3baecf7840870d164ec0e67f37ca94396e47c85c0028ddf\": container with ID starting with 9ec98a89ffd1cb8ca3baecf7840870d164ec0e67f37ca94396e47c85c0028ddf not found: ID does not exist" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.366177 4991 scope.go:117] "RemoveContainer" containerID="dd81b1665a1d83895a217256b023e2464a027cd105b7977ad9336b0d21718920" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.366579 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-scripts\") pod \"ceilometer-0\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.366793 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd81b1665a1d83895a217256b023e2464a027cd105b7977ad9336b0d21718920"} err="failed to get container status \"dd81b1665a1d83895a217256b023e2464a027cd105b7977ad9336b0d21718920\": rpc error: code = NotFound desc = could not find container \"dd81b1665a1d83895a217256b023e2464a027cd105b7977ad9336b0d21718920\": container with ID starting with dd81b1665a1d83895a217256b023e2464a027cd105b7977ad9336b0d21718920 not found: ID does not exist" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.366814 4991 scope.go:117] "RemoveContainer" containerID="3a2ae225196caa9fa4202ffd87295f0df0bd9423f0799197434d8c8a59f8f0fc" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.368033 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.368122 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.372569 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.376606 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmxkn\" (UniqueName: \"kubernetes.io/projected/5ad208c7-0fbb-48f9-9275-912bf20bea5a-kube-api-access-jmxkn\") pod \"ceilometer-0\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.380840 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.390596 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.393326 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.400268 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.400350 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.400523 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.400757 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.401594 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-qxcgx" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.442085 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.464535 4991 scope.go:117] "RemoveContainer" containerID="3f390480f410a76572176ef757b5c0e7a1b951469445f6805a0a5fbeffa03d1d" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.480104 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.498527 4991 scope.go:117] "RemoveContainer" containerID="a4c58eac7e0941518332e1b0df2fa8001df714dcb019be248e23e3eaf7e1c6aa" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.518398 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.524316 4991 scope.go:117] "RemoveContainer" containerID="b8d2cf4bfb7604ac5298853b3d7e3a70d3acba6d9d5d68291d83b80cc718ea45" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.566156 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-public-tls-certs\") pod \"aodh-0\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " pod="openstack/aodh-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.566240 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-config-data\") pod \"aodh-0\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " pod="openstack/aodh-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.566259 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlcrx\" (UniqueName: \"kubernetes.io/projected/2b7115bc-655b-4f41-9983-ecd70758ac95-kube-api-access-qlcrx\") pod \"aodh-0\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " pod="openstack/aodh-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.567732 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " pod="openstack/aodh-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.567850 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-internal-tls-certs\") pod \"aodh-0\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " pod="openstack/aodh-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.567920 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-scripts\") pod \"aodh-0\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " pod="openstack/aodh-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.594224 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.669325 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fb98437-2f55-42d9-af85-9c89223c7279-combined-ca-bundle\") pod \"4fb98437-2f55-42d9-af85-9c89223c7279\" (UID: \"4fb98437-2f55-42d9-af85-9c89223c7279\") " Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.669705 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fb98437-2f55-42d9-af85-9c89223c7279-config-data\") pod \"4fb98437-2f55-42d9-af85-9c89223c7279\" (UID: \"4fb98437-2f55-42d9-af85-9c89223c7279\") " Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.669905 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fb98437-2f55-42d9-af85-9c89223c7279-logs\") pod \"4fb98437-2f55-42d9-af85-9c89223c7279\" (UID: \"4fb98437-2f55-42d9-af85-9c89223c7279\") " Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.670347 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4bzc\" (UniqueName: \"kubernetes.io/projected/4fb98437-2f55-42d9-af85-9c89223c7279-kube-api-access-z4bzc\") pod \"4fb98437-2f55-42d9-af85-9c89223c7279\" (UID: \"4fb98437-2f55-42d9-af85-9c89223c7279\") " Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.670512 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fb98437-2f55-42d9-af85-9c89223c7279-logs" (OuterVolumeSpecName: "logs") pod "4fb98437-2f55-42d9-af85-9c89223c7279" (UID: "4fb98437-2f55-42d9-af85-9c89223c7279"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.675406 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-scripts\") pod \"aodh-0\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " pod="openstack/aodh-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.676177 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-public-tls-certs\") pod \"aodh-0\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " pod="openstack/aodh-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.676250 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-config-data\") pod \"aodh-0\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " pod="openstack/aodh-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.676265 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlcrx\" (UniqueName: \"kubernetes.io/projected/2b7115bc-655b-4f41-9983-ecd70758ac95-kube-api-access-qlcrx\") pod \"aodh-0\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " pod="openstack/aodh-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.676506 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " pod="openstack/aodh-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.676538 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fb98437-2f55-42d9-af85-9c89223c7279-kube-api-access-z4bzc" (OuterVolumeSpecName: "kube-api-access-z4bzc") pod "4fb98437-2f55-42d9-af85-9c89223c7279" (UID: "4fb98437-2f55-42d9-af85-9c89223c7279"). InnerVolumeSpecName "kube-api-access-z4bzc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.676559 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-internal-tls-certs\") pod \"aodh-0\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " pod="openstack/aodh-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.676875 4991 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fb98437-2f55-42d9-af85-9c89223c7279-logs\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.676897 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4bzc\" (UniqueName: \"kubernetes.io/projected/4fb98437-2f55-42d9-af85-9c89223c7279-kube-api-access-z4bzc\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.686339 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-scripts\") pod \"aodh-0\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " pod="openstack/aodh-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.687343 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-public-tls-certs\") pod \"aodh-0\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " pod="openstack/aodh-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.687692 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-config-data\") pod \"aodh-0\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " pod="openstack/aodh-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.687870 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-internal-tls-certs\") pod \"aodh-0\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " pod="openstack/aodh-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.687918 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " pod="openstack/aodh-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.699280 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlcrx\" (UniqueName: \"kubernetes.io/projected/2b7115bc-655b-4f41-9983-ecd70758ac95-kube-api-access-qlcrx\") pod \"aodh-0\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " pod="openstack/aodh-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.735694 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fb98437-2f55-42d9-af85-9c89223c7279-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fb98437-2f55-42d9-af85-9c89223c7279" (UID: "4fb98437-2f55-42d9-af85-9c89223c7279"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.743876 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fb98437-2f55-42d9-af85-9c89223c7279-config-data" (OuterVolumeSpecName: "config-data") pod "4fb98437-2f55-42d9-af85-9c89223c7279" (UID: "4fb98437-2f55-42d9-af85-9c89223c7279"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.758753 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.782189 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fb98437-2f55-42d9-af85-9c89223c7279-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.782235 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fb98437-2f55-42d9-af85-9c89223c7279-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.954906 4991 generic.go:334] "Generic (PLEG): container finished" podID="4fb98437-2f55-42d9-af85-9c89223c7279" containerID="e571fb8b4f97c7bd08cedb017dd7541ee74fcfc27bce345b68167be21960d9b4" exitCode=0 Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.955084 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.957532 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c4e277-a117-4217-8f8b-63ef72a8ae42" path="/var/lib/kubelet/pods/22c4e277-a117-4217-8f8b-63ef72a8ae42/volumes" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.958358 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae1f4235-0ad5-4d8c-b84e-ebcbea326284" path="/var/lib/kubelet/pods/ae1f4235-0ad5-4d8c-b84e-ebcbea326284/volumes" Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.959316 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4fb98437-2f55-42d9-af85-9c89223c7279","Type":"ContainerDied","Data":"e571fb8b4f97c7bd08cedb017dd7541ee74fcfc27bce345b68167be21960d9b4"} Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.959345 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4fb98437-2f55-42d9-af85-9c89223c7279","Type":"ContainerDied","Data":"4166415c629b53c08fb47c68f93854bc491f1d6e5df2d6e040777a76dcbf5d36"} Sep 29 10:02:22 crc kubenswrapper[4991]: I0929 10:02:22.959366 4991 scope.go:117] "RemoveContainer" containerID="e571fb8b4f97c7bd08cedb017dd7541ee74fcfc27bce345b68167be21960d9b4" Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.008609 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.044452 4991 scope.go:117] "RemoveContainer" containerID="d5be25f24dca61dbe15333502752e76cb420f6ac8dd46cc761f2afc3b6b70f69" Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.057084 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.080938 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Sep 29 10:02:23 crc 
kubenswrapper[4991]: I0929 10:02:23.108526 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Sep 29 10:02:23 crc kubenswrapper[4991]: E0929 10:02:23.109131 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fb98437-2f55-42d9-af85-9c89223c7279" containerName="nova-api-api" Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.109151 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fb98437-2f55-42d9-af85-9c89223c7279" containerName="nova-api-api" Sep 29 10:02:23 crc kubenswrapper[4991]: E0929 10:02:23.109181 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fb98437-2f55-42d9-af85-9c89223c7279" containerName="nova-api-log" Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.109187 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fb98437-2f55-42d9-af85-9c89223c7279" containerName="nova-api-log" Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.109429 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fb98437-2f55-42d9-af85-9c89223c7279" containerName="nova-api-log" Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.109443 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fb98437-2f55-42d9-af85-9c89223c7279" containerName="nova-api-api" Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.113495 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.116759 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.117644 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.117702 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.135430 4991 scope.go:117] "RemoveContainer" containerID="e571fb8b4f97c7bd08cedb017dd7541ee74fcfc27bce345b68167be21960d9b4" Sep 29 10:02:23 crc kubenswrapper[4991]: E0929 10:02:23.142670 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e571fb8b4f97c7bd08cedb017dd7541ee74fcfc27bce345b68167be21960d9b4\": container with ID starting with e571fb8b4f97c7bd08cedb017dd7541ee74fcfc27bce345b68167be21960d9b4 not found: ID does not exist" containerID="e571fb8b4f97c7bd08cedb017dd7541ee74fcfc27bce345b68167be21960d9b4" Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.142719 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e571fb8b4f97c7bd08cedb017dd7541ee74fcfc27bce345b68167be21960d9b4"} err="failed to get container status \"e571fb8b4f97c7bd08cedb017dd7541ee74fcfc27bce345b68167be21960d9b4\": rpc error: code = NotFound desc = could not find container \"e571fb8b4f97c7bd08cedb017dd7541ee74fcfc27bce345b68167be21960d9b4\": container with ID starting with e571fb8b4f97c7bd08cedb017dd7541ee74fcfc27bce345b68167be21960d9b4 not found: ID does not exist" Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.142751 4991 scope.go:117] "RemoveContainer" containerID="d5be25f24dca61dbe15333502752e76cb420f6ac8dd46cc761f2afc3b6b70f69" Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.142864 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 29 10:02:23 crc 
kubenswrapper[4991]: E0929 10:02:23.143394 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5be25f24dca61dbe15333502752e76cb420f6ac8dd46cc761f2afc3b6b70f69\": container with ID starting with d5be25f24dca61dbe15333502752e76cb420f6ac8dd46cc761f2afc3b6b70f69 not found: ID does not exist" containerID="d5be25f24dca61dbe15333502752e76cb420f6ac8dd46cc761f2afc3b6b70f69"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.143429 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5be25f24dca61dbe15333502752e76cb420f6ac8dd46cc761f2afc3b6b70f69"} err="failed to get container status \"d5be25f24dca61dbe15333502752e76cb420f6ac8dd46cc761f2afc3b6b70f69\": rpc error: code = NotFound desc = could not find container \"d5be25f24dca61dbe15333502752e76cb420f6ac8dd46cc761f2afc3b6b70f69\": container with ID starting with d5be25f24dca61dbe15333502752e76cb420f6ac8dd46cc761f2afc3b6b70f69 not found: ID does not exist"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.174036 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Sep 29 10:02:23 crc kubenswrapper[4991]: W0929 10:02:23.175287 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ad208c7_0fbb_48f9_9275_912bf20bea5a.slice/crio-b2c68d988228afa6a0597c488df7b910d56e1aa45edb95a55185711c3217fb37 WatchSource:0}: Error finding container b2c68d988228afa6a0597c488df7b910d56e1aa45edb95a55185711c3217fb37: Status 404 returned error can't find the container with id b2c68d988228afa6a0597c488df7b910d56e1aa45edb95a55185711c3217fb37
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.303002 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") " pod="openstack/nova-api-0"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.303209 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-logs\") pod \"nova-api-0\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") " pod="openstack/nova-api-0"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.303273 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-public-tls-certs\") pod \"nova-api-0\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") " pod="openstack/nova-api-0"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.303347 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") " pod="openstack/nova-api-0"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.303394 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8k7c\" (UniqueName: \"kubernetes.io/projected/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-kube-api-access-l8k7c\") pod \"nova-api-0\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") " pod="openstack/nova-api-0"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.303449 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-config-data\") pod \"nova-api-0\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") " pod="openstack/nova-api-0"
Sep 29 10:02:23 crc kubenswrapper[4991]: W0929 10:02:23.327969 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b7115bc_655b_4f41_9983_ecd70758ac95.slice/crio-69112efa34406f52a270a0bf866f73dc540717dc6c680ddb41aaef806b427ff8 WatchSource:0}: Error finding container 69112efa34406f52a270a0bf866f73dc540717dc6c680ddb41aaef806b427ff8: Status 404 returned error can't find the container with id 69112efa34406f52a270a0bf866f73dc540717dc6c680ddb41aaef806b427ff8
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.345100 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.355502 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-8hjxg"]
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.356998 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8hjxg"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.359534 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.359697 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.371022 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8hjxg"]
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.405940 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") " pod="openstack/nova-api-0"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.406884 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-logs\") pod \"nova-api-0\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") " pod="openstack/nova-api-0"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.407177 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-logs\") pod \"nova-api-0\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") " pod="openstack/nova-api-0"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.407239 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-public-tls-certs\") pod \"nova-api-0\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") " pod="openstack/nova-api-0"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.407579 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") " pod="openstack/nova-api-0"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.407619 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8k7c\" (UniqueName: \"kubernetes.io/projected/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-kube-api-access-l8k7c\") pod \"nova-api-0\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") " pod="openstack/nova-api-0"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.407669 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-config-data\") pod \"nova-api-0\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") " pod="openstack/nova-api-0"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.411498 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-public-tls-certs\") pod \"nova-api-0\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") " pod="openstack/nova-api-0"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.414025 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") " pod="openstack/nova-api-0"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.414307 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-config-data\") pod \"nova-api-0\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") " pod="openstack/nova-api-0"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.415913 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") " pod="openstack/nova-api-0"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.427505 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8k7c\" (UniqueName: \"kubernetes.io/projected/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-kube-api-access-l8k7c\") pod \"nova-api-0\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") " pod="openstack/nova-api-0"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.506587 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.510442 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgkzf\" (UniqueName: \"kubernetes.io/projected/175b02db-b4a0-495b-8b53-522794feaae1-kube-api-access-hgkzf\") pod \"nova-cell1-cell-mapping-8hjxg\" (UID: \"175b02db-b4a0-495b-8b53-522794feaae1\") " pod="openstack/nova-cell1-cell-mapping-8hjxg"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.510587 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/175b02db-b4a0-495b-8b53-522794feaae1-scripts\") pod \"nova-cell1-cell-mapping-8hjxg\" (UID: \"175b02db-b4a0-495b-8b53-522794feaae1\") " pod="openstack/nova-cell1-cell-mapping-8hjxg"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.510906 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/175b02db-b4a0-495b-8b53-522794feaae1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8hjxg\" (UID: \"175b02db-b4a0-495b-8b53-522794feaae1\") " pod="openstack/nova-cell1-cell-mapping-8hjxg"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.511055 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/175b02db-b4a0-495b-8b53-522794feaae1-config-data\") pod \"nova-cell1-cell-mapping-8hjxg\" (UID: \"175b02db-b4a0-495b-8b53-522794feaae1\") " pod="openstack/nova-cell1-cell-mapping-8hjxg"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.613563 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/175b02db-b4a0-495b-8b53-522794feaae1-scripts\") pod \"nova-cell1-cell-mapping-8hjxg\" (UID: \"175b02db-b4a0-495b-8b53-522794feaae1\") " pod="openstack/nova-cell1-cell-mapping-8hjxg"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.613738 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/175b02db-b4a0-495b-8b53-522794feaae1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8hjxg\" (UID: \"175b02db-b4a0-495b-8b53-522794feaae1\") " pod="openstack/nova-cell1-cell-mapping-8hjxg"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.613759 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/175b02db-b4a0-495b-8b53-522794feaae1-config-data\") pod \"nova-cell1-cell-mapping-8hjxg\" (UID: \"175b02db-b4a0-495b-8b53-522794feaae1\") " pod="openstack/nova-cell1-cell-mapping-8hjxg"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.613830 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgkzf\" (UniqueName: \"kubernetes.io/projected/175b02db-b4a0-495b-8b53-522794feaae1-kube-api-access-hgkzf\") pod \"nova-cell1-cell-mapping-8hjxg\" (UID: \"175b02db-b4a0-495b-8b53-522794feaae1\") " pod="openstack/nova-cell1-cell-mapping-8hjxg"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.617124 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/175b02db-b4a0-495b-8b53-522794feaae1-scripts\") pod \"nova-cell1-cell-mapping-8hjxg\" (UID: \"175b02db-b4a0-495b-8b53-522794feaae1\") " pod="openstack/nova-cell1-cell-mapping-8hjxg"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.617260 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/175b02db-b4a0-495b-8b53-522794feaae1-config-data\") pod \"nova-cell1-cell-mapping-8hjxg\" (UID: \"175b02db-b4a0-495b-8b53-522794feaae1\") " pod="openstack/nova-cell1-cell-mapping-8hjxg"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.618932 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/175b02db-b4a0-495b-8b53-522794feaae1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8hjxg\" (UID: \"175b02db-b4a0-495b-8b53-522794feaae1\") " pod="openstack/nova-cell1-cell-mapping-8hjxg"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.639740 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgkzf\" (UniqueName: \"kubernetes.io/projected/175b02db-b4a0-495b-8b53-522794feaae1-kube-api-access-hgkzf\") pod \"nova-cell1-cell-mapping-8hjxg\" (UID: \"175b02db-b4a0-495b-8b53-522794feaae1\") " pod="openstack/nova-cell1-cell-mapping-8hjxg"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.702631 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8hjxg"
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.987193 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ad208c7-0fbb-48f9-9275-912bf20bea5a","Type":"ContainerStarted","Data":"b2c68d988228afa6a0597c488df7b910d56e1aa45edb95a55185711c3217fb37"}
Sep 29 10:02:23 crc kubenswrapper[4991]: I0929 10:02:23.988787 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2b7115bc-655b-4f41-9983-ecd70758ac95","Type":"ContainerStarted","Data":"69112efa34406f52a270a0bf866f73dc540717dc6c680ddb41aaef806b427ff8"}
Sep 29 10:02:24 crc kubenswrapper[4991]: I0929 10:02:24.090569 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Sep 29 10:02:24 crc kubenswrapper[4991]: W0929 10:02:24.122342 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5008a8b_f9c1_48b6_a61d_a6d5c49dec7a.slice/crio-6041ca9fbf89deb7c63306c7d71c409a1c21ca439b4b8c4febd4205bb7fff19a WatchSource:0}: Error finding container 6041ca9fbf89deb7c63306c7d71c409a1c21ca439b4b8c4febd4205bb7fff19a: Status 404 returned error can't find the container with id 6041ca9fbf89deb7c63306c7d71c409a1c21ca439b4b8c4febd4205bb7fff19a
Sep 29 10:02:24 crc kubenswrapper[4991]: I0929 10:02:24.314811 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8hjxg"]
Sep 29 10:02:24 crc kubenswrapper[4991]: W0929 10:02:24.315588 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod175b02db_b4a0_495b_8b53_522794feaae1.slice/crio-17753e1e60de3d471c9293f182f100083f6b00d7260bb8995d6bf2bb9ad16e4f WatchSource:0}: Error finding container 17753e1e60de3d471c9293f182f100083f6b00d7260bb8995d6bf2bb9ad16e4f: Status 404 returned error can't find the container with id 17753e1e60de3d471c9293f182f100083f6b00d7260bb8995d6bf2bb9ad16e4f
Sep 29 10:02:24 crc kubenswrapper[4991]: I0929 10:02:24.955645 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fb98437-2f55-42d9-af85-9c89223c7279" path="/var/lib/kubelet/pods/4fb98437-2f55-42d9-af85-9c89223c7279/volumes"
Sep 29 10:02:25 crc kubenswrapper[4991]: I0929 10:02:25.013456 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ad208c7-0fbb-48f9-9275-912bf20bea5a","Type":"ContainerStarted","Data":"e8b9e8969b9defc348af1cd5f02ec15f733850832adb4589cbc98273a88d2029"}
Sep 29 10:02:25 crc kubenswrapper[4991]: I0929 10:02:25.020892 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a","Type":"ContainerStarted","Data":"615d3409477727d177adcec5382a2ee1aa4a04e9a2844b9463c8120711bf6f6f"}
Sep 29 10:02:25 crc kubenswrapper[4991]: I0929 10:02:25.020939 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a","Type":"ContainerStarted","Data":"2b6ec2b1cbef4ef774267bb0c90ba9ab6be54d9ec5f43e3741f3c19f38f78664"}
Sep 29 10:02:25 crc kubenswrapper[4991]: I0929 10:02:25.021226 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a","Type":"ContainerStarted","Data":"6041ca9fbf89deb7c63306c7d71c409a1c21ca439b4b8c4febd4205bb7fff19a"}
Sep 29 10:02:25 crc kubenswrapper[4991]: I0929 10:02:25.027398 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2b7115bc-655b-4f41-9983-ecd70758ac95","Type":"ContainerStarted","Data":"78b50850a64d7c46c7f4c3190ca08039fb88c1dcebe1b6dad668b1d8bc658f2e"}
Sep 29 10:02:25 crc kubenswrapper[4991]: I0929 10:02:25.047816 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8hjxg" event={"ID":"175b02db-b4a0-495b-8b53-522794feaae1","Type":"ContainerStarted","Data":"0a4cc460d53436743fe1d8594100dc04886e3fad010c5626c4da7862b4471a9d"}
Sep 29 10:02:25 crc kubenswrapper[4991]: I0929 10:02:25.047975 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8hjxg" event={"ID":"175b02db-b4a0-495b-8b53-522794feaae1","Type":"ContainerStarted","Data":"17753e1e60de3d471c9293f182f100083f6b00d7260bb8995d6bf2bb9ad16e4f"}
Sep 29 10:02:25 crc kubenswrapper[4991]: I0929 10:02:25.151402 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-8hjxg" podStartSLOduration=2.151383825 podStartE2EDuration="2.151383825s" podCreationTimestamp="2025-09-29 10:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:02:25.083718719 +0000 UTC m=+1480.939646767" watchObservedRunningTime="2025-09-29 10:02:25.151383825 +0000 UTC m=+1481.007311843"
Sep 29 10:02:25 crc kubenswrapper[4991]: I0929 10:02:25.164080 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.164057828 podStartE2EDuration="2.164057828s" podCreationTimestamp="2025-09-29 10:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:02:25.098795145 +0000 UTC m=+1480.954723173" watchObservedRunningTime="2025-09-29 10:02:25.164057828 +0000 UTC m=+1481.019985856"
Sep 29 10:02:26 crc kubenswrapper[4991]: I0929 10:02:26.058330 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2b7115bc-655b-4f41-9983-ecd70758ac95","Type":"ContainerStarted","Data":"01fc38757ef3bb394294a27b653922275521572494d731294fc27f990ad37ed8"}
Sep 29 10:02:26 crc kubenswrapper[4991]: I0929 10:02:26.060125 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ad208c7-0fbb-48f9-9275-912bf20bea5a","Type":"ContainerStarted","Data":"07bfba2fc80c5a0c01827472b0c5c31a608b9a3ba243a18ea28dc98e00923424"}
Sep 29 10:02:26 crc kubenswrapper[4991]: I0929 10:02:26.410299 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9"
Sep 29 10:02:26 crc kubenswrapper[4991]: I0929 10:02:26.495474 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-hz6lm"]
Sep 29 10:02:26 crc kubenswrapper[4991]: I0929 10:02:26.495705 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7877d89589-hz6lm" podUID="8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0" containerName="dnsmasq-dns" containerID="cri-o://e331abf6d6040bd342ea69da8808cc724b420128a2423add4567e36145835da1" gracePeriod=10
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.091326 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2b7115bc-655b-4f41-9983-ecd70758ac95","Type":"ContainerStarted","Data":"b2525436c51969e5d473e9c950d438364a2b87155956abcb5e414584f840f940"}
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.097368 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ad208c7-0fbb-48f9-9275-912bf20bea5a","Type":"ContainerStarted","Data":"23288152fad3301460cbc7c60b3e7ce1dafa345da2f8b1e0f4ce52843603e3fe"}
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.155701 4991 generic.go:334] "Generic (PLEG): container finished" podID="8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0" containerID="e331abf6d6040bd342ea69da8808cc724b420128a2423add4567e36145835da1" exitCode=0
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.155758 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-hz6lm" event={"ID":"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0","Type":"ContainerDied","Data":"e331abf6d6040bd342ea69da8808cc724b420128a2423add4567e36145835da1"}
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.289181 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-hz6lm"
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.434712 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-ovsdbserver-nb\") pod \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") "
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.434981 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-ovsdbserver-sb\") pod \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") "
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.435786 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-dns-swift-storage-0\") pod \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") "
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.435927 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mrh2\" (UniqueName: \"kubernetes.io/projected/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-kube-api-access-8mrh2\") pod \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") "
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.436170 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-dns-svc\") pod \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") "
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.436197 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-config\") pod \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\" (UID: \"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0\") "
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.452173 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-kube-api-access-8mrh2" (OuterVolumeSpecName: "kube-api-access-8mrh2") pod "8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0" (UID: "8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0"). InnerVolumeSpecName "kube-api-access-8mrh2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.541872 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mrh2\" (UniqueName: \"kubernetes.io/projected/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-kube-api-access-8mrh2\") on node \"crc\" DevicePath \"\""
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.632437 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0" (UID: "8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.643974 4991 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-dns-svc\") on node \"crc\" DevicePath \"\""
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.670482 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0" (UID: "8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.690152 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-config" (OuterVolumeSpecName: "config") pod "8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0" (UID: "8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.695065 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0" (UID: "8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.704609 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0" (UID: "8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.746174 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-config\") on node \"crc\" DevicePath \"\""
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.746216 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.746235 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Sep 29 10:02:27 crc kubenswrapper[4991]: I0929 10:02:27.746250 4991 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Sep 29 10:02:28 crc kubenswrapper[4991]: I0929 10:02:28.213442 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-hz6lm" event={"ID":"8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0","Type":"ContainerDied","Data":"c9c80948b918aed7a515ae57b9ef5fbe7e8f4f04e9debba67828554afac3fcc4"}
Sep 29 10:02:28 crc kubenswrapper[4991]: I0929 10:02:28.213706 4991 scope.go:117] "RemoveContainer" containerID="e331abf6d6040bd342ea69da8808cc724b420128a2423add4567e36145835da1"
Sep 29 10:02:28 crc kubenswrapper[4991]: I0929 10:02:28.213897 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-hz6lm"
Sep 29 10:02:28 crc kubenswrapper[4991]: I0929 10:02:28.228410 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2b7115bc-655b-4f41-9983-ecd70758ac95","Type":"ContainerStarted","Data":"28f1ab41b73e3e1810d8b527c0a72b4a0f23ac7c964aa4c3af09cff590d03d7c"}
Sep 29 10:02:28 crc kubenswrapper[4991]: I0929 10:02:28.241100 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ad208c7-0fbb-48f9-9275-912bf20bea5a","Type":"ContainerStarted","Data":"8b0a93b9b7a6c532a40d8d71aa280ee74f1edbc9065e159ace705196232c8f24"}
Sep 29 10:02:28 crc kubenswrapper[4991]: I0929 10:02:28.242145 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Sep 29 10:02:28 crc kubenswrapper[4991]: I0929 10:02:28.265532 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.835704255 podStartE2EDuration="6.265512868s" podCreationTimestamp="2025-09-29 10:02:22 +0000 UTC" firstStartedPulling="2025-09-29 10:02:23.336221011 +0000 UTC m=+1479.192149039" lastFinishedPulling="2025-09-29 10:02:26.766029624 +0000 UTC m=+1482.621957652" observedRunningTime="2025-09-29 10:02:28.262354376 +0000 UTC m=+1484.118282404" watchObservedRunningTime="2025-09-29 10:02:28.265512868 +0000 UTC m=+1484.121440896"
Sep 29 10:02:28 crc kubenswrapper[4991]: I0929 10:02:28.314868 4991 scope.go:117] "RemoveContainer" containerID="8bfd82ba0a59c654ada289b94c740fe9e31c0f28fb3c6aac1407dc74ec5554a2"
Sep 29 10:02:28 crc kubenswrapper[4991]: I0929 10:02:28.320663 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.197216916 podStartE2EDuration="6.320642817s" podCreationTimestamp="2025-09-29 10:02:22 +0000 UTC" firstStartedPulling="2025-09-29 10:02:23.179972097 +0000 UTC m=+1479.035900125" lastFinishedPulling="2025-09-29 10:02:27.303397998 +0000 UTC m=+1483.159326026" observedRunningTime="2025-09-29 10:02:28.314454514 +0000 UTC m=+1484.170382542" watchObservedRunningTime="2025-09-29 10:02:28.320642817 +0000 UTC m=+1484.176570845"
Sep 29 10:02:28 crc kubenswrapper[4991]: I0929 10:02:28.382903 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-hz6lm"]
Sep 29 10:02:28 crc kubenswrapper[4991]: I0929 10:02:28.393609 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-hz6lm"]
Sep 29 10:02:28 crc kubenswrapper[4991]: I0929 10:02:28.945266 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0" path="/var/lib/kubelet/pods/8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0/volumes"
Sep 29 10:02:31 crc kubenswrapper[4991]: I0929 10:02:31.298672 4991 generic.go:334] "Generic (PLEG): container finished" podID="175b02db-b4a0-495b-8b53-522794feaae1" containerID="0a4cc460d53436743fe1d8594100dc04886e3fad010c5626c4da7862b4471a9d" exitCode=0
Sep 29 10:02:31 crc kubenswrapper[4991]: I0929 10:02:31.298739 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8hjxg" event={"ID":"175b02db-b4a0-495b-8b53-522794feaae1","Type":"ContainerDied","Data":"0a4cc460d53436743fe1d8594100dc04886e3fad010c5626c4da7862b4471a9d"}
Sep 29 10:02:32 crc kubenswrapper[4991]: I0929 10:02:32.847799 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8hjxg"
Sep 29 10:02:32 crc kubenswrapper[4991]: I0929 10:02:32.974819 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/175b02db-b4a0-495b-8b53-522794feaae1-combined-ca-bundle\") pod \"175b02db-b4a0-495b-8b53-522794feaae1\" (UID: \"175b02db-b4a0-495b-8b53-522794feaae1\") "
Sep 29 10:02:32 crc kubenswrapper[4991]: I0929 10:02:32.975169 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgkzf\" (UniqueName: \"kubernetes.io/projected/175b02db-b4a0-495b-8b53-522794feaae1-kube-api-access-hgkzf\") pod \"175b02db-b4a0-495b-8b53-522794feaae1\" (UID: \"175b02db-b4a0-495b-8b53-522794feaae1\") "
Sep 29 10:02:32 crc kubenswrapper[4991]: I0929 10:02:32.975393 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/175b02db-b4a0-495b-8b53-522794feaae1-config-data\") pod \"175b02db-b4a0-495b-8b53-522794feaae1\" (UID: \"175b02db-b4a0-495b-8b53-522794feaae1\") "
Sep 29 10:02:32 crc kubenswrapper[4991]: I0929 10:02:32.975628 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/175b02db-b4a0-495b-8b53-522794feaae1-scripts\") pod \"175b02db-b4a0-495b-8b53-522794feaae1\" (UID: \"175b02db-b4a0-495b-8b53-522794feaae1\") "
Sep 29 10:02:32 crc kubenswrapper[4991]: I0929 10:02:32.981432 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/175b02db-b4a0-495b-8b53-522794feaae1-scripts" (OuterVolumeSpecName: "scripts") pod "175b02db-b4a0-495b-8b53-522794feaae1" (UID: "175b02db-b4a0-495b-8b53-522794feaae1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:02:32 crc kubenswrapper[4991]: I0929 10:02:32.982032 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/175b02db-b4a0-495b-8b53-522794feaae1-kube-api-access-hgkzf" (OuterVolumeSpecName: "kube-api-access-hgkzf") pod "175b02db-b4a0-495b-8b53-522794feaae1" (UID: "175b02db-b4a0-495b-8b53-522794feaae1"). InnerVolumeSpecName "kube-api-access-hgkzf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 10:02:33 crc kubenswrapper[4991]: I0929 10:02:33.008440 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/175b02db-b4a0-495b-8b53-522794feaae1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "175b02db-b4a0-495b-8b53-522794feaae1" (UID: "175b02db-b4a0-495b-8b53-522794feaae1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:02:33 crc kubenswrapper[4991]: I0929 10:02:33.008833 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/175b02db-b4a0-495b-8b53-522794feaae1-config-data" (OuterVolumeSpecName: "config-data") pod "175b02db-b4a0-495b-8b53-522794feaae1" (UID: "175b02db-b4a0-495b-8b53-522794feaae1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:02:33 crc kubenswrapper[4991]: I0929 10:02:33.079312 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/175b02db-b4a0-495b-8b53-522794feaae1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 29 10:02:33 crc kubenswrapper[4991]: I0929 10:02:33.079358 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgkzf\" (UniqueName: \"kubernetes.io/projected/175b02db-b4a0-495b-8b53-522794feaae1-kube-api-access-hgkzf\") on node \"crc\" DevicePath \"\""
Sep 29 10:02:33 crc kubenswrapper[4991]: I0929 10:02:33.079377 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/175b02db-b4a0-495b-8b53-522794feaae1-config-data\") on node \"crc\" DevicePath \"\""
Sep 29 10:02:33 crc kubenswrapper[4991]: I0929 10:02:33.079388 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/175b02db-b4a0-495b-8b53-522794feaae1-scripts\") on node \"crc\" DevicePath \"\""
Sep 29 10:02:33 crc kubenswrapper[4991]: I0929 10:02:33.322773 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8hjxg" event={"ID":"175b02db-b4a0-495b-8b53-522794feaae1","Type":"ContainerDied","Data":"17753e1e60de3d471c9293f182f100083f6b00d7260bb8995d6bf2bb9ad16e4f"}
Sep 29 10:02:33 crc kubenswrapper[4991]: I0929 10:02:33.323098 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17753e1e60de3d471c9293f182f100083f6b00d7260bb8995d6bf2bb9ad16e4f"
Sep 29 10:02:33 crc kubenswrapper[4991]: I0929 10:02:33.323169 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8hjxg"
Sep 29 10:02:33 crc kubenswrapper[4991]: I0929 10:02:33.491281 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Sep 29 10:02:33 crc kubenswrapper[4991]: I0929 10:02:33.491601 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a" containerName="nova-api-log" containerID="cri-o://2b6ec2b1cbef4ef774267bb0c90ba9ab6be54d9ec5f43e3741f3c19f38f78664" gracePeriod=30
Sep 29 10:02:33 crc kubenswrapper[4991]: I0929 10:02:33.491689 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a" containerName="nova-api-api" containerID="cri-o://615d3409477727d177adcec5382a2ee1aa4a04e9a2844b9463c8120711bf6f6f" gracePeriod=30
Sep 29 10:02:33 crc kubenswrapper[4991]: I0929 10:02:33.523071 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Sep 29 10:02:33 crc kubenswrapper[4991]: I0929 10:02:33.523437 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1dfc8022-4980-4498-b5dd-26c32bf05ea1" containerName="nova-scheduler-scheduler" containerID="cri-o://e377af5808a972d33a04e88d5df48da2dad15a0c1ecfb0c65923be885b50ce66" gracePeriod=30
Sep 29 10:02:33 crc kubenswrapper[4991]: I0929 10:02:33.578088 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Sep 29 10:02:33 crc kubenswrapper[4991]: I0929 10:02:33.578414 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ddae0d54-721f-4bb5-9229-bb912c75d735" containerName="nova-metadata-metadata" containerID="cri-o://a3ecec40ec0684b7d18b0d6e1e1a6f271abe9a95e9ef150a4ff89260e74689ff" gracePeriod=30
Sep 29 10:02:33 crc kubenswrapper[4991]: I0929 10:02:33.578779 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ddae0d54-721f-4bb5-9229-bb912c75d735" containerName="nova-metadata-log" containerID="cri-o://c04646eb6e874e93faae8ff311310f0ec845f589e77c3fd590631264070c2c75" gracePeriod=30
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.174055 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.202854 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-logs\") pod \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") "
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.202994 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-combined-ca-bundle\") pod \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") "
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.203029 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8k7c\" (UniqueName: \"kubernetes.io/projected/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-kube-api-access-l8k7c\") pod \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") "
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.203052 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-internal-tls-certs\") pod \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") "
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.203220 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-public-tls-certs\") pod \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") "
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.203313 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-config-data\") pod \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\" (UID: \"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a\") "
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.204971 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-logs" (OuterVolumeSpecName: "logs") pod "b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a" (UID: "b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.217534 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-kube-api-access-l8k7c" (OuterVolumeSpecName: "kube-api-access-l8k7c") pod "b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a" (UID: "b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a"). InnerVolumeSpecName "kube-api-access-l8k7c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.251109 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-config-data" (OuterVolumeSpecName: "config-data") pod "b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a" (UID: "b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.256999 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a" (UID: "b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.284107 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a" (UID: "b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:02:34 crc kubenswrapper[4991]: E0929 10:02:34.303221 4991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e377af5808a972d33a04e88d5df48da2dad15a0c1ecfb0c65923be885b50ce66" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.307111 4991 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-public-tls-certs\") on node \"crc\" DevicePath \"\""
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.307154 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-config-data\") on node \"crc\" DevicePath \"\""
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.307165 4991 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-logs\") on node \"crc\" DevicePath \"\""
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.307174 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.307183 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8k7c\" (UniqueName: \"kubernetes.io/projected/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-kube-api-access-l8k7c\") on node \"crc\" DevicePath \"\""
Sep 29 10:02:34 crc kubenswrapper[4991]: E0929 10:02:34.307720 4991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e377af5808a972d33a04e88d5df48da2dad15a0c1ecfb0c65923be885b50ce66" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Sep 29 10:02:34 crc kubenswrapper[4991]: E0929 10:02:34.310217 4991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e377af5808a972d33a04e88d5df48da2dad15a0c1ecfb0c65923be885b50ce66" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Sep 29 10:02:34 crc kubenswrapper[4991]: E0929 10:02:34.310263 4991 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1dfc8022-4980-4498-b5dd-26c32bf05ea1" containerName="nova-scheduler-scheduler"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.310351 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a" (UID: "b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.337518 4991 generic.go:334] "Generic (PLEG): container finished" podID="ddae0d54-721f-4bb5-9229-bb912c75d735" containerID="c04646eb6e874e93faae8ff311310f0ec845f589e77c3fd590631264070c2c75" exitCode=143
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.337572 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ddae0d54-721f-4bb5-9229-bb912c75d735","Type":"ContainerDied","Data":"c04646eb6e874e93faae8ff311310f0ec845f589e77c3fd590631264070c2c75"}
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.341322 4991 generic.go:334] "Generic (PLEG): container finished" podID="b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a" containerID="615d3409477727d177adcec5382a2ee1aa4a04e9a2844b9463c8120711bf6f6f" exitCode=0
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.341357 4991 generic.go:334] "Generic (PLEG): container finished" podID="b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a" containerID="2b6ec2b1cbef4ef774267bb0c90ba9ab6be54d9ec5f43e3741f3c19f38f78664" exitCode=143
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.341380 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a","Type":"ContainerDied","Data":"615d3409477727d177adcec5382a2ee1aa4a04e9a2844b9463c8120711bf6f6f"}
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.341408 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a","Type":"ContainerDied","Data":"2b6ec2b1cbef4ef774267bb0c90ba9ab6be54d9ec5f43e3741f3c19f38f78664"}
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.341422 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a","Type":"ContainerDied","Data":"6041ca9fbf89deb7c63306c7d71c409a1c21ca439b4b8c4febd4205bb7fff19a"}
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.341439 4991 scope.go:117] "RemoveContainer" containerID="615d3409477727d177adcec5382a2ee1aa4a04e9a2844b9463c8120711bf6f6f"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.341492 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.367044 4991 scope.go:117] "RemoveContainer" containerID="2b6ec2b1cbef4ef774267bb0c90ba9ab6be54d9ec5f43e3741f3c19f38f78664"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.395714 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.409007 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.409871 4991 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.411488 4991 scope.go:117] "RemoveContainer" containerID="615d3409477727d177adcec5382a2ee1aa4a04e9a2844b9463c8120711bf6f6f"
Sep 29 10:02:34 crc kubenswrapper[4991]: E0929 10:02:34.412187 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"615d3409477727d177adcec5382a2ee1aa4a04e9a2844b9463c8120711bf6f6f\": container with ID starting with 615d3409477727d177adcec5382a2ee1aa4a04e9a2844b9463c8120711bf6f6f not found: ID does not exist" containerID="615d3409477727d177adcec5382a2ee1aa4a04e9a2844b9463c8120711bf6f6f"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.412305 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"615d3409477727d177adcec5382a2ee1aa4a04e9a2844b9463c8120711bf6f6f"} err="failed to get container status \"615d3409477727d177adcec5382a2ee1aa4a04e9a2844b9463c8120711bf6f6f\": rpc error: code = NotFound desc = could not find container \"615d3409477727d177adcec5382a2ee1aa4a04e9a2844b9463c8120711bf6f6f\": container with ID starting with 615d3409477727d177adcec5382a2ee1aa4a04e9a2844b9463c8120711bf6f6f not found: ID does not exist"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.412409 4991 scope.go:117] "RemoveContainer" containerID="2b6ec2b1cbef4ef774267bb0c90ba9ab6be54d9ec5f43e3741f3c19f38f78664"
Sep 29 10:02:34 crc kubenswrapper[4991]: E0929 10:02:34.412813 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b6ec2b1cbef4ef774267bb0c90ba9ab6be54d9ec5f43e3741f3c19f38f78664\": container with ID starting with 2b6ec2b1cbef4ef774267bb0c90ba9ab6be54d9ec5f43e3741f3c19f38f78664 not found: ID does not exist" containerID="2b6ec2b1cbef4ef774267bb0c90ba9ab6be54d9ec5f43e3741f3c19f38f78664"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.412846 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b6ec2b1cbef4ef774267bb0c90ba9ab6be54d9ec5f43e3741f3c19f38f78664"} err="failed to get container status \"2b6ec2b1cbef4ef774267bb0c90ba9ab6be54d9ec5f43e3741f3c19f38f78664\": rpc error: code = NotFound desc = could not find container \"2b6ec2b1cbef4ef774267bb0c90ba9ab6be54d9ec5f43e3741f3c19f38f78664\": container with ID starting with 2b6ec2b1cbef4ef774267bb0c90ba9ab6be54d9ec5f43e3741f3c19f38f78664 not found: ID does not exist"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.412868 4991 scope.go:117] "RemoveContainer" containerID="615d3409477727d177adcec5382a2ee1aa4a04e9a2844b9463c8120711bf6f6f"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.413133 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"615d3409477727d177adcec5382a2ee1aa4a04e9a2844b9463c8120711bf6f6f"} err="failed to get container status \"615d3409477727d177adcec5382a2ee1aa4a04e9a2844b9463c8120711bf6f6f\": rpc error: code = NotFound desc = could not find container \"615d3409477727d177adcec5382a2ee1aa4a04e9a2844b9463c8120711bf6f6f\": container with ID starting with 615d3409477727d177adcec5382a2ee1aa4a04e9a2844b9463c8120711bf6f6f not found: ID does not exist"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.413149 4991 scope.go:117] "RemoveContainer" containerID="2b6ec2b1cbef4ef774267bb0c90ba9ab6be54d9ec5f43e3741f3c19f38f78664"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.413368 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b6ec2b1cbef4ef774267bb0c90ba9ab6be54d9ec5f43e3741f3c19f38f78664"} err="failed to get container status \"2b6ec2b1cbef4ef774267bb0c90ba9ab6be54d9ec5f43e3741f3c19f38f78664\": rpc error: code = NotFound desc = could not find container \"2b6ec2b1cbef4ef774267bb0c90ba9ab6be54d9ec5f43e3741f3c19f38f78664\": container with ID starting with 2b6ec2b1cbef4ef774267bb0c90ba9ab6be54d9ec5f43e3741f3c19f38f78664 not found: ID does not exist"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.425191 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Sep 29 10:02:34 crc kubenswrapper[4991]: E0929 10:02:34.425755 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a" containerName="nova-api-log"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.425780 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a" containerName="nova-api-log"
Sep 29 10:02:34 crc kubenswrapper[4991]: E0929 10:02:34.425798 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0" containerName="dnsmasq-dns"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.425806 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0" containerName="dnsmasq-dns"
Sep 29 10:02:34 crc kubenswrapper[4991]: E0929 10:02:34.425853 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175b02db-b4a0-495b-8b53-522794feaae1" containerName="nova-manage"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.425863 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="175b02db-b4a0-495b-8b53-522794feaae1" containerName="nova-manage"
Sep 29 10:02:34 crc kubenswrapper[4991]: E0929 10:02:34.425880 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0" containerName="init"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.425889 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0" containerName="init"
Sep 29 10:02:34 crc kubenswrapper[4991]: E0929 10:02:34.425905 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a" containerName="nova-api-api"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.425912 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a" containerName="nova-api-api"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.426179 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="175b02db-b4a0-495b-8b53-522794feaae1" containerName="nova-manage"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.426210 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b6e8d9f-4b67-44a5-8bc9-5a549a9f9ed0" containerName="dnsmasq-dns"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.426241 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a" containerName="nova-api-api"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.426258 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a" containerName="nova-api-log"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.427712 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.431336 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.431518 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.431636 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.441047 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.512108 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/103ec465-3307-403d-ad9a-1a69ebc64398-public-tls-certs\") pod \"nova-api-0\" (UID: \"103ec465-3307-403d-ad9a-1a69ebc64398\") " pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.512494 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/103ec465-3307-403d-ad9a-1a69ebc64398-config-data\") pod \"nova-api-0\" (UID: \"103ec465-3307-403d-ad9a-1a69ebc64398\") " pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.512612 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t45mv\" (UniqueName: \"kubernetes.io/projected/103ec465-3307-403d-ad9a-1a69ebc64398-kube-api-access-t45mv\") pod \"nova-api-0\" (UID: \"103ec465-3307-403d-ad9a-1a69ebc64398\") " pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.512758 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/103ec465-3307-403d-ad9a-1a69ebc64398-logs\") pod \"nova-api-0\" (UID: \"103ec465-3307-403d-ad9a-1a69ebc64398\") " pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.512846 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/103ec465-3307-403d-ad9a-1a69ebc64398-internal-tls-certs\") pod \"nova-api-0\" (UID: \"103ec465-3307-403d-ad9a-1a69ebc64398\") " pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.512975 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103ec465-3307-403d-ad9a-1a69ebc64398-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"103ec465-3307-403d-ad9a-1a69ebc64398\") " pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.614528 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/103ec465-3307-403d-ad9a-1a69ebc64398-public-tls-certs\") pod \"nova-api-0\" (UID: \"103ec465-3307-403d-ad9a-1a69ebc64398\") " pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.614593 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/103ec465-3307-403d-ad9a-1a69ebc64398-config-data\") pod \"nova-api-0\" (UID: \"103ec465-3307-403d-ad9a-1a69ebc64398\") " pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.614635 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t45mv\" (UniqueName: \"kubernetes.io/projected/103ec465-3307-403d-ad9a-1a69ebc64398-kube-api-access-t45mv\") pod \"nova-api-0\" (UID: \"103ec465-3307-403d-ad9a-1a69ebc64398\") " pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.614713 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/103ec465-3307-403d-ad9a-1a69ebc64398-logs\") pod \"nova-api-0\" (UID: \"103ec465-3307-403d-ad9a-1a69ebc64398\") " pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.614733 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/103ec465-3307-403d-ad9a-1a69ebc64398-internal-tls-certs\") pod \"nova-api-0\" (UID: \"103ec465-3307-403d-ad9a-1a69ebc64398\") " pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.614796 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103ec465-3307-403d-ad9a-1a69ebc64398-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"103ec465-3307-403d-ad9a-1a69ebc64398\") " pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.616148 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/103ec465-3307-403d-ad9a-1a69ebc64398-logs\") pod \"nova-api-0\" (UID: \"103ec465-3307-403d-ad9a-1a69ebc64398\") " pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.619658 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103ec465-3307-403d-ad9a-1a69ebc64398-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"103ec465-3307-403d-ad9a-1a69ebc64398\") " pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.619688 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/103ec465-3307-403d-ad9a-1a69ebc64398-config-data\") pod \"nova-api-0\" (UID: \"103ec465-3307-403d-ad9a-1a69ebc64398\") " pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.620050 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/103ec465-3307-403d-ad9a-1a69ebc64398-internal-tls-certs\") pod \"nova-api-0\" (UID: \"103ec465-3307-403d-ad9a-1a69ebc64398\") " pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.620810 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/103ec465-3307-403d-ad9a-1a69ebc64398-public-tls-certs\") pod \"nova-api-0\" (UID: \"103ec465-3307-403d-ad9a-1a69ebc64398\") " pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.636023 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t45mv\" (UniqueName: \"kubernetes.io/projected/103ec465-3307-403d-ad9a-1a69ebc64398-kube-api-access-t45mv\") pod \"nova-api-0\" (UID: \"103ec465-3307-403d-ad9a-1a69ebc64398\") " pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.754770 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Sep 29 10:02:34 crc kubenswrapper[4991]: I0929 10:02:34.946690 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a" path="/var/lib/kubelet/pods/b5008a8b-f9c1-48b6-a61d-a6d5c49dec7a/volumes"
Sep 29 10:02:35 crc kubenswrapper[4991]: I0929 10:02:35.279543 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Sep 29 10:02:35 crc kubenswrapper[4991]: W0929 10:02:35.284457 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod103ec465_3307_403d_ad9a_1a69ebc64398.slice/crio-979d3fafbe7d7b71217114dadf44fee8556c10359e0a335a5db804c0d993eaa9 WatchSource:0}: Error finding container 979d3fafbe7d7b71217114dadf44fee8556c10359e0a335a5db804c0d993eaa9: Status 404 returned error can't find the container with id 979d3fafbe7d7b71217114dadf44fee8556c10359e0a335a5db804c0d993eaa9
Sep 29 10:02:35 crc kubenswrapper[4991]: I0929 10:02:35.352163 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"103ec465-3307-403d-ad9a-1a69ebc64398","Type":"ContainerStarted","Data":"979d3fafbe7d7b71217114dadf44fee8556c10359e0a335a5db804c0d993eaa9"}
Sep 29 10:02:36 crc kubenswrapper[4991]: I0929 10:02:36.366560 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"103ec465-3307-403d-ad9a-1a69ebc64398","Type":"ContainerStarted","Data":"8e18eb21392de562085b5e94fe0bf14b28ace3f1c1f7f94b2323be429763ead8"}
Sep 29 10:02:36 crc kubenswrapper[4991]: I0929 10:02:36.367134 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"103ec465-3307-403d-ad9a-1a69ebc64398","Type":"ContainerStarted","Data":"e88430e2911a252097a18f92707db563f5bf55d5410eb1d0d76a43ad8a814c3b"}
Sep 29 10:02:36 crc kubenswrapper[4991]: I0929 10:02:36.726407 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ddae0d54-721f-4bb5-9229-bb912c75d735" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.248:8775/\": dial tcp 10.217.0.248:8775: connect: connection refused"
Sep 29 10:02:36 crc kubenswrapper[4991]: I0929 10:02:36.726577 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ddae0d54-721f-4bb5-9229-bb912c75d735" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.248:8775/\": dial tcp 10.217.0.248:8775: connect: connection refused"
Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.378515 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.378578 4991 generic.go:334] "Generic (PLEG): container finished" podID="ddae0d54-721f-4bb5-9229-bb912c75d735" containerID="a3ecec40ec0684b7d18b0d6e1e1a6f271abe9a95e9ef150a4ff89260e74689ff" exitCode=0
Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.378604 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ddae0d54-721f-4bb5-9229-bb912c75d735","Type":"ContainerDied","Data":"a3ecec40ec0684b7d18b0d6e1e1a6f271abe9a95e9ef150a4ff89260e74689ff"}
Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.379027 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ddae0d54-721f-4bb5-9229-bb912c75d735","Type":"ContainerDied","Data":"f41e5a73834ac7701790c0a880207a52e33c80f37837bf604f7171b0ee2cd0e9"}
Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.379065 4991 scope.go:117] "RemoveContainer" containerID="a3ecec40ec0684b7d18b0d6e1e1a6f271abe9a95e9ef150a4ff89260e74689ff"
Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.404570 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.404545156 podStartE2EDuration="3.404545156s" podCreationTimestamp="2025-09-29 10:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:02:36.386405473 +0000 UTC m=+1492.242333511" watchObservedRunningTime="2025-09-29 10:02:37.404545156 +0000 UTC m=+1493.260473184"
Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.405632 4991 scope.go:117] "RemoveContainer" containerID="c04646eb6e874e93faae8ff311310f0ec845f589e77c3fd590631264070c2c75"
Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.457975 4991 scope.go:117] "RemoveContainer" containerID="a3ecec40ec0684b7d18b0d6e1e1a6f271abe9a95e9ef150a4ff89260e74689ff"
Sep 29 10:02:37 crc kubenswrapper[4991]: E0929 10:02:37.458417 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3ecec40ec0684b7d18b0d6e1e1a6f271abe9a95e9ef150a4ff89260e74689ff\": container with ID starting with a3ecec40ec0684b7d18b0d6e1e1a6f271abe9a95e9ef150a4ff89260e74689ff not found: ID does not exist" containerID="a3ecec40ec0684b7d18b0d6e1e1a6f271abe9a95e9ef150a4ff89260e74689ff"
Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.458457 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3ecec40ec0684b7d18b0d6e1e1a6f271abe9a95e9ef150a4ff89260e74689ff"} err="failed to get container status \"a3ecec40ec0684b7d18b0d6e1e1a6f271abe9a95e9ef150a4ff89260e74689ff\": rpc error: code = NotFound desc = could not find container \"a3ecec40ec0684b7d18b0d6e1e1a6f271abe9a95e9ef150a4ff89260e74689ff\": container with ID starting with a3ecec40ec0684b7d18b0d6e1e1a6f271abe9a95e9ef150a4ff89260e74689ff not found: ID does not exist"
Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.458487 4991 scope.go:117] "RemoveContainer" containerID="c04646eb6e874e93faae8ff311310f0ec845f589e77c3fd590631264070c2c75"
Sep 29 10:02:37 crc kubenswrapper[4991]: E0929 10:02:37.458905 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c04646eb6e874e93faae8ff311310f0ec845f589e77c3fd590631264070c2c75\": container with ID starting with 
c04646eb6e874e93faae8ff311310f0ec845f589e77c3fd590631264070c2c75 not found: ID does not exist" containerID="c04646eb6e874e93faae8ff311310f0ec845f589e77c3fd590631264070c2c75" Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.458934 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c04646eb6e874e93faae8ff311310f0ec845f589e77c3fd590631264070c2c75"} err="failed to get container status \"c04646eb6e874e93faae8ff311310f0ec845f589e77c3fd590631264070c2c75\": rpc error: code = NotFound desc = could not find container \"c04646eb6e874e93faae8ff311310f0ec845f589e77c3fd590631264070c2c75\": container with ID starting with c04646eb6e874e93faae8ff311310f0ec845f589e77c3fd590631264070c2c75 not found: ID does not exist" Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.475798 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9f4h\" (UniqueName: \"kubernetes.io/projected/ddae0d54-721f-4bb5-9229-bb912c75d735-kube-api-access-h9f4h\") pod \"ddae0d54-721f-4bb5-9229-bb912c75d735\" (UID: \"ddae0d54-721f-4bb5-9229-bb912c75d735\") " Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.475881 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddae0d54-721f-4bb5-9229-bb912c75d735-combined-ca-bundle\") pod \"ddae0d54-721f-4bb5-9229-bb912c75d735\" (UID: \"ddae0d54-721f-4bb5-9229-bb912c75d735\") " Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.475938 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddae0d54-721f-4bb5-9229-bb912c75d735-config-data\") pod \"ddae0d54-721f-4bb5-9229-bb912c75d735\" (UID: \"ddae0d54-721f-4bb5-9229-bb912c75d735\") " Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.476359 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ddae0d54-721f-4bb5-9229-bb912c75d735-logs\") pod \"ddae0d54-721f-4bb5-9229-bb912c75d735\" (UID: \"ddae0d54-721f-4bb5-9229-bb912c75d735\") " Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.476428 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddae0d54-721f-4bb5-9229-bb912c75d735-nova-metadata-tls-certs\") pod \"ddae0d54-721f-4bb5-9229-bb912c75d735\" (UID: \"ddae0d54-721f-4bb5-9229-bb912c75d735\") " Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.477373 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddae0d54-721f-4bb5-9229-bb912c75d735-logs" (OuterVolumeSpecName: "logs") pod "ddae0d54-721f-4bb5-9229-bb912c75d735" (UID: "ddae0d54-721f-4bb5-9229-bb912c75d735"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.493325 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddae0d54-721f-4bb5-9229-bb912c75d735-kube-api-access-h9f4h" (OuterVolumeSpecName: "kube-api-access-h9f4h") pod "ddae0d54-721f-4bb5-9229-bb912c75d735" (UID: "ddae0d54-721f-4bb5-9229-bb912c75d735"). InnerVolumeSpecName "kube-api-access-h9f4h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.529056 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddae0d54-721f-4bb5-9229-bb912c75d735-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ddae0d54-721f-4bb5-9229-bb912c75d735" (UID: "ddae0d54-721f-4bb5-9229-bb912c75d735"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.529662 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddae0d54-721f-4bb5-9229-bb912c75d735-config-data" (OuterVolumeSpecName: "config-data") pod "ddae0d54-721f-4bb5-9229-bb912c75d735" (UID: "ddae0d54-721f-4bb5-9229-bb912c75d735"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.566458 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddae0d54-721f-4bb5-9229-bb912c75d735-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ddae0d54-721f-4bb5-9229-bb912c75d735" (UID: "ddae0d54-721f-4bb5-9229-bb912c75d735"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.579434 4991 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ddae0d54-721f-4bb5-9229-bb912c75d735-logs\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.579475 4991 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddae0d54-721f-4bb5-9229-bb912c75d735-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.579487 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9f4h\" (UniqueName: \"kubernetes.io/projected/ddae0d54-721f-4bb5-9229-bb912c75d735-kube-api-access-h9f4h\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.579496 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddae0d54-721f-4bb5-9229-bb912c75d735-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:37 crc kubenswrapper[4991]: I0929 10:02:37.579506 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddae0d54-721f-4bb5-9229-bb912c75d735-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.392139 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.393661 4991 generic.go:334] "Generic (PLEG): container finished" podID="1dfc8022-4980-4498-b5dd-26c32bf05ea1" containerID="e377af5808a972d33a04e88d5df48da2dad15a0c1ecfb0c65923be885b50ce66" exitCode=0 Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.393707 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1dfc8022-4980-4498-b5dd-26c32bf05ea1","Type":"ContainerDied","Data":"e377af5808a972d33a04e88d5df48da2dad15a0c1ecfb0c65923be885b50ce66"} Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.536963 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.562358 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.576033 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Sep 29 10:02:38 crc kubenswrapper[4991]: E0929 10:02:38.576571 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddae0d54-721f-4bb5-9229-bb912c75d735" containerName="nova-metadata-metadata" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.576583 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddae0d54-721f-4bb5-9229-bb912c75d735" containerName="nova-metadata-metadata" Sep 29 10:02:38 crc kubenswrapper[4991]: E0929 10:02:38.576604 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddae0d54-721f-4bb5-9229-bb912c75d735" containerName="nova-metadata-log" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.576610 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddae0d54-721f-4bb5-9229-bb912c75d735" containerName="nova-metadata-log" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.576804 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddae0d54-721f-4bb5-9229-bb912c75d735" containerName="nova-metadata-log" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.576839 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddae0d54-721f-4bb5-9229-bb912c75d735" containerName="nova-metadata-metadata" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.578163 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.580413 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.581676 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.588297 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.610581 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690\") " pod="openstack/nova-metadata-0" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.610888 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690-logs\") pod \"nova-metadata-0\" (UID: \"d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690\") " pod="openstack/nova-metadata-0" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.610924 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690-config-data\") pod \"nova-metadata-0\" (UID: \"d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690\") " pod="openstack/nova-metadata-0" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.611033 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv849\" (UniqueName: \"kubernetes.io/projected/d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690-kube-api-access-tv849\") pod \"nova-metadata-0\" (UID: \"d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690\") " pod="openstack/nova-metadata-0" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.611068 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690\") " pod="openstack/nova-metadata-0" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.713722 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690\") " pod="openstack/nova-metadata-0" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.713969 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690-logs\") pod \"nova-metadata-0\" (UID: \"d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690\") " pod="openstack/nova-metadata-0" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.714020 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690-config-data\") pod \"nova-metadata-0\" (UID: \"d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690\") " pod="openstack/nova-metadata-0" Sep 29 
10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.714154 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv849\" (UniqueName: \"kubernetes.io/projected/d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690-kube-api-access-tv849\") pod \"nova-metadata-0\" (UID: \"d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690\") " pod="openstack/nova-metadata-0" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.714212 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690\") " pod="openstack/nova-metadata-0" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.714764 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690-logs\") pod \"nova-metadata-0\" (UID: \"d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690\") " pod="openstack/nova-metadata-0" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.725851 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690\") " pod="openstack/nova-metadata-0" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.725893 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690\") " pod="openstack/nova-metadata-0" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.731682 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690-config-data\") pod \"nova-metadata-0\" (UID: \"d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690\") " pod="openstack/nova-metadata-0" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.733374 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv849\" (UniqueName: \"kubernetes.io/projected/d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690-kube-api-access-tv849\") pod \"nova-metadata-0\" (UID: \"d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690\") " pod="openstack/nova-metadata-0" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.900360 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.901845 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Sep 29 10:02:38 crc kubenswrapper[4991]: I0929 10:02:38.945064 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddae0d54-721f-4bb5-9229-bb912c75d735" path="/var/lib/kubelet/pods/ddae0d54-721f-4bb5-9229-bb912c75d735/volumes" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.020239 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg5gg\" (UniqueName: \"kubernetes.io/projected/1dfc8022-4980-4498-b5dd-26c32bf05ea1-kube-api-access-gg5gg\") pod \"1dfc8022-4980-4498-b5dd-26c32bf05ea1\" (UID: \"1dfc8022-4980-4498-b5dd-26c32bf05ea1\") " Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.020599 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfc8022-4980-4498-b5dd-26c32bf05ea1-combined-ca-bundle\") pod \"1dfc8022-4980-4498-b5dd-26c32bf05ea1\" (UID: \"1dfc8022-4980-4498-b5dd-26c32bf05ea1\") " Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.020689 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dfc8022-4980-4498-b5dd-26c32bf05ea1-config-data\") pod \"1dfc8022-4980-4498-b5dd-26c32bf05ea1\" (UID: \"1dfc8022-4980-4498-b5dd-26c32bf05ea1\") " Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.033265 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dfc8022-4980-4498-b5dd-26c32bf05ea1-kube-api-access-gg5gg" (OuterVolumeSpecName: "kube-api-access-gg5gg") pod "1dfc8022-4980-4498-b5dd-26c32bf05ea1" (UID: "1dfc8022-4980-4498-b5dd-26c32bf05ea1"). InnerVolumeSpecName "kube-api-access-gg5gg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.086445 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dfc8022-4980-4498-b5dd-26c32bf05ea1-config-data" (OuterVolumeSpecName: "config-data") pod "1dfc8022-4980-4498-b5dd-26c32bf05ea1" (UID: "1dfc8022-4980-4498-b5dd-26c32bf05ea1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.086916 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dfc8022-4980-4498-b5dd-26c32bf05ea1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1dfc8022-4980-4498-b5dd-26c32bf05ea1" (UID: "1dfc8022-4980-4498-b5dd-26c32bf05ea1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.125247 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg5gg\" (UniqueName: \"kubernetes.io/projected/1dfc8022-4980-4498-b5dd-26c32bf05ea1-kube-api-access-gg5gg\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.125277 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dfc8022-4980-4498-b5dd-26c32bf05ea1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.125286 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dfc8022-4980-4498-b5dd-26c32bf05ea1-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.413505 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1dfc8022-4980-4498-b5dd-26c32bf05ea1","Type":"ContainerDied","Data":"c16490b4910476680a2d563424849ff30a0defb6fb29c3bcd67e95f5e10474ad"} Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.413753 4991 scope.go:117] "RemoveContainer" containerID="e377af5808a972d33a04e88d5df48da2dad15a0c1ecfb0c65923be885b50ce66" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.413608 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.415692 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.458509 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.473197 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.493130 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Sep 29 10:02:39 crc kubenswrapper[4991]: E0929 10:02:39.493926 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dfc8022-4980-4498-b5dd-26c32bf05ea1" containerName="nova-scheduler-scheduler" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.493978 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dfc8022-4980-4498-b5dd-26c32bf05ea1" containerName="nova-scheduler-scheduler" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.494453 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dfc8022-4980-4498-b5dd-26c32bf05ea1" containerName="nova-scheduler-scheduler" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.495847 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.500041 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.521228 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.533016 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwfks\" (UniqueName: \"kubernetes.io/projected/593ca329-b7a2-400c-8910-7f37bd321c6c-kube-api-access-wwfks\") pod \"nova-scheduler-0\" (UID: \"593ca329-b7a2-400c-8910-7f37bd321c6c\") " pod="openstack/nova-scheduler-0" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.533116 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593ca329-b7a2-400c-8910-7f37bd321c6c-config-data\") pod \"nova-scheduler-0\" (UID: \"593ca329-b7a2-400c-8910-7f37bd321c6c\") " pod="openstack/nova-scheduler-0" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.533890 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593ca329-b7a2-400c-8910-7f37bd321c6c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"593ca329-b7a2-400c-8910-7f37bd321c6c\") " pod="openstack/nova-scheduler-0" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.636502 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwfks\" (UniqueName: \"kubernetes.io/projected/593ca329-b7a2-400c-8910-7f37bd321c6c-kube-api-access-wwfks\") pod \"nova-scheduler-0\" (UID: \"593ca329-b7a2-400c-8910-7f37bd321c6c\") " pod="openstack/nova-scheduler-0" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.636630 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593ca329-b7a2-400c-8910-7f37bd321c6c-config-data\") pod \"nova-scheduler-0\" (UID: \"593ca329-b7a2-400c-8910-7f37bd321c6c\") " pod="openstack/nova-scheduler-0" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.636662 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593ca329-b7a2-400c-8910-7f37bd321c6c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"593ca329-b7a2-400c-8910-7f37bd321c6c\") " pod="openstack/nova-scheduler-0" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.643470 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593ca329-b7a2-400c-8910-7f37bd321c6c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"593ca329-b7a2-400c-8910-7f37bd321c6c\") " pod="openstack/nova-scheduler-0" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.643476 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593ca329-b7a2-400c-8910-7f37bd321c6c-config-data\") pod \"nova-scheduler-0\" (UID: \"593ca329-b7a2-400c-8910-7f37bd321c6c\") " pod="openstack/nova-scheduler-0" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.654407 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwfks\" (UniqueName: 
\"kubernetes.io/projected/593ca329-b7a2-400c-8910-7f37bd321c6c-kube-api-access-wwfks\") pod \"nova-scheduler-0\" (UID: \"593ca329-b7a2-400c-8910-7f37bd321c6c\") " pod="openstack/nova-scheduler-0" Sep 29 10:02:39 crc kubenswrapper[4991]: I0929 10:02:39.821224 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 29 10:02:40 crc kubenswrapper[4991]: W0929 10:02:40.307237 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod593ca329_b7a2_400c_8910_7f37bd321c6c.slice/crio-e8f5e0992fbd48b6b970ffa43d6be5cc24712c61e4c8fc7c25a2bfc131610b96 WatchSource:0}: Error finding container e8f5e0992fbd48b6b970ffa43d6be5cc24712c61e4c8fc7c25a2bfc131610b96: Status 404 returned error can't find the container with id e8f5e0992fbd48b6b970ffa43d6be5cc24712c61e4c8fc7c25a2bfc131610b96 Sep 29 10:02:40 crc kubenswrapper[4991]: I0929 10:02:40.310571 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 29 10:02:40 crc kubenswrapper[4991]: I0929 10:02:40.433020 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"593ca329-b7a2-400c-8910-7f37bd321c6c","Type":"ContainerStarted","Data":"e8f5e0992fbd48b6b970ffa43d6be5cc24712c61e4c8fc7c25a2bfc131610b96"} Sep 29 10:02:40 crc kubenswrapper[4991]: I0929 10:02:40.435285 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690","Type":"ContainerStarted","Data":"15b590dcdeb422135ce5dee58ca6d5472a68560255d1d906a85470501d5ded5d"} Sep 29 10:02:40 crc kubenswrapper[4991]: I0929 10:02:40.435327 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690","Type":"ContainerStarted","Data":"01250e262d8a92b894db9bc6f9b4a6be5c589163f8ca287232f6d82c037765bd"} Sep 29 10:02:40 crc kubenswrapper[4991]: I0929 10:02:40.435341 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690","Type":"ContainerStarted","Data":"1b99e60c88800594c7c9f0798d6a3cc3f4901949a022e8046ca2bbd05ac5ab4e"} Sep 29 10:02:40 crc kubenswrapper[4991]: I0929 10:02:40.471111 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.471075668 podStartE2EDuration="2.471075668s" podCreationTimestamp="2025-09-29 10:02:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:02:40.456089114 +0000 UTC m=+1496.312017142" watchObservedRunningTime="2025-09-29 10:02:40.471075668 +0000 UTC m=+1496.327003696" Sep 29 10:02:40 crc kubenswrapper[4991]: I0929 10:02:40.938733 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dfc8022-4980-4498-b5dd-26c32bf05ea1" path="/var/lib/kubelet/pods/1dfc8022-4980-4498-b5dd-26c32bf05ea1/volumes" Sep 29 10:02:41 crc kubenswrapper[4991]: I0929 10:02:41.446928 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"593ca329-b7a2-400c-8910-7f37bd321c6c","Type":"ContainerStarted","Data":"401fab5d45d57b8d82ddbc1329e4071f3e4760684fdad82ec09fa0ba9b9be908"} Sep 29 10:02:41 crc kubenswrapper[4991]: I0929 10:02:41.475771 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" 
podStartSLOduration=2.475750476 podStartE2EDuration="2.475750476s" podCreationTimestamp="2025-09-29 10:02:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:02:41.466227946 +0000 UTC m=+1497.322155974" watchObservedRunningTime="2025-09-29 10:02:41.475750476 +0000 UTC m=+1497.331678494" Sep 29 10:02:43 crc kubenswrapper[4991]: I0929 10:02:43.902547 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 29 10:02:43 crc kubenswrapper[4991]: I0929 10:02:43.903103 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 29 10:02:44 crc kubenswrapper[4991]: I0929 10:02:44.755519 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 29 10:02:44 crc kubenswrapper[4991]: I0929 10:02:44.755595 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 29 10:02:44 crc kubenswrapper[4991]: I0929 10:02:44.821921 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Sep 29 10:02:45 crc kubenswrapper[4991]: I0929 10:02:45.772259 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="103ec465-3307-403d-ad9a-1a69ebc64398" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 29 10:02:45 crc kubenswrapper[4991]: I0929 10:02:45.772291 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="103ec465-3307-403d-ad9a-1a69ebc64398" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 29 10:02:48 crc kubenswrapper[4991]: I0929 10:02:48.902553 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Sep 29 10:02:48 crc kubenswrapper[4991]: I0929 10:02:48.903137 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Sep 29 10:02:49 crc kubenswrapper[4991]: I0929 10:02:49.822576 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Sep 29 10:02:49 crc kubenswrapper[4991]: I0929 10:02:49.857195 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Sep 29 10:02:49 crc kubenswrapper[4991]: I0929 10:02:49.915155 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.4:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 29 10:02:49 crc kubenswrapper[4991]: I0929 10:02:49.915149 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.4:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 29 10:02:50 crc kubenswrapper[4991]: I0929 10:02:50.598321 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Sep 29 10:02:52 crc 
kubenswrapper[4991]: I0929 10:02:52.599677 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Sep 29 10:02:54 crc kubenswrapper[4991]: I0929 10:02:54.761171 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Sep 29 10:02:54 crc kubenswrapper[4991]: I0929 10:02:54.762371 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Sep 29 10:02:54 crc kubenswrapper[4991]: I0929 10:02:54.765581 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Sep 29 10:02:54 crc kubenswrapper[4991]: I0929 10:02:54.773495 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Sep 29 10:02:55 crc kubenswrapper[4991]: I0929 10:02:55.630273 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Sep 29 10:02:55 crc kubenswrapper[4991]: I0929 10:02:55.640656 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Sep 29 10:02:56 crc kubenswrapper[4991]: I0929 10:02:56.942587 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Sep 29 10:02:56 crc kubenswrapper[4991]: I0929 10:02:56.942809 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="3c45d26d-026d-4f19-88bb-6d1769f32363" containerName="kube-state-metrics" containerID="cri-o://87394e143dd7a24f7dc7cd6be5e439d335ea31070a16464add70e47e5effa13a" gracePeriod=30 Sep 29 10:02:57 crc kubenswrapper[4991]: I0929 10:02:57.028313 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Sep 29 10:02:57 crc kubenswrapper[4991]: I0929 10:02:57.028849 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="02856571-708d-4e22-a2cb-9b411d5a12c0" containerName="mysqld-exporter" containerID="cri-o://e9e141d9831e2558c4b1d523da2617ef2995263cb01cc6c466cbada9feb0100d" gracePeriod=30 Sep 29 10:02:57 crc kubenswrapper[4991]: I0929 10:02:57.668261 4991 generic.go:334] "Generic (PLEG): container finished" podID="02856571-708d-4e22-a2cb-9b411d5a12c0" containerID="e9e141d9831e2558c4b1d523da2617ef2995263cb01cc6c466cbada9feb0100d" exitCode=2 Sep 29 10:02:57 crc kubenswrapper[4991]: I0929 10:02:57.668393 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"02856571-708d-4e22-a2cb-9b411d5a12c0","Type":"ContainerDied","Data":"e9e141d9831e2558c4b1d523da2617ef2995263cb01cc6c466cbada9feb0100d"} Sep 29 10:02:57 crc kubenswrapper[4991]: I0929 10:02:57.671104 4991 generic.go:334] "Generic (PLEG): container finished" podID="3c45d26d-026d-4f19-88bb-6d1769f32363" containerID="87394e143dd7a24f7dc7cd6be5e439d335ea31070a16464add70e47e5effa13a" exitCode=2 Sep 29 10:02:57 crc kubenswrapper[4991]: I0929 10:02:57.671211 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3c45d26d-026d-4f19-88bb-6d1769f32363","Type":"ContainerDied","Data":"87394e143dd7a24f7dc7cd6be5e439d335ea31070a16464add70e47e5effa13a"} Sep 29 10:02:57 crc kubenswrapper[4991]: I0929 10:02:57.938005 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Sep 29 10:02:57 crc kubenswrapper[4991]: I0929 10:02:57.948215 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.124397 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02856571-708d-4e22-a2cb-9b411d5a12c0-config-data\") pod \"02856571-708d-4e22-a2cb-9b411d5a12c0\" (UID: \"02856571-708d-4e22-a2cb-9b411d5a12c0\") " Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.124460 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6hd9\" (UniqueName: \"kubernetes.io/projected/3c45d26d-026d-4f19-88bb-6d1769f32363-kube-api-access-w6hd9\") pod \"3c45d26d-026d-4f19-88bb-6d1769f32363\" (UID: \"3c45d26d-026d-4f19-88bb-6d1769f32363\") " Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.124625 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmssl\" (UniqueName: \"kubernetes.io/projected/02856571-708d-4e22-a2cb-9b411d5a12c0-kube-api-access-hmssl\") pod \"02856571-708d-4e22-a2cb-9b411d5a12c0\" (UID: \"02856571-708d-4e22-a2cb-9b411d5a12c0\") " Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.124794 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02856571-708d-4e22-a2cb-9b411d5a12c0-combined-ca-bundle\") pod \"02856571-708d-4e22-a2cb-9b411d5a12c0\" (UID: \"02856571-708d-4e22-a2cb-9b411d5a12c0\") " Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.132542 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02856571-708d-4e22-a2cb-9b411d5a12c0-kube-api-access-hmssl" (OuterVolumeSpecName: "kube-api-access-hmssl") pod "02856571-708d-4e22-a2cb-9b411d5a12c0" (UID: "02856571-708d-4e22-a2cb-9b411d5a12c0"). InnerVolumeSpecName "kube-api-access-hmssl". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.233942 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmssl\" (UniqueName: \"kubernetes.io/projected/02856571-708d-4e22-a2cb-9b411d5a12c0-kube-api-access-hmssl\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.270932 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c45d26d-026d-4f19-88bb-6d1769f32363-kube-api-access-w6hd9" (OuterVolumeSpecName: "kube-api-access-w6hd9") pod "3c45d26d-026d-4f19-88bb-6d1769f32363" (UID: "3c45d26d-026d-4f19-88bb-6d1769f32363"). InnerVolumeSpecName "kube-api-access-w6hd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.281059 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02856571-708d-4e22-a2cb-9b411d5a12c0-config-data" (OuterVolumeSpecName: "config-data") pod "02856571-708d-4e22-a2cb-9b411d5a12c0" (UID: "02856571-708d-4e22-a2cb-9b411d5a12c0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.281305 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02856571-708d-4e22-a2cb-9b411d5a12c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02856571-708d-4e22-a2cb-9b411d5a12c0" (UID: "02856571-708d-4e22-a2cb-9b411d5a12c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.336970 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02856571-708d-4e22-a2cb-9b411d5a12c0-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.337327 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6hd9\" (UniqueName: \"kubernetes.io/projected/3c45d26d-026d-4f19-88bb-6d1769f32363-kube-api-access-w6hd9\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.337346 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02856571-708d-4e22-a2cb-9b411d5a12c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.683557 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"02856571-708d-4e22-a2cb-9b411d5a12c0","Type":"ContainerDied","Data":"6b1fe8c0ced435082ca5a1b2b877c9966acd61675c14c250566daee0b8b8e8c5"} Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.683620 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.683652 4991 scope.go:117] "RemoveContainer" containerID="e9e141d9831e2558c4b1d523da2617ef2995263cb01cc6c466cbada9feb0100d" Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.685421 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3c45d26d-026d-4f19-88bb-6d1769f32363","Type":"ContainerDied","Data":"c2d21888ac18b9d8af9741c0dcdce2658d3f79ccf1ac0568a51eebed3a361bcb"} Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.685605 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.733790 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.750488 4991 scope.go:117] "RemoveContainer" containerID="87394e143dd7a24f7dc7cd6be5e439d335ea31070a16464add70e47e5effa13a" Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.767898 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.782477 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.796347 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.808434 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Sep 29 10:02:58 crc kubenswrapper[4991]: E0929 10:02:58.809022 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02856571-708d-4e22-a2cb-9b411d5a12c0" containerName="mysqld-exporter" Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.809044 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="02856571-708d-4e22-a2cb-9b411d5a12c0" containerName="mysqld-exporter" Sep 29 10:02:58 crc kubenswrapper[4991]: E0929 10:02:58.809095 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c45d26d-026d-4f19-88bb-6d1769f32363" containerName="kube-state-metrics" Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.809103 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c45d26d-026d-4f19-88bb-6d1769f32363" containerName="kube-state-metrics" Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.809378 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="02856571-708d-4e22-a2cb-9b411d5a12c0" containerName="mysqld-exporter" Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.809398 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c45d26d-026d-4f19-88bb-6d1769f32363" containerName="kube-state-metrics" Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.810187 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.812640 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.816287 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.818589 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.820115 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.822732 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.822781 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.833310 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.848305 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.864853 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/13623a3a-f409-4086-acc7-af839f51a45b-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"13623a3a-f409-4086-acc7-af839f51a45b\") " pod="openstack/mysqld-exporter-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.865037 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zzcx\" (UniqueName: \"kubernetes.io/projected/8c49db59-0819-4c3d-922d-04d10e579618-kube-api-access-4zzcx\") pod \"kube-state-metrics-0\" (UID: \"8c49db59-0819-4c3d-922d-04d10e579618\") " pod="openstack/kube-state-metrics-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.865147 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk9wx\" (UniqueName: \"kubernetes.io/projected/13623a3a-f409-4086-acc7-af839f51a45b-kube-api-access-pk9wx\") pod \"mysqld-exporter-0\" (UID: \"13623a3a-f409-4086-acc7-af839f51a45b\") " pod="openstack/mysqld-exporter-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.865198 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13623a3a-f409-4086-acc7-af839f51a45b-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"13623a3a-f409-4086-acc7-af839f51a45b\") " pod="openstack/mysqld-exporter-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.865243 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8c49db59-0819-4c3d-922d-04d10e579618-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8c49db59-0819-4c3d-922d-04d10e579618\") " pod="openstack/kube-state-metrics-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.865310 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c49db59-0819-4c3d-922d-04d10e579618-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8c49db59-0819-4c3d-922d-04d10e579618\") " pod="openstack/kube-state-metrics-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.865357 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13623a3a-f409-4086-acc7-af839f51a45b-config-data\") pod \"mysqld-exporter-0\" (UID: \"13623a3a-f409-4086-acc7-af839f51a45b\") " pod="openstack/mysqld-exporter-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.865414 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c49db59-0819-4c3d-922d-04d10e579618-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8c49db59-0819-4c3d-922d-04d10e579618\") " pod="openstack/kube-state-metrics-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.909560 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.910867 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.914245 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.942482 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02856571-708d-4e22-a2cb-9b411d5a12c0" path="/var/lib/kubelet/pods/02856571-708d-4e22-a2cb-9b411d5a12c0/volumes"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.942996 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c45d26d-026d-4f19-88bb-6d1769f32363" path="/var/lib/kubelet/pods/3c45d26d-026d-4f19-88bb-6d1769f32363/volumes"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.969515 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c49db59-0819-4c3d-922d-04d10e579618-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8c49db59-0819-4c3d-922d-04d10e579618\") " pod="openstack/kube-state-metrics-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.969577 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13623a3a-f409-4086-acc7-af839f51a45b-config-data\") pod \"mysqld-exporter-0\" (UID: \"13623a3a-f409-4086-acc7-af839f51a45b\") " pod="openstack/mysqld-exporter-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.969661 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c49db59-0819-4c3d-922d-04d10e579618-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8c49db59-0819-4c3d-922d-04d10e579618\") " pod="openstack/kube-state-metrics-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.969761 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/13623a3a-f409-4086-acc7-af839f51a45b-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"13623a3a-f409-4086-acc7-af839f51a45b\") " pod="openstack/mysqld-exporter-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.969837 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zzcx\" (UniqueName: \"kubernetes.io/projected/8c49db59-0819-4c3d-922d-04d10e579618-kube-api-access-4zzcx\") pod \"kube-state-metrics-0\" (UID: \"8c49db59-0819-4c3d-922d-04d10e579618\") " pod="openstack/kube-state-metrics-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.970077 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk9wx\" (UniqueName: \"kubernetes.io/projected/13623a3a-f409-4086-acc7-af839f51a45b-kube-api-access-pk9wx\") pod \"mysqld-exporter-0\" (UID: \"13623a3a-f409-4086-acc7-af839f51a45b\") " pod="openstack/mysqld-exporter-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.970120 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13623a3a-f409-4086-acc7-af839f51a45b-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"13623a3a-f409-4086-acc7-af839f51a45b\") " pod="openstack/mysqld-exporter-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.970186 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8c49db59-0819-4c3d-922d-04d10e579618-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8c49db59-0819-4c3d-922d-04d10e579618\") " pod="openstack/kube-state-metrics-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.985333 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13623a3a-f409-4086-acc7-af839f51a45b-config-data\") pod \"mysqld-exporter-0\" (UID: \"13623a3a-f409-4086-acc7-af839f51a45b\") " pod="openstack/mysqld-exporter-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.985688 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8c49db59-0819-4c3d-922d-04d10e579618-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8c49db59-0819-4c3d-922d-04d10e579618\") " pod="openstack/kube-state-metrics-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.985704 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/13623a3a-f409-4086-acc7-af839f51a45b-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"13623a3a-f409-4086-acc7-af839f51a45b\") " pod="openstack/mysqld-exporter-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.986011 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c49db59-0819-4c3d-922d-04d10e579618-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8c49db59-0819-4c3d-922d-04d10e579618\") " pod="openstack/kube-state-metrics-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.986486 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c49db59-0819-4c3d-922d-04d10e579618-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8c49db59-0819-4c3d-922d-04d10e579618\") " pod="openstack/kube-state-metrics-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.991725 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13623a3a-f409-4086-acc7-af839f51a45b-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"13623a3a-f409-4086-acc7-af839f51a45b\") " pod="openstack/mysqld-exporter-0"
Sep 29 10:02:58 crc kubenswrapper[4991]: I0929 10:02:58.994068 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zzcx\" (UniqueName: \"kubernetes.io/projected/8c49db59-0819-4c3d-922d-04d10e579618-kube-api-access-4zzcx\") pod \"kube-state-metrics-0\" (UID: \"8c49db59-0819-4c3d-922d-04d10e579618\") " pod="openstack/kube-state-metrics-0"
Sep 29 10:02:59 crc kubenswrapper[4991]: I0929 10:02:59.002163 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk9wx\" (UniqueName: \"kubernetes.io/projected/13623a3a-f409-4086-acc7-af839f51a45b-kube-api-access-pk9wx\") pod \"mysqld-exporter-0\" (UID: \"13623a3a-f409-4086-acc7-af839f51a45b\") " pod="openstack/mysqld-exporter-0"
Sep 29 10:02:59 crc kubenswrapper[4991]: I0929 10:02:59.133515 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Sep 29 10:02:59 crc kubenswrapper[4991]: I0929 10:02:59.145880 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Sep 29 10:02:59 crc kubenswrapper[4991]: I0929 10:02:59.707723 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Sep 29 10:02:59 crc kubenswrapper[4991]: I0929 10:02:59.736701 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Sep 29 10:02:59 crc kubenswrapper[4991]: I0929 10:02:59.737071 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ad208c7-0fbb-48f9-9275-912bf20bea5a" containerName="ceilometer-central-agent" containerID="cri-o://e8b9e8969b9defc348af1cd5f02ec15f733850832adb4589cbc98273a88d2029" gracePeriod=30
Sep 29 10:02:59 crc kubenswrapper[4991]: I0929 10:02:59.737587 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ad208c7-0fbb-48f9-9275-912bf20bea5a" containerName="proxy-httpd" containerID="cri-o://8b0a93b9b7a6c532a40d8d71aa280ee74f1edbc9065e159ace705196232c8f24" gracePeriod=30
Sep 29 10:02:59 crc kubenswrapper[4991]: I0929 10:02:59.737640 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ad208c7-0fbb-48f9-9275-912bf20bea5a" containerName="sg-core" containerID="cri-o://23288152fad3301460cbc7c60b3e7ce1dafa345da2f8b1e0f4ce52843603e3fe" gracePeriod=30
Sep 29 10:02:59 crc kubenswrapper[4991]: I0929 10:02:59.737675 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ad208c7-0fbb-48f9-9275-912bf20bea5a" containerName="ceilometer-notification-agent" containerID="cri-o://07bfba2fc80c5a0c01827472b0c5c31a608b9a3ba243a18ea28dc98e00923424" gracePeriod=30
Sep 29 10:02:59 crc kubenswrapper[4991]: I0929 10:02:59.806453 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Sep 29 10:02:59 crc kubenswrapper[4991]: I0929 10:02:59.871936 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Sep 29 10:03:00 crc kubenswrapper[4991]: I0929 10:03:00.715380 4991 generic.go:334] "Generic (PLEG): container finished" podID="5ad208c7-0fbb-48f9-9275-912bf20bea5a" containerID="8b0a93b9b7a6c532a40d8d71aa280ee74f1edbc9065e159ace705196232c8f24" exitCode=0
Sep 29 10:03:00 crc kubenswrapper[4991]: I0929 10:03:00.715701 4991 generic.go:334] "Generic (PLEG): container finished" podID="5ad208c7-0fbb-48f9-9275-912bf20bea5a" containerID="23288152fad3301460cbc7c60b3e7ce1dafa345da2f8b1e0f4ce52843603e3fe" exitCode=2
Sep 29 10:03:00 crc kubenswrapper[4991]: I0929 10:03:00.715728 4991 generic.go:334] "Generic (PLEG): container finished" podID="5ad208c7-0fbb-48f9-9275-912bf20bea5a" containerID="e8b9e8969b9defc348af1cd5f02ec15f733850832adb4589cbc98273a88d2029" exitCode=0
Sep 29 10:03:00 crc kubenswrapper[4991]: I0929 10:03:00.715775 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ad208c7-0fbb-48f9-9275-912bf20bea5a","Type":"ContainerDied","Data":"8b0a93b9b7a6c532a40d8d71aa280ee74f1edbc9065e159ace705196232c8f24"} Sep 29 10:03:00 crc kubenswrapper[4991]: I0929 10:03:00.715807 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ad208c7-0fbb-48f9-9275-912bf20bea5a","Type":"ContainerDied","Data":"23288152fad3301460cbc7c60b3e7ce1dafa345da2f8b1e0f4ce52843603e3fe"} Sep 29 10:03:00 crc kubenswrapper[4991]: I0929 10:03:00.715821 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ad208c7-0fbb-48f9-9275-912bf20bea5a","Type":"ContainerDied","Data":"e8b9e8969b9defc348af1cd5f02ec15f733850832adb4589cbc98273a88d2029"} Sep 29 10:03:00 crc kubenswrapper[4991]: I0929 10:03:00.719085 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"13623a3a-f409-4086-acc7-af839f51a45b","Type":"ContainerStarted","Data":"d817d83b09c6aed2a58c7912d4c41207f356472bbe6463cb925dc880c561bad3"} Sep 29 10:03:00 crc kubenswrapper[4991]: I0929 10:03:00.721441 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8c49db59-0819-4c3d-922d-04d10e579618","Type":"ContainerStarted","Data":"561fadd0ac467796605d1643652facd22d112c838e5930a04ad5d3448a4b7511"} Sep 29 10:03:02 crc kubenswrapper[4991]: I0929 10:03:02.747533 4991 generic.go:334] "Generic (PLEG): container finished" podID="5ad208c7-0fbb-48f9-9275-912bf20bea5a" containerID="07bfba2fc80c5a0c01827472b0c5c31a608b9a3ba243a18ea28dc98e00923424" exitCode=0 Sep 29 10:03:02 crc kubenswrapper[4991]: I0929 10:03:02.747866 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ad208c7-0fbb-48f9-9275-912bf20bea5a","Type":"ContainerDied","Data":"07bfba2fc80c5a0c01827472b0c5c31a608b9a3ba243a18ea28dc98e00923424"} Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.730228 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.775726 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ad208c7-0fbb-48f9-9275-912bf20bea5a","Type":"ContainerDied","Data":"b2c68d988228afa6a0597c488df7b910d56e1aa45edb95a55185711c3217fb37"} Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.775776 4991 scope.go:117] "RemoveContainer" containerID="8b0a93b9b7a6c532a40d8d71aa280ee74f1edbc9065e159ace705196232c8f24" Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.776007 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.806374 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ad208c7-0fbb-48f9-9275-912bf20bea5a-log-httpd\") pod \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.806556 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-combined-ca-bundle\") pod \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.806628 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ad208c7-0fbb-48f9-9275-912bf20bea5a-run-httpd\") pod \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.806675 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-sg-core-conf-yaml\") pod \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.806716 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-scripts\") pod \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.806762 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmxkn\" (UniqueName: \"kubernetes.io/projected/5ad208c7-0fbb-48f9-9275-912bf20bea5a-kube-api-access-jmxkn\") pod \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.806937 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-config-data\") pod \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\" (UID: \"5ad208c7-0fbb-48f9-9275-912bf20bea5a\") " Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.807547 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ad208c7-0fbb-48f9-9275-912bf20bea5a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5ad208c7-0fbb-48f9-9275-912bf20bea5a" (UID: "5ad208c7-0fbb-48f9-9275-912bf20bea5a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.808401 4991 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ad208c7-0fbb-48f9-9275-912bf20bea5a-log-httpd\") on node \"crc\" DevicePath \"\"" Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.808971 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ad208c7-0fbb-48f9-9275-912bf20bea5a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5ad208c7-0fbb-48f9-9275-912bf20bea5a" (UID: "5ad208c7-0fbb-48f9-9275-912bf20bea5a"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.813623 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-scripts" (OuterVolumeSpecName: "scripts") pod "5ad208c7-0fbb-48f9-9275-912bf20bea5a" (UID: "5ad208c7-0fbb-48f9-9275-912bf20bea5a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.834143 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad208c7-0fbb-48f9-9275-912bf20bea5a-kube-api-access-jmxkn" (OuterVolumeSpecName: "kube-api-access-jmxkn") pod "5ad208c7-0fbb-48f9-9275-912bf20bea5a" (UID: "5ad208c7-0fbb-48f9-9275-912bf20bea5a"). InnerVolumeSpecName "kube-api-access-jmxkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.861221 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5ad208c7-0fbb-48f9-9275-912bf20bea5a" (UID: "5ad208c7-0fbb-48f9-9275-912bf20bea5a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.899202 4991 scope.go:117] "RemoveContainer" containerID="23288152fad3301460cbc7c60b3e7ce1dafa345da2f8b1e0f4ce52843603e3fe" Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.915470 4991 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ad208c7-0fbb-48f9-9275-912bf20bea5a-run-httpd\") on node \"crc\" DevicePath \"\"" Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.915506 4991 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.915519 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.915530 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmxkn\" (UniqueName: \"kubernetes.io/projected/5ad208c7-0fbb-48f9-9275-912bf20bea5a-kube-api-access-jmxkn\") on node \"crc\" DevicePath \"\"" Sep 29 10:03:03 crc kubenswrapper[4991]: I0929 10:03:03.933175 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ad208c7-0fbb-48f9-9275-912bf20bea5a" (UID: "5ad208c7-0fbb-48f9-9275-912bf20bea5a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.005665 4991 scope.go:117] "RemoveContainer" containerID="07bfba2fc80c5a0c01827472b0c5c31a608b9a3ba243a18ea28dc98e00923424" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.022798 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.042096 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-config-data" (OuterVolumeSpecName: "config-data") pod "5ad208c7-0fbb-48f9-9275-912bf20bea5a" (UID: "5ad208c7-0fbb-48f9-9275-912bf20bea5a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.099576 4991 scope.go:117] "RemoveContainer" containerID="e8b9e8969b9defc348af1cd5f02ec15f733850832adb4589cbc98273a88d2029" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.130027 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ad208c7-0fbb-48f9-9275-912bf20bea5a-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.147344 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.166545 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.196135 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:03:04 crc kubenswrapper[4991]: E0929 10:03:04.196698 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad208c7-0fbb-48f9-9275-912bf20bea5a" containerName="ceilometer-notification-agent" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.196718 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad208c7-0fbb-48f9-9275-912bf20bea5a" containerName="ceilometer-notification-agent" Sep 29 10:03:04 crc kubenswrapper[4991]: E0929 10:03:04.196740 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad208c7-0fbb-48f9-9275-912bf20bea5a" containerName="sg-core" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.196747 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad208c7-0fbb-48f9-9275-912bf20bea5a" containerName="sg-core" Sep 29 10:03:04 crc kubenswrapper[4991]: E0929 10:03:04.196780 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad208c7-0fbb-48f9-9275-912bf20bea5a" containerName="proxy-httpd" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.196787 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad208c7-0fbb-48f9-9275-912bf20bea5a" containerName="proxy-httpd" Sep 29 10:03:04 crc kubenswrapper[4991]: E0929 10:03:04.196810 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad208c7-0fbb-48f9-9275-912bf20bea5a" containerName="ceilometer-central-agent" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.196818 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad208c7-0fbb-48f9-9275-912bf20bea5a" containerName="ceilometer-central-agent" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.197091 4991 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5ad208c7-0fbb-48f9-9275-912bf20bea5a" containerName="ceilometer-central-agent" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.197116 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad208c7-0fbb-48f9-9275-912bf20bea5a" containerName="proxy-httpd" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.197132 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad208c7-0fbb-48f9-9275-912bf20bea5a" containerName="sg-core" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.197149 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad208c7-0fbb-48f9-9275-912bf20bea5a" containerName="ceilometer-notification-agent" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.200682 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.205218 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.205305 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.205526 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.215104 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.232593 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.232641 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.232759 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.232807 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9b187f4-6b79-415e-b544-4ab1f8130849-run-httpd\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.233151 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9b187f4-6b79-415e-b544-4ab1f8130849-log-httpd\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.233304 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-n86gw\" (UniqueName: \"kubernetes.io/projected/f9b187f4-6b79-415e-b544-4ab1f8130849-kube-api-access-n86gw\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.233461 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-config-data\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.233585 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-scripts\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.336021 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.336084 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.336155 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.336202 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9b187f4-6b79-415e-b544-4ab1f8130849-run-httpd\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.336304 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9b187f4-6b79-415e-b544-4ab1f8130849-log-httpd\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.336363 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n86gw\" (UniqueName: \"kubernetes.io/projected/f9b187f4-6b79-415e-b544-4ab1f8130849-kube-api-access-n86gw\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.336429 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-config-data\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.336477 4991 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-scripts\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.337554 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9b187f4-6b79-415e-b544-4ab1f8130849-log-httpd\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.337571 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9b187f4-6b79-415e-b544-4ab1f8130849-run-httpd\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.344789 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.344809 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.345258 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-config-data\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.345368 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-scripts\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.346619 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.356707 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n86gw\" (UniqueName: \"kubernetes.io/projected/f9b187f4-6b79-415e-b544-4ab1f8130849-kube-api-access-n86gw\") pod \"ceilometer-0\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") " pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.540534 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.795640 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8c49db59-0819-4c3d-922d-04d10e579618","Type":"ContainerStarted","Data":"098a379d884849faef43dbf02a0e76cd1fc9df51d3d20a5e72a448847c9bf94c"} Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.796327 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.804094 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"13623a3a-f409-4086-acc7-af839f51a45b","Type":"ContainerStarted","Data":"b8514ce7abb81087cecdc9ad0e693c0556c2d673c354656884df57eceae76816"} Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.818518 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.115201013 podStartE2EDuration="6.818470679s" podCreationTimestamp="2025-09-29 10:02:58 +0000 UTC" firstStartedPulling="2025-09-29 10:02:59.917473305 +0000 UTC m=+1515.773401333" lastFinishedPulling="2025-09-29 10:03:03.620742971 +0000 UTC m=+1519.476670999" observedRunningTime="2025-09-29 10:03:04.809902784 +0000 UTC m=+1520.665830822" watchObservedRunningTime="2025-09-29 10:03:04.818470679 +0000 UTC m=+1520.674398707" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.844679 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.781851588 podStartE2EDuration="6.844644277s" podCreationTimestamp="2025-09-29 10:02:58 +0000 UTC" firstStartedPulling="2025-09-29 10:02:59.836395416 +0000 UTC m=+1515.692323444" lastFinishedPulling="2025-09-29 10:03:03.899188105 +0000 UTC m=+1519.755116133" observedRunningTime="2025-09-29 10:03:04.829348115 +0000 UTC m=+1520.685276143" watchObservedRunningTime="2025-09-29 10:03:04.844644277 +0000 UTC m=+1520.700572305" Sep 29 10:03:04 crc kubenswrapper[4991]: I0929 10:03:04.954135 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad208c7-0fbb-48f9-9275-912bf20bea5a" path="/var/lib/kubelet/pods/5ad208c7-0fbb-48f9-9275-912bf20bea5a/volumes" Sep 29 10:03:05 crc kubenswrapper[4991]: I0929 10:03:05.056648 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:03:05 crc kubenswrapper[4991]: I0929 10:03:05.197868 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4m7fw"] Sep 29 10:03:05 crc kubenswrapper[4991]: I0929 10:03:05.200439 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4m7fw" Sep 29 10:03:05 crc kubenswrapper[4991]: I0929 10:03:05.211765 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4m7fw"] Sep 29 10:03:05 crc kubenswrapper[4991]: I0929 10:03:05.260006 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbzxq\" (UniqueName: \"kubernetes.io/projected/97bb41bd-6b75-4a13-a0cd-242dd1018108-kube-api-access-hbzxq\") pod \"certified-operators-4m7fw\" (UID: \"97bb41bd-6b75-4a13-a0cd-242dd1018108\") " pod="openshift-marketplace/certified-operators-4m7fw" Sep 29 10:03:05 crc kubenswrapper[4991]: I0929 10:03:05.260054 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97bb41bd-6b75-4a13-a0cd-242dd1018108-utilities\") pod \"certified-operators-4m7fw\" (UID: \"97bb41bd-6b75-4a13-a0cd-242dd1018108\") " pod="openshift-marketplace/certified-operators-4m7fw" Sep 29 10:03:05 crc kubenswrapper[4991]: I0929 10:03:05.260111 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97bb41bd-6b75-4a13-a0cd-242dd1018108-catalog-content\") pod \"certified-operators-4m7fw\" (UID: \"97bb41bd-6b75-4a13-a0cd-242dd1018108\") " pod="openshift-marketplace/certified-operators-4m7fw" Sep 29 10:03:05 crc kubenswrapper[4991]: I0929 10:03:05.363140 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97bb41bd-6b75-4a13-a0cd-242dd1018108-catalog-content\") pod \"certified-operators-4m7fw\" (UID: \"97bb41bd-6b75-4a13-a0cd-242dd1018108\") " pod="openshift-marketplace/certified-operators-4m7fw" Sep 29 10:03:05 crc kubenswrapper[4991]: I0929 10:03:05.363389 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbzxq\" (UniqueName: \"kubernetes.io/projected/97bb41bd-6b75-4a13-a0cd-242dd1018108-kube-api-access-hbzxq\") pod \"certified-operators-4m7fw\" (UID: \"97bb41bd-6b75-4a13-a0cd-242dd1018108\") " pod="openshift-marketplace/certified-operators-4m7fw" Sep 29 10:03:05 crc kubenswrapper[4991]: I0929 10:03:05.363422 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97bb41bd-6b75-4a13-a0cd-242dd1018108-utilities\") pod \"certified-operators-4m7fw\" (UID: \"97bb41bd-6b75-4a13-a0cd-242dd1018108\") " pod="openshift-marketplace/certified-operators-4m7fw" Sep 29 10:03:05 crc kubenswrapper[4991]: I0929 10:03:05.363830 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97bb41bd-6b75-4a13-a0cd-242dd1018108-catalog-content\") pod \"certified-operators-4m7fw\" (UID: \"97bb41bd-6b75-4a13-a0cd-242dd1018108\") " pod="openshift-marketplace/certified-operators-4m7fw" Sep 29 10:03:05 crc kubenswrapper[4991]: I0929 10:03:05.363858 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97bb41bd-6b75-4a13-a0cd-242dd1018108-utilities\") pod \"certified-operators-4m7fw\" (UID: \"97bb41bd-6b75-4a13-a0cd-242dd1018108\") " pod="openshift-marketplace/certified-operators-4m7fw" Sep 29 10:03:05 crc kubenswrapper[4991]: I0929 10:03:05.384403 4991 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hbzxq\" (UniqueName: \"kubernetes.io/projected/97bb41bd-6b75-4a13-a0cd-242dd1018108-kube-api-access-hbzxq\") pod \"certified-operators-4m7fw\" (UID: \"97bb41bd-6b75-4a13-a0cd-242dd1018108\") " pod="openshift-marketplace/certified-operators-4m7fw" Sep 29 10:03:05 crc kubenswrapper[4991]: I0929 10:03:05.537339 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4m7fw" Sep 29 10:03:05 crc kubenswrapper[4991]: I0929 10:03:05.822629 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9b187f4-6b79-415e-b544-4ab1f8130849","Type":"ContainerStarted","Data":"06f9e1f0151fcd0bb9a858bc4c3ccc67f2afe4fd24f832e17d345bd8bd46c30a"} Sep 29 10:03:06 crc kubenswrapper[4991]: I0929 10:03:06.066674 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4m7fw"] Sep 29 10:03:06 crc kubenswrapper[4991]: I0929 10:03:06.693598 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 29 10:03:06 crc kubenswrapper[4991]: I0929 10:03:06.836390 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9b187f4-6b79-415e-b544-4ab1f8130849","Type":"ContainerStarted","Data":"78746e6e727964338536dc7705071ca712543b44e2256e04eedc9cdeb51af57f"} Sep 29 10:03:06 crc kubenswrapper[4991]: I0929 10:03:06.838473 4991 generic.go:334] "Generic (PLEG): container finished" podID="97bb41bd-6b75-4a13-a0cd-242dd1018108" containerID="6da4bc292359c519e1d94bcd923c8ed862ddccce04f853ac18f460cb85dc87ca" exitCode=0 Sep 29 10:03:06 crc kubenswrapper[4991]: I0929 10:03:06.838504 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m7fw" event={"ID":"97bb41bd-6b75-4a13-a0cd-242dd1018108","Type":"ContainerDied","Data":"6da4bc292359c519e1d94bcd923c8ed862ddccce04f853ac18f460cb85dc87ca"} Sep 29 10:03:06 crc kubenswrapper[4991]: I0929 10:03:06.838746 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m7fw" event={"ID":"97bb41bd-6b75-4a13-a0cd-242dd1018108","Type":"ContainerStarted","Data":"d83510cdfdffb41ae61cdb881e19ada6590ac06c6babb570de46fdc33cbad87f"} Sep 29 10:03:08 crc kubenswrapper[4991]: I0929 10:03:08.860994 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9b187f4-6b79-415e-b544-4ab1f8130849","Type":"ContainerStarted","Data":"ac796719d9674ff577e66d1332bf565b3bae09102d73902a42b9fc5e35ffe50f"} Sep 29 10:03:09 crc kubenswrapper[4991]: I0929 10:03:09.185162 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Sep 29 10:03:09 crc kubenswrapper[4991]: I0929 10:03:09.876022 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m7fw" event={"ID":"97bb41bd-6b75-4a13-a0cd-242dd1018108","Type":"ContainerStarted","Data":"ae978d0bc29f33665facfcd7f25bfbcefca3d26e1fec6efd4deb68c631a0569f"} Sep 29 10:03:10 crc kubenswrapper[4991]: I0929 10:03:10.887382 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9b187f4-6b79-415e-b544-4ab1f8130849","Type":"ContainerStarted","Data":"70e7cc553f9d1d6aa33627d267676749a9c5545138fb3e593179a1e9e7c7067a"} Sep 29 10:03:12 crc kubenswrapper[4991]: I0929 10:03:12.940165 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"f9b187f4-6b79-415e-b544-4ab1f8130849","Type":"ContainerStarted","Data":"fdfbf1aa7940fdd41ed9b4348285661b79538419e051f307ed1f019ee7b4b608"} Sep 29 10:03:12 crc kubenswrapper[4991]: I0929 10:03:12.941559 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Sep 29 10:03:12 crc kubenswrapper[4991]: I0929 10:03:12.958486 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.828486907 podStartE2EDuration="8.958462265s" podCreationTimestamp="2025-09-29 10:03:04 +0000 UTC" firstStartedPulling="2025-09-29 10:03:05.059160031 +0000 UTC m=+1520.915088059" lastFinishedPulling="2025-09-29 10:03:12.189135389 +0000 UTC m=+1528.045063417" observedRunningTime="2025-09-29 10:03:12.95560217 +0000 UTC m=+1528.811530218" watchObservedRunningTime="2025-09-29 10:03:12.958462265 +0000 UTC m=+1528.814390283" Sep 29 10:03:13 crc kubenswrapper[4991]: I0929 10:03:13.937703 4991 generic.go:334] "Generic (PLEG): container finished" podID="97bb41bd-6b75-4a13-a0cd-242dd1018108" containerID="ae978d0bc29f33665facfcd7f25bfbcefca3d26e1fec6efd4deb68c631a0569f" exitCode=0 Sep 29 10:03:13 crc kubenswrapper[4991]: I0929 10:03:13.937783 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m7fw" event={"ID":"97bb41bd-6b75-4a13-a0cd-242dd1018108","Type":"ContainerDied","Data":"ae978d0bc29f33665facfcd7f25bfbcefca3d26e1fec6efd4deb68c631a0569f"} Sep 29 10:03:14 crc kubenswrapper[4991]: I0929 10:03:14.953090 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m7fw" event={"ID":"97bb41bd-6b75-4a13-a0cd-242dd1018108","Type":"ContainerStarted","Data":"2c3e5cba7455953bc5c8cefcfcc58a23fd4ed5b3680f90f55b145e9febc36291"} Sep 29 10:03:14 crc kubenswrapper[4991]: I0929 10:03:14.974115 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4m7fw" podStartSLOduration=2.39988878 podStartE2EDuration="9.974094427s" podCreationTimestamp="2025-09-29 10:03:05 +0000 UTC" firstStartedPulling="2025-09-29 10:03:06.840478457 +0000 UTC m=+1522.696406485" lastFinishedPulling="2025-09-29 10:03:14.414684104 +0000 UTC m=+1530.270612132" observedRunningTime="2025-09-29 10:03:14.970288647 +0000 UTC m=+1530.826216685" watchObservedRunningTime="2025-09-29 10:03:14.974094427 +0000 UTC m=+1530.830022455" Sep 29 10:03:15 crc kubenswrapper[4991]: I0929 10:03:15.538076 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4m7fw" Sep 29 10:03:15 crc kubenswrapper[4991]: I0929 10:03:15.538211 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4m7fw" Sep 29 10:03:16 crc kubenswrapper[4991]: I0929 10:03:16.593466 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4m7fw" podUID="97bb41bd-6b75-4a13-a0cd-242dd1018108" containerName="registry-server" probeResult="failure" output=< Sep 29 10:03:16 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s Sep 29 10:03:16 crc kubenswrapper[4991]: > Sep 29 10:03:25 crc kubenswrapper[4991]: I0929 10:03:25.594792 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4m7fw" Sep 29 10:03:25 crc kubenswrapper[4991]: I0929 10:03:25.652777 4991 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4m7fw" Sep 29 10:03:25 crc kubenswrapper[4991]: I0929 10:03:25.838144 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4m7fw"] Sep 29 10:03:27 crc kubenswrapper[4991]: I0929 10:03:27.096727 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4m7fw" podUID="97bb41bd-6b75-4a13-a0cd-242dd1018108" containerName="registry-server" containerID="cri-o://2c3e5cba7455953bc5c8cefcfcc58a23fd4ed5b3680f90f55b145e9febc36291" gracePeriod=2 Sep 29 10:03:28 crc kubenswrapper[4991]: I0929 10:03:28.146123 4991 generic.go:334] "Generic (PLEG): container finished" podID="97bb41bd-6b75-4a13-a0cd-242dd1018108" containerID="2c3e5cba7455953bc5c8cefcfcc58a23fd4ed5b3680f90f55b145e9febc36291" exitCode=0 Sep 29 10:03:28 crc kubenswrapper[4991]: I0929 10:03:28.146191 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m7fw" event={"ID":"97bb41bd-6b75-4a13-a0cd-242dd1018108","Type":"ContainerDied","Data":"2c3e5cba7455953bc5c8cefcfcc58a23fd4ed5b3680f90f55b145e9febc36291"} Sep 29 10:03:28 crc kubenswrapper[4991]: I0929 10:03:28.298985 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4m7fw" Sep 29 10:03:28 crc kubenswrapper[4991]: I0929 10:03:28.357401 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbzxq\" (UniqueName: \"kubernetes.io/projected/97bb41bd-6b75-4a13-a0cd-242dd1018108-kube-api-access-hbzxq\") pod \"97bb41bd-6b75-4a13-a0cd-242dd1018108\" (UID: \"97bb41bd-6b75-4a13-a0cd-242dd1018108\") " Sep 29 10:03:28 crc kubenswrapper[4991]: I0929 10:03:28.357538 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97bb41bd-6b75-4a13-a0cd-242dd1018108-catalog-content\") pod \"97bb41bd-6b75-4a13-a0cd-242dd1018108\" (UID: \"97bb41bd-6b75-4a13-a0cd-242dd1018108\") " Sep 29 10:03:28 crc kubenswrapper[4991]: I0929 10:03:28.357639 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97bb41bd-6b75-4a13-a0cd-242dd1018108-utilities\") pod \"97bb41bd-6b75-4a13-a0cd-242dd1018108\" (UID: \"97bb41bd-6b75-4a13-a0cd-242dd1018108\") " Sep 29 10:03:28 crc kubenswrapper[4991]: I0929 10:03:28.358797 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97bb41bd-6b75-4a13-a0cd-242dd1018108-utilities" (OuterVolumeSpecName: "utilities") pod "97bb41bd-6b75-4a13-a0cd-242dd1018108" (UID: "97bb41bd-6b75-4a13-a0cd-242dd1018108"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:03:28 crc kubenswrapper[4991]: I0929 10:03:28.363854 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97bb41bd-6b75-4a13-a0cd-242dd1018108-kube-api-access-hbzxq" (OuterVolumeSpecName: "kube-api-access-hbzxq") pod "97bb41bd-6b75-4a13-a0cd-242dd1018108" (UID: "97bb41bd-6b75-4a13-a0cd-242dd1018108"). InnerVolumeSpecName "kube-api-access-hbzxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:03:28 crc kubenswrapper[4991]: I0929 10:03:28.402544 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97bb41bd-6b75-4a13-a0cd-242dd1018108-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97bb41bd-6b75-4a13-a0cd-242dd1018108" (UID: "97bb41bd-6b75-4a13-a0cd-242dd1018108"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:03:28 crc kubenswrapper[4991]: I0929 10:03:28.460209 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97bb41bd-6b75-4a13-a0cd-242dd1018108-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 10:03:28 crc kubenswrapper[4991]: I0929 10:03:28.460251 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbzxq\" (UniqueName: \"kubernetes.io/projected/97bb41bd-6b75-4a13-a0cd-242dd1018108-kube-api-access-hbzxq\") on node \"crc\" DevicePath \"\"" Sep 29 10:03:28 crc kubenswrapper[4991]: I0929 10:03:28.460266 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97bb41bd-6b75-4a13-a0cd-242dd1018108-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 10:03:29 crc kubenswrapper[4991]: I0929 10:03:29.160125 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m7fw" event={"ID":"97bb41bd-6b75-4a13-a0cd-242dd1018108","Type":"ContainerDied","Data":"d83510cdfdffb41ae61cdb881e19ada6590ac06c6babb570de46fdc33cbad87f"} Sep 29 10:03:29 crc kubenswrapper[4991]: I0929 10:03:29.161060 4991 scope.go:117] "RemoveContainer" containerID="2c3e5cba7455953bc5c8cefcfcc58a23fd4ed5b3680f90f55b145e9febc36291" Sep 29 10:03:29 crc kubenswrapper[4991]: I0929 10:03:29.160202 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4m7fw" Sep 29 10:03:29 crc kubenswrapper[4991]: I0929 10:03:29.189138 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4m7fw"] Sep 29 10:03:29 crc kubenswrapper[4991]: I0929 10:03:29.191307 4991 scope.go:117] "RemoveContainer" containerID="ae978d0bc29f33665facfcd7f25bfbcefca3d26e1fec6efd4deb68c631a0569f" Sep 29 10:03:29 crc kubenswrapper[4991]: I0929 10:03:29.200796 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4m7fw"] Sep 29 10:03:29 crc kubenswrapper[4991]: I0929 10:03:29.222680 4991 scope.go:117] "RemoveContainer" containerID="6da4bc292359c519e1d94bcd923c8ed862ddccce04f853ac18f460cb85dc87ca" Sep 29 10:03:30 crc kubenswrapper[4991]: I0929 10:03:30.941260 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97bb41bd-6b75-4a13-a0cd-242dd1018108" path="/var/lib/kubelet/pods/97bb41bd-6b75-4a13-a0cd-242dd1018108/volumes" Sep 29 10:03:34 crc kubenswrapper[4991]: I0929 10:03:34.555658 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Sep 29 10:03:46 crc kubenswrapper[4991]: I0929 10:03:46.833911 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-zhvvg"] Sep 29 10:03:46 crc kubenswrapper[4991]: I0929 10:03:46.846666 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-zhvvg"] Sep 29 10:03:47 crc kubenswrapper[4991]: I0929 10:03:46.994365 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="429e29da-7361-452f-8a5e-f42633c6d4b9" path="/var/lib/kubelet/pods/429e29da-7361-452f-8a5e-f42633c6d4b9/volumes" Sep 29 10:03:47 crc kubenswrapper[4991]: I0929 10:03:46.995551 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-dckn7"] Sep 29 10:03:47 crc kubenswrapper[4991]: E0929 10:03:46.996263 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97bb41bd-6b75-4a13-a0cd-242dd1018108" containerName="extract-utilities" Sep 29 10:03:47 crc kubenswrapper[4991]: I0929 10:03:46.996280 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="97bb41bd-6b75-4a13-a0cd-242dd1018108" containerName="extract-utilities" Sep 29 10:03:47 crc kubenswrapper[4991]: E0929 10:03:46.996314 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97bb41bd-6b75-4a13-a0cd-242dd1018108" containerName="extract-content" Sep 29 10:03:47 crc kubenswrapper[4991]: I0929 10:03:46.996322 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="97bb41bd-6b75-4a13-a0cd-242dd1018108" containerName="extract-content" Sep 29 10:03:47 crc kubenswrapper[4991]: E0929 10:03:46.996341 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97bb41bd-6b75-4a13-a0cd-242dd1018108" containerName="registry-server" Sep 29 10:03:47 crc kubenswrapper[4991]: I0929 10:03:46.996349 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="97bb41bd-6b75-4a13-a0cd-242dd1018108" containerName="registry-server" Sep 29 10:03:47 crc kubenswrapper[4991]: I0929 10:03:46.996657 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="97bb41bd-6b75-4a13-a0cd-242dd1018108" containerName="registry-server" Sep 29 10:03:47 crc kubenswrapper[4991]: I0929 10:03:46.998025 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-dckn7"] Sep 29 10:03:47 crc kubenswrapper[4991]: I0929 10:03:46.998162 4991 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-dckn7" Sep 29 10:03:47 crc kubenswrapper[4991]: I0929 10:03:47.132155 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b62b4b30-b2b5-4cfa-8373-4fde7e9b2078-config-data\") pod \"heat-db-sync-dckn7\" (UID: \"b62b4b30-b2b5-4cfa-8373-4fde7e9b2078\") " pod="openstack/heat-db-sync-dckn7" Sep 29 10:03:47 crc kubenswrapper[4991]: I0929 10:03:47.132395 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b62b4b30-b2b5-4cfa-8373-4fde7e9b2078-combined-ca-bundle\") pod \"heat-db-sync-dckn7\" (UID: \"b62b4b30-b2b5-4cfa-8373-4fde7e9b2078\") " pod="openstack/heat-db-sync-dckn7" Sep 29 10:03:47 crc kubenswrapper[4991]: I0929 10:03:47.132610 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2jrb\" (UniqueName: \"kubernetes.io/projected/b62b4b30-b2b5-4cfa-8373-4fde7e9b2078-kube-api-access-b2jrb\") pod \"heat-db-sync-dckn7\" (UID: \"b62b4b30-b2b5-4cfa-8373-4fde7e9b2078\") " pod="openstack/heat-db-sync-dckn7" Sep 29 10:03:47 crc kubenswrapper[4991]: I0929 10:03:47.236555 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b62b4b30-b2b5-4cfa-8373-4fde7e9b2078-config-data\") pod \"heat-db-sync-dckn7\" (UID: \"b62b4b30-b2b5-4cfa-8373-4fde7e9b2078\") " pod="openstack/heat-db-sync-dckn7" Sep 29 10:03:47 crc kubenswrapper[4991]: I0929 10:03:47.236887 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b62b4b30-b2b5-4cfa-8373-4fde7e9b2078-combined-ca-bundle\") pod \"heat-db-sync-dckn7\" (UID: \"b62b4b30-b2b5-4cfa-8373-4fde7e9b2078\") " pod="openstack/heat-db-sync-dckn7" Sep 29 10:03:47 crc kubenswrapper[4991]: I0929 10:03:47.236992 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2jrb\" (UniqueName: \"kubernetes.io/projected/b62b4b30-b2b5-4cfa-8373-4fde7e9b2078-kube-api-access-b2jrb\") pod \"heat-db-sync-dckn7\" (UID: \"b62b4b30-b2b5-4cfa-8373-4fde7e9b2078\") " pod="openstack/heat-db-sync-dckn7" Sep 29 10:03:47 crc kubenswrapper[4991]: I0929 10:03:47.268096 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b62b4b30-b2b5-4cfa-8373-4fde7e9b2078-config-data\") pod \"heat-db-sync-dckn7\" (UID: \"b62b4b30-b2b5-4cfa-8373-4fde7e9b2078\") " pod="openstack/heat-db-sync-dckn7" Sep 29 10:03:47 crc kubenswrapper[4991]: I0929 10:03:47.268554 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2jrb\" (UniqueName: \"kubernetes.io/projected/b62b4b30-b2b5-4cfa-8373-4fde7e9b2078-kube-api-access-b2jrb\") pod \"heat-db-sync-dckn7\" (UID: \"b62b4b30-b2b5-4cfa-8373-4fde7e9b2078\") " pod="openstack/heat-db-sync-dckn7" Sep 29 10:03:47 crc kubenswrapper[4991]: I0929 10:03:47.271403 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b62b4b30-b2b5-4cfa-8373-4fde7e9b2078-combined-ca-bundle\") pod \"heat-db-sync-dckn7\" (UID: \"b62b4b30-b2b5-4cfa-8373-4fde7e9b2078\") " pod="openstack/heat-db-sync-dckn7" Sep 29 10:03:47 crc kubenswrapper[4991]: I0929 10:03:47.357639 4991 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-dckn7" Sep 29 10:03:47 crc kubenswrapper[4991]: W0929 10:03:47.935899 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb62b4b30_b2b5_4cfa_8373_4fde7e9b2078.slice/crio-578ce2ac8a15e645c352063788024cbb78308d5d97682c533c4a6d33490673c1 WatchSource:0}: Error finding container 578ce2ac8a15e645c352063788024cbb78308d5d97682c533c4a6d33490673c1: Status 404 returned error can't find the container with id 578ce2ac8a15e645c352063788024cbb78308d5d97682c533c4a6d33490673c1 Sep 29 10:03:47 crc kubenswrapper[4991]: I0929 10:03:47.937746 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-dckn7"] Sep 29 10:03:48 crc kubenswrapper[4991]: I0929 10:03:48.404122 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dckn7" event={"ID":"b62b4b30-b2b5-4cfa-8373-4fde7e9b2078","Type":"ContainerStarted","Data":"578ce2ac8a15e645c352063788024cbb78308d5d97682c533c4a6d33490673c1"} Sep 29 10:03:48 crc kubenswrapper[4991]: I0929 10:03:48.996343 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:03:48 crc kubenswrapper[4991]: I0929 10:03:48.996617 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9b187f4-6b79-415e-b544-4ab1f8130849" containerName="ceilometer-central-agent" containerID="cri-o://78746e6e727964338536dc7705071ca712543b44e2256e04eedc9cdeb51af57f" gracePeriod=30 Sep 29 10:03:48 crc kubenswrapper[4991]: I0929 10:03:48.996745 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9b187f4-6b79-415e-b544-4ab1f8130849" containerName="proxy-httpd" containerID="cri-o://fdfbf1aa7940fdd41ed9b4348285661b79538419e051f307ed1f019ee7b4b608" gracePeriod=30 Sep 29 10:03:48 crc kubenswrapper[4991]: I0929 10:03:48.996783 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9b187f4-6b79-415e-b544-4ab1f8130849" containerName="sg-core" containerID="cri-o://70e7cc553f9d1d6aa33627d267676749a9c5545138fb3e593179a1e9e7c7067a" gracePeriod=30 Sep 29 10:03:48 crc kubenswrapper[4991]: I0929 10:03:48.996813 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9b187f4-6b79-415e-b544-4ab1f8130849" containerName="ceilometer-notification-agent" containerID="cri-o://ac796719d9674ff577e66d1332bf565b3bae09102d73902a42b9fc5e35ffe50f" gracePeriod=30 Sep 29 10:03:49 crc kubenswrapper[4991]: I0929 10:03:49.295654 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Sep 29 10:03:49 crc kubenswrapper[4991]: I0929 10:03:49.429192 4991 generic.go:334] "Generic (PLEG): container finished" podID="f9b187f4-6b79-415e-b544-4ab1f8130849" containerID="70e7cc553f9d1d6aa33627d267676749a9c5545138fb3e593179a1e9e7c7067a" exitCode=2 Sep 29 10:03:49 crc kubenswrapper[4991]: I0929 10:03:49.429565 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9b187f4-6b79-415e-b544-4ab1f8130849","Type":"ContainerDied","Data":"70e7cc553f9d1d6aa33627d267676749a9c5545138fb3e593179a1e9e7c7067a"} Sep 29 10:03:50 crc kubenswrapper[4991]: I0929 10:03:50.108216 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 29 10:03:50 crc kubenswrapper[4991]: I0929 
Sep 29 10:03:50 crc kubenswrapper[4991]: I0929 10:03:50.447855 4991 generic.go:334] "Generic (PLEG): container finished" podID="f9b187f4-6b79-415e-b544-4ab1f8130849" containerID="78746e6e727964338536dc7705071ca712543b44e2256e04eedc9cdeb51af57f" exitCode=0
Sep 29 10:03:50 crc kubenswrapper[4991]: I0929 10:03:50.447736 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9b187f4-6b79-415e-b544-4ab1f8130849","Type":"ContainerDied","Data":"fdfbf1aa7940fdd41ed9b4348285661b79538419e051f307ed1f019ee7b4b608"}
Sep 29 10:03:50 crc kubenswrapper[4991]: I0929 10:03:50.447889 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9b187f4-6b79-415e-b544-4ab1f8130849","Type":"ContainerDied","Data":"78746e6e727964338536dc7705071ca712543b44e2256e04eedc9cdeb51af57f"}
Sep 29 10:03:53 crc kubenswrapper[4991]: I0929 10:03:53.509665 4991 generic.go:334] "Generic (PLEG): container finished" podID="f9b187f4-6b79-415e-b544-4ab1f8130849" containerID="ac796719d9674ff577e66d1332bf565b3bae09102d73902a42b9fc5e35ffe50f" exitCode=0
Sep 29 10:03:53 crc kubenswrapper[4991]: I0929 10:03:53.509735 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9b187f4-6b79-415e-b544-4ab1f8130849","Type":"ContainerDied","Data":"ac796719d9674ff577e66d1332bf565b3bae09102d73902a42b9fc5e35ffe50f"}
Sep 29 10:03:53 crc kubenswrapper[4991]: I0929 10:03:53.984401 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.142544 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-sg-core-conf-yaml\") pod \"f9b187f4-6b79-415e-b544-4ab1f8130849\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") "
Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.143037 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-ceilometer-tls-certs\") pod \"f9b187f4-6b79-415e-b544-4ab1f8130849\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") "
Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.143103 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n86gw\" (UniqueName: \"kubernetes.io/projected/f9b187f4-6b79-415e-b544-4ab1f8130849-kube-api-access-n86gw\") pod \"f9b187f4-6b79-415e-b544-4ab1f8130849\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") "
Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.143163 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-config-data\") pod \"f9b187f4-6b79-415e-b544-4ab1f8130849\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") "
Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.143276 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-scripts\") pod \"f9b187f4-6b79-415e-b544-4ab1f8130849\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") "
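At this point all four ceilometer-0 containers have been killed with gracePeriod=30 and reported dead by the PLEG; sg-core exited 2, the other three exited 0. A small sketch that tallies the exit code the "container finished" entries (generic.go:334) report per container ID; the log file name is hypothetical:

```python
import re

# Match both forms seen above: containerID="cri-o://<64 hex>" on the kill
# lines and containerID="<64 hex>" on the PLEG "container finished" lines.
pat = re.compile(r'containerID="(?:cri-o://)?([0-9a-f]{64})" exitCode=(-?\d+)')

finished = {}
for line in open("kubelet.log"):  # hypothetical capture of this journal
    m = pat.search(line)
    if m:
        finished[m.group(1)] = int(m.group(2))

for cid, code in finished.items():
    print(cid[:12], "exited", code)
```

A nonzero exit code on a SIGTERM, as with sg-core here, usually just means the process does not trap the signal; the kubelet proceeds identically either way.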
Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.143312 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9b187f4-6b79-415e-b544-4ab1f8130849-log-httpd\") pod \"f9b187f4-6b79-415e-b544-4ab1f8130849\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") "
Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.143355 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9b187f4-6b79-415e-b544-4ab1f8130849-run-httpd\") pod \"f9b187f4-6b79-415e-b544-4ab1f8130849\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") "
Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.143413 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-combined-ca-bundle\") pod \"f9b187f4-6b79-415e-b544-4ab1f8130849\" (UID: \"f9b187f4-6b79-415e-b544-4ab1f8130849\") "
Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.148467 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9b187f4-6b79-415e-b544-4ab1f8130849-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f9b187f4-6b79-415e-b544-4ab1f8130849" (UID: "f9b187f4-6b79-415e-b544-4ab1f8130849"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.148741 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9b187f4-6b79-415e-b544-4ab1f8130849-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f9b187f4-6b79-415e-b544-4ab1f8130849" (UID: "f9b187f4-6b79-415e-b544-4ab1f8130849"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.157156 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-scripts" (OuterVolumeSpecName: "scripts") pod "f9b187f4-6b79-415e-b544-4ab1f8130849" (UID: "f9b187f4-6b79-415e-b544-4ab1f8130849"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.157155 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9b187f4-6b79-415e-b544-4ab1f8130849-kube-api-access-n86gw" (OuterVolumeSpecName: "kube-api-access-n86gw") pod "f9b187f4-6b79-415e-b544-4ab1f8130849" (UID: "f9b187f4-6b79-415e-b544-4ab1f8130849"). InnerVolumeSpecName "kube-api-access-n86gw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.212800 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f9b187f4-6b79-415e-b544-4ab1f8130849" (UID: "f9b187f4-6b79-415e-b544-4ab1f8130849"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.247585 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n86gw\" (UniqueName: \"kubernetes.io/projected/f9b187f4-6b79-415e-b544-4ab1f8130849-kube-api-access-n86gw\") on node \"crc\" DevicePath \"\"" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.247620 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.247634 4991 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9b187f4-6b79-415e-b544-4ab1f8130849-log-httpd\") on node \"crc\" DevicePath \"\"" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.247645 4991 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9b187f4-6b79-415e-b544-4ab1f8130849-run-httpd\") on node \"crc\" DevicePath \"\"" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.247656 4991 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.307710 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9b187f4-6b79-415e-b544-4ab1f8130849" (UID: "f9b187f4-6b79-415e-b544-4ab1f8130849"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.314702 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f9b187f4-6b79-415e-b544-4ab1f8130849" (UID: "f9b187f4-6b79-415e-b544-4ab1f8130849"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.339257 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-config-data" (OuterVolumeSpecName: "config-data") pod "f9b187f4-6b79-415e-b544-4ab1f8130849" (UID: "f9b187f4-6b79-415e-b544-4ab1f8130849"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.350294 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.350335 4991 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.350349 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9b187f4-6b79-415e-b544-4ab1f8130849-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.536544 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9b187f4-6b79-415e-b544-4ab1f8130849","Type":"ContainerDied","Data":"06f9e1f0151fcd0bb9a858bc4c3ccc67f2afe4fd24f832e17d345bd8bd46c30a"} Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.536834 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.536868 4991 scope.go:117] "RemoveContainer" containerID="fdfbf1aa7940fdd41ed9b4348285661b79538419e051f307ed1f019ee7b4b608" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.591340 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.603385 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.614881 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:03:54 crc kubenswrapper[4991]: E0929 10:03:54.615384 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b187f4-6b79-415e-b544-4ab1f8130849" containerName="proxy-httpd" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.615404 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b187f4-6b79-415e-b544-4ab1f8130849" containerName="proxy-httpd" Sep 29 10:03:54 crc kubenswrapper[4991]: E0929 10:03:54.615421 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b187f4-6b79-415e-b544-4ab1f8130849" containerName="sg-core" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.615427 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b187f4-6b79-415e-b544-4ab1f8130849" containerName="sg-core" Sep 29 10:03:54 crc kubenswrapper[4991]: E0929 10:03:54.615453 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b187f4-6b79-415e-b544-4ab1f8130849" containerName="ceilometer-central-agent" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.615460 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b187f4-6b79-415e-b544-4ab1f8130849" containerName="ceilometer-central-agent" Sep 29 10:03:54 crc kubenswrapper[4991]: E0929 10:03:54.615472 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b187f4-6b79-415e-b544-4ab1f8130849" containerName="ceilometer-notification-agent" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.615478 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b187f4-6b79-415e-b544-4ab1f8130849" 
containerName="ceilometer-notification-agent" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.615703 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b187f4-6b79-415e-b544-4ab1f8130849" containerName="ceilometer-central-agent" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.615724 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b187f4-6b79-415e-b544-4ab1f8130849" containerName="sg-core" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.615732 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b187f4-6b79-415e-b544-4ab1f8130849" containerName="proxy-httpd" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.615746 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b187f4-6b79-415e-b544-4ab1f8130849" containerName="ceilometer-notification-agent" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.617845 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.620330 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.621813 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.622383 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.634814 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.764589 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b194689f-6d7e-47aa-b796-7f0e959ce6b1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.764703 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b194689f-6d7e-47aa-b796-7f0e959ce6b1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.764828 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqkjw\" (UniqueName: \"kubernetes.io/projected/b194689f-6d7e-47aa-b796-7f0e959ce6b1-kube-api-access-lqkjw\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.764856 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b194689f-6d7e-47aa-b796-7f0e959ce6b1-log-httpd\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.764972 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b194689f-6d7e-47aa-b796-7f0e959ce6b1-scripts\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " 
pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.765057 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b194689f-6d7e-47aa-b796-7f0e959ce6b1-run-httpd\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.765117 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b194689f-6d7e-47aa-b796-7f0e959ce6b1-config-data\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.765215 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b194689f-6d7e-47aa-b796-7f0e959ce6b1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.801134 4991 scope.go:117] "RemoveContainer" containerID="70e7cc553f9d1d6aa33627d267676749a9c5545138fb3e593179a1e9e7c7067a" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.859975 4991 scope.go:117] "RemoveContainer" containerID="ac796719d9674ff577e66d1332bf565b3bae09102d73902a42b9fc5e35ffe50f" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.866777 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b194689f-6d7e-47aa-b796-7f0e959ce6b1-config-data\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.867586 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b194689f-6d7e-47aa-b796-7f0e959ce6b1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.867654 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b194689f-6d7e-47aa-b796-7f0e959ce6b1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.867707 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b194689f-6d7e-47aa-b796-7f0e959ce6b1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.867785 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqkjw\" (UniqueName: \"kubernetes.io/projected/b194689f-6d7e-47aa-b796-7f0e959ce6b1-kube-api-access-lqkjw\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.867807 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b194689f-6d7e-47aa-b796-7f0e959ce6b1-log-httpd\") 
pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.867868 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b194689f-6d7e-47aa-b796-7f0e959ce6b1-scripts\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.867918 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b194689f-6d7e-47aa-b796-7f0e959ce6b1-run-httpd\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.868286 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b194689f-6d7e-47aa-b796-7f0e959ce6b1-run-httpd\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.870523 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b194689f-6d7e-47aa-b796-7f0e959ce6b1-log-httpd\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.872820 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b194689f-6d7e-47aa-b796-7f0e959ce6b1-config-data\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.875731 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b194689f-6d7e-47aa-b796-7f0e959ce6b1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.877837 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b194689f-6d7e-47aa-b796-7f0e959ce6b1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.878337 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b194689f-6d7e-47aa-b796-7f0e959ce6b1-scripts\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.888537 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b194689f-6d7e-47aa-b796-7f0e959ce6b1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.896116 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqkjw\" (UniqueName: \"kubernetes.io/projected/b194689f-6d7e-47aa-b796-7f0e959ce6b1-kube-api-access-lqkjw\") pod \"ceilometer-0\" (UID: \"b194689f-6d7e-47aa-b796-7f0e959ce6b1\") " 
pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.944611 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.947284 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9b187f4-6b79-415e-b544-4ab1f8130849" path="/var/lib/kubelet/pods/f9b187f4-6b79-415e-b544-4ab1f8130849/volumes" Sep 29 10:03:54 crc kubenswrapper[4991]: I0929 10:03:54.997887 4991 scope.go:117] "RemoveContainer" containerID="78746e6e727964338536dc7705071ca712543b44e2256e04eedc9cdeb51af57f" Sep 29 10:03:55 crc kubenswrapper[4991]: I0929 10:03:55.303836 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="30e6fe0c-d910-462d-8181-b99f4b28091f" containerName="rabbitmq" containerID="cri-o://a7e3c30afcc534a481914ee298121a079ed738e1d07dddb66ab852e036c2478d" gracePeriod=604795 Sep 29 10:03:55 crc kubenswrapper[4991]: I0929 10:03:55.595299 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="71fdae0b-4fc4-4673-9088-80dd01eb7ce8" containerName="rabbitmq" containerID="cri-o://a6b30445ea27df6dddee6be63497a9e70ff273642530d6926415c40cad173d44" gracePeriod=604794 Sep 29 10:03:55 crc kubenswrapper[4991]: I0929 10:03:55.607713 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 29 10:03:56 crc kubenswrapper[4991]: I0929 10:03:56.571238 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b194689f-6d7e-47aa-b796-7f0e959ce6b1","Type":"ContainerStarted","Data":"6588c45feb471229dc87a8be84c2d1cdf02a0d2983ef0541886861a4b4b26723"} Sep 29 10:03:57 crc kubenswrapper[4991]: I0929 10:03:57.426453 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="71fdae0b-4fc4-4673-9088-80dd01eb7ce8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Sep 29 10:03:57 crc kubenswrapper[4991]: I0929 10:03:57.763636 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="30e6fe0c-d910-462d-8181-b99f4b28091f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Sep 29 10:04:02 crc kubenswrapper[4991]: I0929 10:04:02.663206 4991 generic.go:334] "Generic (PLEG): container finished" podID="30e6fe0c-d910-462d-8181-b99f4b28091f" containerID="a7e3c30afcc534a481914ee298121a079ed738e1d07dddb66ab852e036c2478d" exitCode=0 Sep 29 10:04:02 crc kubenswrapper[4991]: I0929 10:04:02.663326 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"30e6fe0c-d910-462d-8181-b99f4b28091f","Type":"ContainerDied","Data":"a7e3c30afcc534a481914ee298121a079ed738e1d07dddb66ab852e036c2478d"} Sep 29 10:04:03 crc kubenswrapper[4991]: I0929 10:04:03.695885 4991 generic.go:334] "Generic (PLEG): container finished" podID="71fdae0b-4fc4-4673-9088-80dd01eb7ce8" containerID="a6b30445ea27df6dddee6be63497a9e70ff273642530d6926415c40cad173d44" exitCode=0 Sep 29 10:04:03 crc kubenswrapper[4991]: I0929 10:04:03.696049 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"71fdae0b-4fc4-4673-9088-80dd01eb7ce8","Type":"ContainerDied","Data":"a6b30445ea27df6dddee6be63497a9e70ff273642530d6926415c40cad173d44"} Sep 29 10:04:07 crc 
Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.099185 4991 scope.go:117] "RemoveContainer" containerID="bb4a35b1701bd73c145458207f8e3649e39d503557abd4710d96181c0743d7fe"
Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.426722 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.440207 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"30e6fe0c-d910-462d-8181-b99f4b28091f\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") "
Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.440256 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/30e6fe0c-d910-462d-8181-b99f4b28091f-plugins-conf\") pod \"30e6fe0c-d910-462d-8181-b99f4b28091f\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") "
Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.440297 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-erlang-cookie\") pod \"30e6fe0c-d910-462d-8181-b99f4b28091f\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") "
Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.440353 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-confd\") pod \"30e6fe0c-d910-462d-8181-b99f4b28091f\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") "
Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.440382 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/30e6fe0c-d910-462d-8181-b99f4b28091f-server-conf\") pod \"30e6fe0c-d910-462d-8181-b99f4b28091f\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") "
Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.440417 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/30e6fe0c-d910-462d-8181-b99f4b28091f-erlang-cookie-secret\") pod \"30e6fe0c-d910-462d-8181-b99f4b28091f\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") "
Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.440436 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/30e6fe0c-d910-462d-8181-b99f4b28091f-pod-info\") pod \"30e6fe0c-d910-462d-8181-b99f4b28091f\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") "
Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.440464 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/30e6fe0c-d910-462d-8181-b99f4b28091f-config-data\") pod \"30e6fe0c-d910-462d-8181-b99f4b28091f\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") "
Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.440496 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrqn7\" (UniqueName: \"kubernetes.io/projected/30e6fe0c-d910-462d-8181-b99f4b28091f-kube-api-access-mrqn7\") pod \"30e6fe0c-d910-462d-8181-b99f4b28091f\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") "
"operationExecutor.UnmountVolume started for volume \"kube-api-access-mrqn7\" (UniqueName: \"kubernetes.io/projected/30e6fe0c-d910-462d-8181-b99f4b28091f-kube-api-access-mrqn7\") pod \"30e6fe0c-d910-462d-8181-b99f4b28091f\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.440515 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-plugins\") pod \"30e6fe0c-d910-462d-8181-b99f4b28091f\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.440531 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-tls\") pod \"30e6fe0c-d910-462d-8181-b99f4b28091f\" (UID: \"30e6fe0c-d910-462d-8181-b99f4b28091f\") " Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.442865 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "30e6fe0c-d910-462d-8181-b99f4b28091f" (UID: "30e6fe0c-d910-462d-8181-b99f4b28091f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.443292 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "30e6fe0c-d910-462d-8181-b99f4b28091f" (UID: "30e6fe0c-d910-462d-8181-b99f4b28091f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.443613 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30e6fe0c-d910-462d-8181-b99f4b28091f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "30e6fe0c-d910-462d-8181-b99f4b28091f" (UID: "30e6fe0c-d910-462d-8181-b99f4b28091f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.451390 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/30e6fe0c-d910-462d-8181-b99f4b28091f-pod-info" (OuterVolumeSpecName: "pod-info") pod "30e6fe0c-d910-462d-8181-b99f4b28091f" (UID: "30e6fe0c-d910-462d-8181-b99f4b28091f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.453366 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30e6fe0c-d910-462d-8181-b99f4b28091f-kube-api-access-mrqn7" (OuterVolumeSpecName: "kube-api-access-mrqn7") pod "30e6fe0c-d910-462d-8181-b99f4b28091f" (UID: "30e6fe0c-d910-462d-8181-b99f4b28091f"). InnerVolumeSpecName "kube-api-access-mrqn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.473195 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30e6fe0c-d910-462d-8181-b99f4b28091f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "30e6fe0c-d910-462d-8181-b99f4b28091f" (UID: "30e6fe0c-d910-462d-8181-b99f4b28091f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.473320 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "30e6fe0c-d910-462d-8181-b99f4b28091f" (UID: "30e6fe0c-d910-462d-8181-b99f4b28091f"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.492090 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "30e6fe0c-d910-462d-8181-b99f4b28091f" (UID: "30e6fe0c-d910-462d-8181-b99f4b28091f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.560183 4991 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/30e6fe0c-d910-462d-8181-b99f4b28091f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.560464 4991 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/30e6fe0c-d910-462d-8181-b99f4b28091f-pod-info\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.560543 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrqn7\" (UniqueName: \"kubernetes.io/projected/30e6fe0c-d910-462d-8181-b99f4b28091f-kube-api-access-mrqn7\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.560617 4991 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.560701 4991 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.560791 4991 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.560868 4991 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/30e6fe0c-d910-462d-8181-b99f4b28091f-plugins-conf\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.563537 4991 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:08 crc 
Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.675132 4991 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.704921 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30e6fe0c-d910-462d-8181-b99f4b28091f-config-data" (OuterVolumeSpecName: "config-data") pod "30e6fe0c-d910-462d-8181-b99f4b28091f" (UID: "30e6fe0c-d910-462d-8181-b99f4b28091f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.804705 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30e6fe0c-d910-462d-8181-b99f4b28091f-server-conf" (OuterVolumeSpecName: "server-conf") pod "30e6fe0c-d910-462d-8181-b99f4b28091f" (UID: "30e6fe0c-d910-462d-8181-b99f4b28091f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.810210 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "30e6fe0c-d910-462d-8181-b99f4b28091f" (UID: "30e6fe0c-d910-462d-8181-b99f4b28091f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.845566 4991 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/30e6fe0c-d910-462d-8181-b99f4b28091f-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.845607 4991 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/30e6fe0c-d910-462d-8181-b99f4b28091f-server-conf\") on node \"crc\" DevicePath \"\""
Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.845629 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/30e6fe0c-d910-462d-8181-b99f4b28091f-config-data\") on node \"crc\" DevicePath \"\""
Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.913670 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"30e6fe0c-d910-462d-8181-b99f4b28091f","Type":"ContainerDied","Data":"332cbbf011fc1182d497b8480f43fc5d156983bda64a52f16bfd8f7fff0d7ef1"}
Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.913727 4991 scope.go:117] "RemoveContainer" containerID="a7e3c30afcc534a481914ee298121a079ed738e1d07dddb66ab852e036c2478d"
Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.914012 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
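For the local-volume "persistence" claim the teardown is two-step: the per-pod UnmountVolume.TearDown (OuterVolumeSpecName "persistence", InnerVolumeSpecName "local-storage03-crc") runs first, then the node-level UnmountDevice, and only afterwards does the reconciler record the volume as detached. A sketch that checks this ordering over a capture like the one above, assuming journal text on stdin; the volume name is taken from the log:

```python
import sys

VOL = "local-storage03-crc"
order = []
for line in sys.stdin:
    if VOL not in line:
        continue
    if "UnmountVolume.TearDown succeeded" in line:
        order.append("teardown")
    elif "UnmountDevice succeeded" in line:
        order.append("unmount_device")
    elif "Volume detached" in line:
        order.append("detached")

# First occurrences must appear in teardown -> device unmount -> detached order.
assert order.index("teardown") < order.index("unmount_device") < order.index("detached")
print("ordering ok:", order)
```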
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:08 crc kubenswrapper[4991]: I0929 10:04:08.998057 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.012338 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.027376 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 29 10:04:09 crc kubenswrapper[4991]: E0929 10:04:09.028079 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30e6fe0c-d910-462d-8181-b99f4b28091f" containerName="setup-container" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.028157 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="30e6fe0c-d910-462d-8181-b99f4b28091f" containerName="setup-container" Sep 29 10:04:09 crc kubenswrapper[4991]: E0929 10:04:09.028289 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30e6fe0c-d910-462d-8181-b99f4b28091f" containerName="rabbitmq" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.028364 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="30e6fe0c-d910-462d-8181-b99f4b28091f" containerName="rabbitmq" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.028707 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="30e6fe0c-d910-462d-8181-b99f4b28091f" containerName="rabbitmq" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.030153 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.033071 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.033397 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.033844 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.034764 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.035382 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.035599 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.041341 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.052782 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-nph8c" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.156540 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.156598 4991 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.159349 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.159401 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.159589 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.159900 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.159935 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.160025 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.160056 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.160079 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb9qc\" (UniqueName: \"kubernetes.io/projected/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-kube-api-access-gb9qc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 
Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.160192 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.262653 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.262704 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb9qc\" (UniqueName: \"kubernetes.io/projected/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-kube-api-access-gb9qc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.262786 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.262847 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.262871 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.262921 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.262960 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.263035 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.263186 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0"
\"kubernetes.io/configmap/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.263212 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.263259 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.264619 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.265229 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.265657 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.265864 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.271534 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.274775 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.274790 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " 
pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.275050 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.275085 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.281216 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.283910 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb9qc\" (UniqueName: \"kubernetes.io/projected/e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0-kube-api-access-gb9qc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.318438 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0\") " pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:09 crc kubenswrapper[4991]: I0929 10:04:09.370087 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:10 crc kubenswrapper[4991]: I0929 10:04:10.941502 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30e6fe0c-d910-462d-8181-b99f4b28091f" path="/var/lib/kubelet/pods/30e6fe0c-d910-462d-8181-b99f4b28091f/volumes" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.316680 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-qfwdb"] Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.319375 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.321645 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.344452 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-qfwdb"] Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.415420 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn5v8\" (UniqueName: \"kubernetes.io/projected/71a3f9e7-e685-480d-b40f-7dc03205099e-kube-api-access-wn5v8\") pod \"dnsmasq-dns-594cb89c79-qfwdb\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.415502 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-config\") pod \"dnsmasq-dns-594cb89c79-qfwdb\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.415638 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-qfwdb\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.415713 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-qfwdb\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.415754 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-dns-svc\") pod \"dnsmasq-dns-594cb89c79-qfwdb\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.415804 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-qfwdb\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.415841 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-qfwdb\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.518133 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-qfwdb\" 
(UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.518228 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-qfwdb\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.518341 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-config\") pod \"dnsmasq-dns-594cb89c79-qfwdb\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.518369 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn5v8\" (UniqueName: \"kubernetes.io/projected/71a3f9e7-e685-480d-b40f-7dc03205099e-kube-api-access-wn5v8\") pod \"dnsmasq-dns-594cb89c79-qfwdb\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.518473 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-qfwdb\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.518568 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-qfwdb\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.518626 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-dns-svc\") pod \"dnsmasq-dns-594cb89c79-qfwdb\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.522262 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-qfwdb\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.528424 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-qfwdb\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.528795 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-qfwdb\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " 
pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.528797 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-dns-svc\") pod \"dnsmasq-dns-594cb89c79-qfwdb\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.529075 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-config\") pod \"dnsmasq-dns-594cb89c79-qfwdb\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.532615 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-qfwdb\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.548905 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn5v8\" (UniqueName: \"kubernetes.io/projected/71a3f9e7-e685-480d-b40f-7dc03205099e-kube-api-access-wn5v8\") pod \"dnsmasq-dns-594cb89c79-qfwdb\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:11 crc kubenswrapper[4991]: I0929 10:04:11.664725 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:12 crc kubenswrapper[4991]: I0929 10:04:12.763979 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="30e6fe0c-d910-462d-8181-b99f4b28091f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: i/o timeout" Sep 29 10:04:14 crc kubenswrapper[4991]: I0929 10:04:14.761620 4991 scope.go:117] "RemoveContainer" containerID="a554143176e372518bbb5dce50edd9c99c582433babaa80d985d2c75bc3057b5" Sep 29 10:04:16 crc kubenswrapper[4991]: I0929 10:04:16.823404 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Sep 29 10:04:16 crc kubenswrapper[4991]: I0929 10:04:16.979261 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-pod-info\") pod \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " Sep 29 10:04:16 crc kubenswrapper[4991]: I0929 10:04:16.979588 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-plugins-conf\") pod \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " Sep 29 10:04:16 crc kubenswrapper[4991]: I0929 10:04:16.979793 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " Sep 29 10:04:16 crc kubenswrapper[4991]: I0929 10:04:16.979883 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-erlang-cookie\") pod \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " Sep 29 10:04:16 crc kubenswrapper[4991]: I0929 10:04:16.980028 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skqld\" (UniqueName: \"kubernetes.io/projected/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-kube-api-access-skqld\") pod \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " Sep 29 10:04:16 crc kubenswrapper[4991]: I0929 10:04:16.980074 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-plugins\") pod \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " Sep 29 10:04:16 crc kubenswrapper[4991]: I0929 10:04:16.980179 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-config-data\") pod \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " Sep 29 10:04:16 crc kubenswrapper[4991]: I0929 10:04:16.980291 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-tls\") pod \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " Sep 29 10:04:16 crc kubenswrapper[4991]: I0929 10:04:16.980361 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-erlang-cookie-secret\") pod \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " Sep 29 10:04:16 crc kubenswrapper[4991]: I0929 10:04:16.980531 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-server-conf\") pod \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\" (UID: 
\"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " Sep 29 10:04:16 crc kubenswrapper[4991]: I0929 10:04:16.980627 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-confd\") pod \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\" (UID: \"71fdae0b-4fc4-4673-9088-80dd01eb7ce8\") " Sep 29 10:04:16 crc kubenswrapper[4991]: I0929 10:04:16.980636 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "71fdae0b-4fc4-4673-9088-80dd01eb7ce8" (UID: "71fdae0b-4fc4-4673-9088-80dd01eb7ce8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:04:16 crc kubenswrapper[4991]: I0929 10:04:16.981672 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "71fdae0b-4fc4-4673-9088-80dd01eb7ce8" (UID: "71fdae0b-4fc4-4673-9088-80dd01eb7ce8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:04:16 crc kubenswrapper[4991]: I0929 10:04:16.982917 4991 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-plugins-conf\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:16 crc kubenswrapper[4991]: I0929 10:04:16.983302 4991 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:16 crc kubenswrapper[4991]: I0929 10:04:16.987964 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "71fdae0b-4fc4-4673-9088-80dd01eb7ce8" (UID: "71fdae0b-4fc4-4673-9088-80dd01eb7ce8"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:04:16 crc kubenswrapper[4991]: I0929 10:04:16.988214 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "71fdae0b-4fc4-4673-9088-80dd01eb7ce8" (UID: "71fdae0b-4fc4-4673-9088-80dd01eb7ce8"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 29 10:04:16 crc kubenswrapper[4991]: I0929 10:04:16.990677 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "71fdae0b-4fc4-4673-9088-80dd01eb7ce8" (UID: "71fdae0b-4fc4-4673-9088-80dd01eb7ce8"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:04:16 crc kubenswrapper[4991]: I0929 10:04:16.994253 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-kube-api-access-skqld" (OuterVolumeSpecName: "kube-api-access-skqld") pod "71fdae0b-4fc4-4673-9088-80dd01eb7ce8" (UID: "71fdae0b-4fc4-4673-9088-80dd01eb7ce8"). 
InnerVolumeSpecName "kube-api-access-skqld". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.015641 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-pod-info" (OuterVolumeSpecName: "pod-info") pod "71fdae0b-4fc4-4673-9088-80dd01eb7ce8" (UID: "71fdae0b-4fc4-4673-9088-80dd01eb7ce8"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.020307 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "71fdae0b-4fc4-4673-9088-80dd01eb7ce8" (UID: "71fdae0b-4fc4-4673-9088-80dd01eb7ce8"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.053353 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-config-data" (OuterVolumeSpecName: "config-data") pod "71fdae0b-4fc4-4673-9088-80dd01eb7ce8" (UID: "71fdae0b-4fc4-4673-9088-80dd01eb7ce8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.087168 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-server-conf" (OuterVolumeSpecName: "server-conf") pod "71fdae0b-4fc4-4673-9088-80dd01eb7ce8" (UID: "71fdae0b-4fc4-4673-9088-80dd01eb7ce8"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.093871 4991 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-pod-info\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.093920 4991 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.093934 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skqld\" (UniqueName: \"kubernetes.io/projected/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-kube-api-access-skqld\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.094004 4991 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.094018 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.094030 4991 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.094042 4991 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.094055 4991 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-server-conf\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.099796 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"71fdae0b-4fc4-4673-9088-80dd01eb7ce8","Type":"ContainerDied","Data":"1539f70c79ee412c45b7cb56662426715244ad6fcbddfdd4b62452fbe9d25f88"} Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.099880 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.150742 4991 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.195600 4991 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.196391 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "71fdae0b-4fc4-4673-9088-80dd01eb7ce8" (UID: "71fdae0b-4fc4-4673-9088-80dd01eb7ce8"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.297641 4991 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71fdae0b-4fc4-4673-9088-80dd01eb7ce8-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.441267 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.455343 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.476327 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Sep 29 10:04:17 crc kubenswrapper[4991]: E0929 10:04:17.477045 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71fdae0b-4fc4-4673-9088-80dd01eb7ce8" containerName="rabbitmq" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.477063 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="71fdae0b-4fc4-4673-9088-80dd01eb7ce8" containerName="rabbitmq" Sep 29 10:04:17 crc kubenswrapper[4991]: E0929 10:04:17.477073 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71fdae0b-4fc4-4673-9088-80dd01eb7ce8" containerName="setup-container" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.477079 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="71fdae0b-4fc4-4673-9088-80dd01eb7ce8" containerName="setup-container" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.477332 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="71fdae0b-4fc4-4673-9088-80dd01eb7ce8" containerName="rabbitmq" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.478859 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.487658 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.487683 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.487743 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.487911 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.488022 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6lzpf" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.488124 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.488190 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.510584 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.633101 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.633417 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.633581 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.633736 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.633914 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.634038 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.634139 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.634335 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-config-data\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.634517 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.634623 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.634726 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7b7f\" (UniqueName: \"kubernetes.io/projected/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-kube-api-access-x7b7f\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.737058 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.737212 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.737267 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.737301 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 
10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.737569 4991 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.737745 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-config-data\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.737855 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.737896 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.737920 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7b7f\" (UniqueName: \"kubernetes.io/projected/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-kube-api-access-x7b7f\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.737993 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.738018 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.738042 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.738053 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.738718 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.739169 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-config-data\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.739686 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.739692 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.745561 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.745588 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.746597 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.753821 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.762016 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7b7f\" (UniqueName: \"kubernetes.io/projected/bf158d99-d08f-4ce7-b60d-2c685d55a6f7-kube-api-access-x7b7f\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.834334 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"bf158d99-d08f-4ce7-b60d-2c685d55a6f7\") " pod="openstack/rabbitmq-server-0" Sep 29 10:04:17 crc kubenswrapper[4991]: I0929 10:04:17.854283 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Sep 29 10:04:18 crc kubenswrapper[4991]: E0929 10:04:18.413013 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Sep 29 10:04:18 crc kubenswrapper[4991]: E0929 10:04:18.413078 4991 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Sep 29 10:04:18 crc kubenswrapper[4991]: E0929 10:04:18.413195 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b2jrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-dckn7_openstack(b62b4b30-b2b5-4cfa-8373-4fde7e9b2078): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 29 10:04:18 crc kubenswrapper[4991]: E0929 10:04:18.414350 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-dckn7" podUID="b62b4b30-b2b5-4cfa-8373-4fde7e9b2078" Sep 29 10:04:18 crc kubenswrapper[4991]: E0929 10:04:18.845555 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Sep 29 10:04:18 crc kubenswrapper[4991]: E0929 10:04:18.845613 4991 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Sep 29 10:04:18 crc kubenswrapper[4991]: E0929 10:04:18.845770 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n578h5c4h68dh6hdfh656h5bch68dh679h66bh5ffhf9h58fh85h6dh677h6h5cbh5f6h689h4h5cbh5bch5h5dh568hc7h574h6h5f4h596h65dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqkjw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b194689f-6d7e-47aa-b796-7f0e959ce6b1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 29 10:04:18 crc kubenswrapper[4991]: I0929 10:04:18.866057 4991 scope.go:117] "RemoveContainer" containerID="ed978ee5eb9dc72c373392ddcedd6ab53699ab604b87eb2c882a1b37204a079d" Sep 29 10:04:18 crc kubenswrapper[4991]: I0929 10:04:18.886580 4991 scope.go:117] "RemoveContainer" containerID="ed978ee5eb9dc72c373392ddcedd6ab53699ab604b87eb2c882a1b37204a079d" Sep 29 10:04:18 crc kubenswrapper[4991]: I0929 10:04:18.946750 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="71fdae0b-4fc4-4673-9088-80dd01eb7ce8" path="/var/lib/kubelet/pods/71fdae0b-4fc4-4673-9088-80dd01eb7ce8/volumes" Sep 29 10:04:18 crc kubenswrapper[4991]: I0929 10:04:18.958263 4991 scope.go:117] "RemoveContainer" containerID="a6b30445ea27df6dddee6be63497a9e70ff273642530d6926415c40cad173d44" Sep 29 10:04:18 crc kubenswrapper[4991]: E0929 10:04:18.958581 4991 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_setup-container_rabbitmq-cell1-server-0_openstack_30e6fe0c-d910-462d-8181-b99f4b28091f_0 in pod sandbox 332cbbf011fc1182d497b8480f43fc5d156983bda64a52f16bfd8f7fff0d7ef1: identifier is not a container" containerID="ed978ee5eb9dc72c373392ddcedd6ab53699ab604b87eb2c882a1b37204a079d" Sep 29 10:04:18 crc kubenswrapper[4991]: E0929 10:04:18.958629 4991 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_setup-container_rabbitmq-cell1-server-0_openstack_30e6fe0c-d910-462d-8181-b99f4b28091f_0 in pod sandbox 332cbbf011fc1182d497b8480f43fc5d156983bda64a52f16bfd8f7fff0d7ef1: identifier is not a container" containerID="ed978ee5eb9dc72c373392ddcedd6ab53699ab604b87eb2c882a1b37204a079d" Sep 29 10:04:19 crc kubenswrapper[4991]: E0929 10:04:19.173487 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-dckn7" podUID="b62b4b30-b2b5-4cfa-8373-4fde7e9b2078" Sep 29 10:04:19 crc kubenswrapper[4991]: I0929 10:04:19.204555 4991 scope.go:117] "RemoveContainer" containerID="7910fd79a1c65359a5d7994d2bfbc4276d57b7a7ff1a657124cf305e57695eac" Sep 29 10:04:19 crc kubenswrapper[4991]: I0929 10:04:19.467821 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-qfwdb"] Sep 29 10:04:19 crc kubenswrapper[4991]: I0929 10:04:19.675640 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 29 10:04:19 crc kubenswrapper[4991]: W0929 10:04:19.690709 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode731a49f_4ae8_4f37_b8c6_e7ab6af02cf0.slice/crio-d9ea92f5cd580d17ad9d6d7ffabf3ab9f93b8eb3a322a5036cf81e516d08d6e2 WatchSource:0}: Error finding container d9ea92f5cd580d17ad9d6d7ffabf3ab9f93b8eb3a322a5036cf81e516d08d6e2: Status 404 returned error can't find the container with id d9ea92f5cd580d17ad9d6d7ffabf3ab9f93b8eb3a322a5036cf81e516d08d6e2 Sep 29 10:04:19 crc kubenswrapper[4991]: I0929 10:04:19.810479 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Sep 29 10:04:20 crc kubenswrapper[4991]: I0929 10:04:20.146652 4991 generic.go:334] "Generic (PLEG): container finished" podID="71a3f9e7-e685-480d-b40f-7dc03205099e" containerID="d1b00d6a02b4849662c1fd8e99794febeee694ccfd2e51d75289150abeb151b7" exitCode=0 Sep 29 10:04:20 crc kubenswrapper[4991]: I0929 10:04:20.146881 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" event={"ID":"71a3f9e7-e685-480d-b40f-7dc03205099e","Type":"ContainerDied","Data":"d1b00d6a02b4849662c1fd8e99794febeee694ccfd2e51d75289150abeb151b7"} Sep 29 10:04:20 crc kubenswrapper[4991]: I0929 10:04:20.147016 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" event={"ID":"71a3f9e7-e685-480d-b40f-7dc03205099e","Type":"ContainerStarted","Data":"3c60b3af7e7edb248f67232a5d4398d06ef069f27efc52993ec49f2786889fc8"} Sep 29 10:04:20 crc kubenswrapper[4991]: I0929 10:04:20.148414 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf158d99-d08f-4ce7-b60d-2c685d55a6f7","Type":"ContainerStarted","Data":"71b0fefe654ccdf723cf0362516a67ef5cc164d33448142135882c67629e040c"} Sep 29 10:04:20 crc kubenswrapper[4991]: I0929 10:04:20.152438 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0","Type":"ContainerStarted","Data":"d9ea92f5cd580d17ad9d6d7ffabf3ab9f93b8eb3a322a5036cf81e516d08d6e2"} Sep 29 10:04:21 crc kubenswrapper[4991]: I0929 10:04:21.166004 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" event={"ID":"71a3f9e7-e685-480d-b40f-7dc03205099e","Type":"ContainerStarted","Data":"2078b5789380a4a94b15e41d133f0361d4f5781e1fb7ee9cf2ea699e2c66ed4b"} Sep 29 10:04:21 crc kubenswrapper[4991]: I0929 10:04:21.166524 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:21 crc kubenswrapper[4991]: I0929 10:04:21.171247 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf158d99-d08f-4ce7-b60d-2c685d55a6f7","Type":"ContainerStarted","Data":"213dae745aae283ea0512fce555f5b349164bc68b7c38f1d0968c04aa0e4ea33"} Sep 29 10:04:21 crc kubenswrapper[4991]: I0929 10:04:21.173967 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0","Type":"ContainerStarted","Data":"50901b4a4a421e7f07efe981a38b57fe5479cfae62c8836be853fe9e233edf5d"} Sep 29 10:04:21 crc kubenswrapper[4991]: I0929 10:04:21.176394 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b194689f-6d7e-47aa-b796-7f0e959ce6b1","Type":"ContainerStarted","Data":"b3705c459b96a37b1c401bae7955eac89a2d814702a6fb9a9dc4bd31c8e503cf"} Sep 29 10:04:21 crc kubenswrapper[4991]: I0929 10:04:21.196141 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" podStartSLOduration=10.196117023 podStartE2EDuration="10.196117023s" podCreationTimestamp="2025-09-29 10:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:04:21.18762438 +0000 UTC m=+1597.043552418" watchObservedRunningTime="2025-09-29 10:04:21.196117023 +0000 UTC m=+1597.052045051" Sep 29 10:04:22 crc kubenswrapper[4991]: I0929 10:04:22.190211 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b194689f-6d7e-47aa-b796-7f0e959ce6b1","Type":"ContainerStarted","Data":"746572dc6dcafc1fdf06016339d793486c80634459829a0ebbfcc6887561a7fd"} Sep 29 10:04:22 crc kubenswrapper[4991]: E0929 10:04:22.895206 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="b194689f-6d7e-47aa-b796-7f0e959ce6b1" Sep 29 10:04:23 crc kubenswrapper[4991]: I0929 10:04:23.208562 4991 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"b194689f-6d7e-47aa-b796-7f0e959ce6b1","Type":"ContainerStarted","Data":"478b405724e5a19e08f3358215c34c58dd857c685fa4800c0fec3688c6c28ff2"} Sep 29 10:04:23 crc kubenswrapper[4991]: I0929 10:04:23.209399 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Sep 29 10:04:23 crc kubenswrapper[4991]: E0929 10:04:23.211611 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="b194689f-6d7e-47aa-b796-7f0e959ce6b1" Sep 29 10:04:24 crc kubenswrapper[4991]: E0929 10:04:24.223063 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="b194689f-6d7e-47aa-b796-7f0e959ce6b1" Sep 29 10:04:26 crc kubenswrapper[4991]: I0929 10:04:26.666078 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:26 crc kubenswrapper[4991]: I0929 10:04:26.727353 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-fj7w9"] Sep 29 10:04:26 crc kubenswrapper[4991]: I0929 10:04:26.728667 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" podUID="1e665248-de57-45c0-8b0e-5ebc858626aa" containerName="dnsmasq-dns" containerID="cri-o://786c8af6ca8b957cdcc9cf7e0a745774fa7bf0e571ff0db36144bdffab6cd14a" gracePeriod=10 Sep 29 10:04:26 crc kubenswrapper[4991]: I0929 10:04:26.985752 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-hrbsx"] Sep 29 10:04:26 crc kubenswrapper[4991]: I0929 10:04:26.988124 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.012264 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-hrbsx"] Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.113824 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab8e114e-a9e2-4847-a068-fcb601984824-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-hrbsx\" (UID: \"ab8e114e-a9e2-4847-a068-fcb601984824\") " pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.114019 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp2k7\" (UniqueName: \"kubernetes.io/projected/ab8e114e-a9e2-4847-a068-fcb601984824-kube-api-access-rp2k7\") pod \"dnsmasq-dns-5596c69fcc-hrbsx\" (UID: \"ab8e114e-a9e2-4847-a068-fcb601984824\") " pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.114085 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ab8e114e-a9e2-4847-a068-fcb601984824-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-hrbsx\" (UID: \"ab8e114e-a9e2-4847-a068-fcb601984824\") " pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.114171 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab8e114e-a9e2-4847-a068-fcb601984824-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-hrbsx\" (UID: \"ab8e114e-a9e2-4847-a068-fcb601984824\") " pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.114219 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab8e114e-a9e2-4847-a068-fcb601984824-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-hrbsx\" (UID: \"ab8e114e-a9e2-4847-a068-fcb601984824\") " pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.114304 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab8e114e-a9e2-4847-a068-fcb601984824-config\") pod \"dnsmasq-dns-5596c69fcc-hrbsx\" (UID: \"ab8e114e-a9e2-4847-a068-fcb601984824\") " pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.114459 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab8e114e-a9e2-4847-a068-fcb601984824-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-hrbsx\" (UID: \"ab8e114e-a9e2-4847-a068-fcb601984824\") " pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.217348 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab8e114e-a9e2-4847-a068-fcb601984824-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-hrbsx\" (UID: \"ab8e114e-a9e2-4847-a068-fcb601984824\") " pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.217644 4991 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab8e114e-a9e2-4847-a068-fcb601984824-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-hrbsx\" (UID: \"ab8e114e-a9e2-4847-a068-fcb601984824\") " pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.217704 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab8e114e-a9e2-4847-a068-fcb601984824-config\") pod \"dnsmasq-dns-5596c69fcc-hrbsx\" (UID: \"ab8e114e-a9e2-4847-a068-fcb601984824\") " pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.217801 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab8e114e-a9e2-4847-a068-fcb601984824-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-hrbsx\" (UID: \"ab8e114e-a9e2-4847-a068-fcb601984824\") " pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.217825 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab8e114e-a9e2-4847-a068-fcb601984824-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-hrbsx\" (UID: \"ab8e114e-a9e2-4847-a068-fcb601984824\") " pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.217865 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp2k7\" (UniqueName: \"kubernetes.io/projected/ab8e114e-a9e2-4847-a068-fcb601984824-kube-api-access-rp2k7\") pod \"dnsmasq-dns-5596c69fcc-hrbsx\" (UID: \"ab8e114e-a9e2-4847-a068-fcb601984824\") " pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.217913 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ab8e114e-a9e2-4847-a068-fcb601984824-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-hrbsx\" (UID: \"ab8e114e-a9e2-4847-a068-fcb601984824\") " pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.219116 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ab8e114e-a9e2-4847-a068-fcb601984824-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-hrbsx\" (UID: \"ab8e114e-a9e2-4847-a068-fcb601984824\") " pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.219778 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab8e114e-a9e2-4847-a068-fcb601984824-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-hrbsx\" (UID: \"ab8e114e-a9e2-4847-a068-fcb601984824\") " pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.220464 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab8e114e-a9e2-4847-a068-fcb601984824-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-hrbsx\" (UID: \"ab8e114e-a9e2-4847-a068-fcb601984824\") " pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.221014 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab8e114e-a9e2-4847-a068-fcb601984824-config\") pod \"dnsmasq-dns-5596c69fcc-hrbsx\" (UID: \"ab8e114e-a9e2-4847-a068-fcb601984824\") " pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.221619 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab8e114e-a9e2-4847-a068-fcb601984824-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-hrbsx\" (UID: \"ab8e114e-a9e2-4847-a068-fcb601984824\") " pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.223255 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab8e114e-a9e2-4847-a068-fcb601984824-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-hrbsx\" (UID: \"ab8e114e-a9e2-4847-a068-fcb601984824\") " pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.245341 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp2k7\" (UniqueName: \"kubernetes.io/projected/ab8e114e-a9e2-4847-a068-fcb601984824-kube-api-access-rp2k7\") pod \"dnsmasq-dns-5596c69fcc-hrbsx\" (UID: \"ab8e114e-a9e2-4847-a068-fcb601984824\") " pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.275196 4991 generic.go:334] "Generic (PLEG): container finished" podID="1e665248-de57-45c0-8b0e-5ebc858626aa" containerID="786c8af6ca8b957cdcc9cf7e0a745774fa7bf0e571ff0db36144bdffab6cd14a" exitCode=0 Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.275245 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" event={"ID":"1e665248-de57-45c0-8b0e-5ebc858626aa","Type":"ContainerDied","Data":"786c8af6ca8b957cdcc9cf7e0a745774fa7bf0e571ff0db36144bdffab6cd14a"} Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.319529 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.493211 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.629882 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-dns-swift-storage-0\") pod \"1e665248-de57-45c0-8b0e-5ebc858626aa\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.630006 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-ovsdbserver-sb\") pod \"1e665248-de57-45c0-8b0e-5ebc858626aa\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.630066 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbzrc\" (UniqueName: \"kubernetes.io/projected/1e665248-de57-45c0-8b0e-5ebc858626aa-kube-api-access-dbzrc\") pod \"1e665248-de57-45c0-8b0e-5ebc858626aa\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.630464 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-dns-svc\") pod \"1e665248-de57-45c0-8b0e-5ebc858626aa\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.630501 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-ovsdbserver-nb\") pod \"1e665248-de57-45c0-8b0e-5ebc858626aa\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.630580 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-config\") pod \"1e665248-de57-45c0-8b0e-5ebc858626aa\" (UID: \"1e665248-de57-45c0-8b0e-5ebc858626aa\") " Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.658228 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e665248-de57-45c0-8b0e-5ebc858626aa-kube-api-access-dbzrc" (OuterVolumeSpecName: "kube-api-access-dbzrc") pod "1e665248-de57-45c0-8b0e-5ebc858626aa" (UID: "1e665248-de57-45c0-8b0e-5ebc858626aa"). InnerVolumeSpecName "kube-api-access-dbzrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.746765 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbzrc\" (UniqueName: \"kubernetes.io/projected/1e665248-de57-45c0-8b0e-5ebc858626aa-kube-api-access-dbzrc\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.790802 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1e665248-de57-45c0-8b0e-5ebc858626aa" (UID: "1e665248-de57-45c0-8b0e-5ebc858626aa"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.815610 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1e665248-de57-45c0-8b0e-5ebc858626aa" (UID: "1e665248-de57-45c0-8b0e-5ebc858626aa"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.816290 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1e665248-de57-45c0-8b0e-5ebc858626aa" (UID: "1e665248-de57-45c0-8b0e-5ebc858626aa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.816365 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-config" (OuterVolumeSpecName: "config") pod "1e665248-de57-45c0-8b0e-5ebc858626aa" (UID: "1e665248-de57-45c0-8b0e-5ebc858626aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.848485 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1e665248-de57-45c0-8b0e-5ebc858626aa" (UID: "1e665248-de57-45c0-8b0e-5ebc858626aa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.853713 4991 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.853750 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.853760 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-config\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.853769 4991 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.853778 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e665248-de57-45c0-8b0e-5ebc858626aa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:27 crc kubenswrapper[4991]: I0929 10:04:27.981231 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-hrbsx"] Sep 29 10:04:28 crc kubenswrapper[4991]: W0929 10:04:28.004489 4991 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab8e114e_a9e2_4847_a068_fcb601984824.slice/crio-aca459a0865327ad30d95003ccc9b26ae42c867484c2ab0b51e9ac6cc662ca51 WatchSource:0}: Error finding container aca459a0865327ad30d95003ccc9b26ae42c867484c2ab0b51e9ac6cc662ca51: Status 404 returned error can't find the container with id aca459a0865327ad30d95003ccc9b26ae42c867484c2ab0b51e9ac6cc662ca51 Sep 29 10:04:28 crc kubenswrapper[4991]: I0929 10:04:28.293734 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" event={"ID":"1e665248-de57-45c0-8b0e-5ebc858626aa","Type":"ContainerDied","Data":"110f285f49c4c20ca0c85f5f8ebe1c43d9e7f8c3c3402abe482ec4d8d4d237a2"} Sep 29 10:04:28 crc kubenswrapper[4991]: I0929 10:04:28.293787 4991 scope.go:117] "RemoveContainer" containerID="786c8af6ca8b957cdcc9cf7e0a745774fa7bf0e571ff0db36144bdffab6cd14a" Sep 29 10:04:28 crc kubenswrapper[4991]: I0929 10:04:28.293904 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-fj7w9" Sep 29 10:04:28 crc kubenswrapper[4991]: I0929 10:04:28.298076 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" event={"ID":"ab8e114e-a9e2-4847-a068-fcb601984824","Type":"ContainerStarted","Data":"aca459a0865327ad30d95003ccc9b26ae42c867484c2ab0b51e9ac6cc662ca51"} Sep 29 10:04:28 crc kubenswrapper[4991]: I0929 10:04:28.332916 4991 scope.go:117] "RemoveContainer" containerID="fe268abd38322fc4aaf5c7b3ac4d15b335b9033fb1be6dc4c7fb49b8e010e539" Sep 29 10:04:28 crc kubenswrapper[4991]: I0929 10:04:28.365150 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-fj7w9"] Sep 29 10:04:28 crc kubenswrapper[4991]: I0929 10:04:28.384036 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-fj7w9"] Sep 29 10:04:28 crc kubenswrapper[4991]: I0929 10:04:28.940802 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e665248-de57-45c0-8b0e-5ebc858626aa" path="/var/lib/kubelet/pods/1e665248-de57-45c0-8b0e-5ebc858626aa/volumes" Sep 29 10:04:29 crc kubenswrapper[4991]: I0929 10:04:29.323835 4991 generic.go:334] "Generic (PLEG): container finished" podID="ab8e114e-a9e2-4847-a068-fcb601984824" containerID="ff8b7174537c980b3c755caf1a01f1de9b3a9117e01cd41138bd43cd2b8f76e7" exitCode=0 Sep 29 10:04:29 crc kubenswrapper[4991]: I0929 10:04:29.323959 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" event={"ID":"ab8e114e-a9e2-4847-a068-fcb601984824","Type":"ContainerDied","Data":"ff8b7174537c980b3c755caf1a01f1de9b3a9117e01cd41138bd43cd2b8f76e7"} Sep 29 10:04:30 crc kubenswrapper[4991]: I0929 10:04:30.343246 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" event={"ID":"ab8e114e-a9e2-4847-a068-fcb601984824","Type":"ContainerStarted","Data":"23cfe302a41db93dbf152d590ec4a0e71396425902ec312357148ddfac099917"} Sep 29 10:04:30 crc kubenswrapper[4991]: I0929 10:04:30.343972 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:30 crc kubenswrapper[4991]: I0929 10:04:30.372529 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" podStartSLOduration=4.372509142 podStartE2EDuration="4.372509142s" podCreationTimestamp="2025-09-29 10:04:26 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:04:30.360432384 +0000 UTC m=+1606.216360422" watchObservedRunningTime="2025-09-29 10:04:30.372509142 +0000 UTC m=+1606.228437170" Sep 29 10:04:33 crc kubenswrapper[4991]: I0929 10:04:33.387412 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dckn7" event={"ID":"b62b4b30-b2b5-4cfa-8373-4fde7e9b2078","Type":"ContainerStarted","Data":"2122d552c07017d8693458aec76f48ae4314f2de03f25dc91d33c2e0bbd2a2c9"} Sep 29 10:04:33 crc kubenswrapper[4991]: I0929 10:04:33.403979 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-dckn7" podStartSLOduration=3.207939886 podStartE2EDuration="47.403962743s" podCreationTimestamp="2025-09-29 10:03:46 +0000 UTC" firstStartedPulling="2025-09-29 10:03:47.938846213 +0000 UTC m=+1563.794774231" lastFinishedPulling="2025-09-29 10:04:32.13486906 +0000 UTC m=+1607.990797088" observedRunningTime="2025-09-29 10:04:33.401836557 +0000 UTC m=+1609.257764585" watchObservedRunningTime="2025-09-29 10:04:33.403962743 +0000 UTC m=+1609.259890771" Sep 29 10:04:36 crc kubenswrapper[4991]: I0929 10:04:36.422641 4991 generic.go:334] "Generic (PLEG): container finished" podID="b62b4b30-b2b5-4cfa-8373-4fde7e9b2078" containerID="2122d552c07017d8693458aec76f48ae4314f2de03f25dc91d33c2e0bbd2a2c9" exitCode=0 Sep 29 10:04:36 crc kubenswrapper[4991]: I0929 10:04:36.422731 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dckn7" event={"ID":"b62b4b30-b2b5-4cfa-8373-4fde7e9b2078","Type":"ContainerDied","Data":"2122d552c07017d8693458aec76f48ae4314f2de03f25dc91d33c2e0bbd2a2c9"} Sep 29 10:04:37 crc kubenswrapper[4991]: I0929 10:04:37.321134 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5596c69fcc-hrbsx" Sep 29 10:04:37 crc kubenswrapper[4991]: I0929 10:04:37.377066 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-qfwdb"] Sep 29 10:04:37 crc kubenswrapper[4991]: I0929 10:04:37.377296 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" podUID="71a3f9e7-e685-480d-b40f-7dc03205099e" containerName="dnsmasq-dns" containerID="cri-o://2078b5789380a4a94b15e41d133f0361d4f5781e1fb7ee9cf2ea699e2c66ed4b" gracePeriod=10 Sep 29 10:04:37 crc kubenswrapper[4991]: I0929 10:04:37.946638 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:04:37 crc kubenswrapper[4991]: I0929 10:04:37.946708 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:04:37 crc kubenswrapper[4991]: I0929 10:04:37.957141 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.143400 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-dckn7" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.241161 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b62b4b30-b2b5-4cfa-8373-4fde7e9b2078-combined-ca-bundle\") pod \"b62b4b30-b2b5-4cfa-8373-4fde7e9b2078\" (UID: \"b62b4b30-b2b5-4cfa-8373-4fde7e9b2078\") " Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.241387 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b62b4b30-b2b5-4cfa-8373-4fde7e9b2078-config-data\") pod \"b62b4b30-b2b5-4cfa-8373-4fde7e9b2078\" (UID: \"b62b4b30-b2b5-4cfa-8373-4fde7e9b2078\") " Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.241530 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2jrb\" (UniqueName: \"kubernetes.io/projected/b62b4b30-b2b5-4cfa-8373-4fde7e9b2078-kube-api-access-b2jrb\") pod \"b62b4b30-b2b5-4cfa-8373-4fde7e9b2078\" (UID: \"b62b4b30-b2b5-4cfa-8373-4fde7e9b2078\") " Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.247361 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b62b4b30-b2b5-4cfa-8373-4fde7e9b2078-kube-api-access-b2jrb" (OuterVolumeSpecName: "kube-api-access-b2jrb") pod "b62b4b30-b2b5-4cfa-8373-4fde7e9b2078" (UID: "b62b4b30-b2b5-4cfa-8373-4fde7e9b2078"). InnerVolumeSpecName "kube-api-access-b2jrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.287619 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b62b4b30-b2b5-4cfa-8373-4fde7e9b2078-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b62b4b30-b2b5-4cfa-8373-4fde7e9b2078" (UID: "b62b4b30-b2b5-4cfa-8373-4fde7e9b2078"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.346003 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2jrb\" (UniqueName: \"kubernetes.io/projected/b62b4b30-b2b5-4cfa-8373-4fde7e9b2078-kube-api-access-b2jrb\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.346031 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b62b4b30-b2b5-4cfa-8373-4fde7e9b2078-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.346076 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.352047 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b62b4b30-b2b5-4cfa-8373-4fde7e9b2078-config-data" (OuterVolumeSpecName: "config-data") pod "b62b4b30-b2b5-4cfa-8373-4fde7e9b2078" (UID: "b62b4b30-b2b5-4cfa-8373-4fde7e9b2078"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.447008 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-dns-swift-storage-0\") pod \"71a3f9e7-e685-480d-b40f-7dc03205099e\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.447252 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn5v8\" (UniqueName: \"kubernetes.io/projected/71a3f9e7-e685-480d-b40f-7dc03205099e-kube-api-access-wn5v8\") pod \"71a3f9e7-e685-480d-b40f-7dc03205099e\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.447309 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-ovsdbserver-nb\") pod \"71a3f9e7-e685-480d-b40f-7dc03205099e\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.447356 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-config\") pod \"71a3f9e7-e685-480d-b40f-7dc03205099e\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.447379 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-ovsdbserver-sb\") pod \"71a3f9e7-e685-480d-b40f-7dc03205099e\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.447463 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-dns-svc\") pod \"71a3f9e7-e685-480d-b40f-7dc03205099e\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.447502 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-openstack-edpm-ipam\") pod \"71a3f9e7-e685-480d-b40f-7dc03205099e\" (UID: \"71a3f9e7-e685-480d-b40f-7dc03205099e\") " Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.447822 4991 generic.go:334] "Generic (PLEG): container finished" podID="71a3f9e7-e685-480d-b40f-7dc03205099e" containerID="2078b5789380a4a94b15e41d133f0361d4f5781e1fb7ee9cf2ea699e2c66ed4b" exitCode=0 Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.447943 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" event={"ID":"71a3f9e7-e685-480d-b40f-7dc03205099e","Type":"ContainerDied","Data":"2078b5789380a4a94b15e41d133f0361d4f5781e1fb7ee9cf2ea699e2c66ed4b"} Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.447967 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.448000 4991 scope.go:117] "RemoveContainer" containerID="2078b5789380a4a94b15e41d133f0361d4f5781e1fb7ee9cf2ea699e2c66ed4b" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.447988 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-qfwdb" event={"ID":"71a3f9e7-e685-480d-b40f-7dc03205099e","Type":"ContainerDied","Data":"3c60b3af7e7edb248f67232a5d4398d06ef069f27efc52993ec49f2786889fc8"} Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.449514 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b62b4b30-b2b5-4cfa-8373-4fde7e9b2078-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.449643 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dckn7" event={"ID":"b62b4b30-b2b5-4cfa-8373-4fde7e9b2078","Type":"ContainerDied","Data":"578ce2ac8a15e645c352063788024cbb78308d5d97682c533c4a6d33490673c1"} Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.449663 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="578ce2ac8a15e645c352063788024cbb78308d5d97682c533c4a6d33490673c1" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.449713 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-dckn7" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.451740 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71a3f9e7-e685-480d-b40f-7dc03205099e-kube-api-access-wn5v8" (OuterVolumeSpecName: "kube-api-access-wn5v8") pod "71a3f9e7-e685-480d-b40f-7dc03205099e" (UID: "71a3f9e7-e685-480d-b40f-7dc03205099e"). InnerVolumeSpecName "kube-api-access-wn5v8". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.484486 4991 scope.go:117] "RemoveContainer" containerID="d1b00d6a02b4849662c1fd8e99794febeee694ccfd2e51d75289150abeb151b7" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.531083 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "71a3f9e7-e685-480d-b40f-7dc03205099e" (UID: "71a3f9e7-e685-480d-b40f-7dc03205099e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.532206 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "71a3f9e7-e685-480d-b40f-7dc03205099e" (UID: "71a3f9e7-e685-480d-b40f-7dc03205099e"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.539627 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "71a3f9e7-e685-480d-b40f-7dc03205099e" (UID: "71a3f9e7-e685-480d-b40f-7dc03205099e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.542253 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "71a3f9e7-e685-480d-b40f-7dc03205099e" (UID: "71a3f9e7-e685-480d-b40f-7dc03205099e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.552895 4991 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.552932 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn5v8\" (UniqueName: \"kubernetes.io/projected/71a3f9e7-e685-480d-b40f-7dc03205099e-kube-api-access-wn5v8\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.552942 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.552971 4991 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.552980 4991 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.559200 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-config" (OuterVolumeSpecName: "config") pod "71a3f9e7-e685-480d-b40f-7dc03205099e" (UID: "71a3f9e7-e685-480d-b40f-7dc03205099e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.563507 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "71a3f9e7-e685-480d-b40f-7dc03205099e" (UID: "71a3f9e7-e685-480d-b40f-7dc03205099e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.632789 4991 scope.go:117] "RemoveContainer" containerID="2078b5789380a4a94b15e41d133f0361d4f5781e1fb7ee9cf2ea699e2c66ed4b" Sep 29 10:04:38 crc kubenswrapper[4991]: E0929 10:04:38.634514 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2078b5789380a4a94b15e41d133f0361d4f5781e1fb7ee9cf2ea699e2c66ed4b\": container with ID starting with 2078b5789380a4a94b15e41d133f0361d4f5781e1fb7ee9cf2ea699e2c66ed4b not found: ID does not exist" containerID="2078b5789380a4a94b15e41d133f0361d4f5781e1fb7ee9cf2ea699e2c66ed4b" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.634548 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2078b5789380a4a94b15e41d133f0361d4f5781e1fb7ee9cf2ea699e2c66ed4b"} err="failed to get container status \"2078b5789380a4a94b15e41d133f0361d4f5781e1fb7ee9cf2ea699e2c66ed4b\": rpc error: code = NotFound desc = could not find container \"2078b5789380a4a94b15e41d133f0361d4f5781e1fb7ee9cf2ea699e2c66ed4b\": container with ID starting with 2078b5789380a4a94b15e41d133f0361d4f5781e1fb7ee9cf2ea699e2c66ed4b not found: ID does not exist" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.634625 4991 scope.go:117] "RemoveContainer" containerID="d1b00d6a02b4849662c1fd8e99794febeee694ccfd2e51d75289150abeb151b7" Sep 29 10:04:38 crc kubenswrapper[4991]: E0929 10:04:38.635124 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1b00d6a02b4849662c1fd8e99794febeee694ccfd2e51d75289150abeb151b7\": container with ID starting with d1b00d6a02b4849662c1fd8e99794febeee694ccfd2e51d75289150abeb151b7 not found: ID does not exist" containerID="d1b00d6a02b4849662c1fd8e99794febeee694ccfd2e51d75289150abeb151b7" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.635176 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1b00d6a02b4849662c1fd8e99794febeee694ccfd2e51d75289150abeb151b7"} err="failed to get container status \"d1b00d6a02b4849662c1fd8e99794febeee694ccfd2e51d75289150abeb151b7\": rpc error: code = NotFound desc = could not find container \"d1b00d6a02b4849662c1fd8e99794febeee694ccfd2e51d75289150abeb151b7\": container with ID starting with d1b00d6a02b4849662c1fd8e99794febeee694ccfd2e51d75289150abeb151b7 not found: ID does not exist" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.655342 4991 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-config\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.655370 4991 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71a3f9e7-e685-480d-b40f-7dc03205099e-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.799221 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-qfwdb"] Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.818789 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-qfwdb"] Sep 29 10:04:38 crc kubenswrapper[4991]: I0929 10:04:38.939044 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71a3f9e7-e685-480d-b40f-7dc03205099e" 
path="/var/lib/kubelet/pods/71a3f9e7-e685-480d-b40f-7dc03205099e/volumes" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.463757 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b194689f-6d7e-47aa-b796-7f0e959ce6b1","Type":"ContainerStarted","Data":"e49ff916dc5fe627045fc747684e75a012394fdaa7f6e3b17a05ee10b0771c14"} Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.506452 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.971803505 podStartE2EDuration="45.506420273s" podCreationTimestamp="2025-09-29 10:03:54 +0000 UTC" firstStartedPulling="2025-09-29 10:03:55.637493699 +0000 UTC m=+1571.493421727" lastFinishedPulling="2025-09-29 10:04:38.172110467 +0000 UTC m=+1614.028038495" observedRunningTime="2025-09-29 10:04:39.491389078 +0000 UTC m=+1615.347317126" watchObservedRunningTime="2025-09-29 10:04:39.506420273 +0000 UTC m=+1615.362348301" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.727185 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-64f8dfd87f-r5cr8"] Sep 29 10:04:39 crc kubenswrapper[4991]: E0929 10:04:39.731106 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a3f9e7-e685-480d-b40f-7dc03205099e" containerName="init" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.731153 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a3f9e7-e685-480d-b40f-7dc03205099e" containerName="init" Sep 29 10:04:39 crc kubenswrapper[4991]: E0929 10:04:39.731229 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e665248-de57-45c0-8b0e-5ebc858626aa" containerName="init" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.731242 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e665248-de57-45c0-8b0e-5ebc858626aa" containerName="init" Sep 29 10:04:39 crc kubenswrapper[4991]: E0929 10:04:39.731291 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a3f9e7-e685-480d-b40f-7dc03205099e" containerName="dnsmasq-dns" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.731302 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a3f9e7-e685-480d-b40f-7dc03205099e" containerName="dnsmasq-dns" Sep 29 10:04:39 crc kubenswrapper[4991]: E0929 10:04:39.731337 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e665248-de57-45c0-8b0e-5ebc858626aa" containerName="dnsmasq-dns" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.731347 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e665248-de57-45c0-8b0e-5ebc858626aa" containerName="dnsmasq-dns" Sep 29 10:04:39 crc kubenswrapper[4991]: E0929 10:04:39.731401 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b62b4b30-b2b5-4cfa-8373-4fde7e9b2078" containerName="heat-db-sync" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.731411 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="b62b4b30-b2b5-4cfa-8373-4fde7e9b2078" containerName="heat-db-sync" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.736990 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="71a3f9e7-e685-480d-b40f-7dc03205099e" containerName="dnsmasq-dns" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.737059 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e665248-de57-45c0-8b0e-5ebc858626aa" containerName="dnsmasq-dns" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.737157 4991 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="b62b4b30-b2b5-4cfa-8373-4fde7e9b2078" containerName="heat-db-sync" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.749380 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-64f8dfd87f-r5cr8" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.817026 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-64f8dfd87f-r5cr8"] Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.820930 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-65b855f985-v4s5j"] Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.824427 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.835738 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-65b855f985-v4s5j"] Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.858042 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6548597c77-4fgjb"] Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.859577 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.871835 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6548597c77-4fgjb"] Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.901181 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a76469-fffa-42c3-9d5d-3202e335a666-config-data\") pod \"heat-engine-64f8dfd87f-r5cr8\" (UID: \"e7a76469-fffa-42c3-9d5d-3202e335a666\") " pod="openstack/heat-engine-64f8dfd87f-r5cr8" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.901271 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsq66\" (UniqueName: \"kubernetes.io/projected/a4195301-45cb-494f-82a8-756730309e65-kube-api-access-vsq66\") pod \"heat-cfnapi-65b855f985-v4s5j\" (UID: \"a4195301-45cb-494f-82a8-756730309e65\") " pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.901308 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzs8h\" (UniqueName: \"kubernetes.io/projected/539b6968-90d6-45cc-ab26-cf0ab8ed31f6-kube-api-access-qzs8h\") pod \"heat-api-6548597c77-4fgjb\" (UID: \"539b6968-90d6-45cc-ab26-cf0ab8ed31f6\") " pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.901358 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/539b6968-90d6-45cc-ab26-cf0ab8ed31f6-combined-ca-bundle\") pod \"heat-api-6548597c77-4fgjb\" (UID: \"539b6968-90d6-45cc-ab26-cf0ab8ed31f6\") " pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.901498 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/539b6968-90d6-45cc-ab26-cf0ab8ed31f6-public-tls-certs\") pod \"heat-api-6548597c77-4fgjb\" (UID: \"539b6968-90d6-45cc-ab26-cf0ab8ed31f6\") " pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:39 crc kubenswrapper[4991]: 
I0929 10:04:39.901540 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a76469-fffa-42c3-9d5d-3202e335a666-combined-ca-bundle\") pod \"heat-engine-64f8dfd87f-r5cr8\" (UID: \"e7a76469-fffa-42c3-9d5d-3202e335a666\") " pod="openstack/heat-engine-64f8dfd87f-r5cr8" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.901564 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t65ps\" (UniqueName: \"kubernetes.io/projected/e7a76469-fffa-42c3-9d5d-3202e335a666-kube-api-access-t65ps\") pod \"heat-engine-64f8dfd87f-r5cr8\" (UID: \"e7a76469-fffa-42c3-9d5d-3202e335a666\") " pod="openstack/heat-engine-64f8dfd87f-r5cr8" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.901580 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4195301-45cb-494f-82a8-756730309e65-combined-ca-bundle\") pod \"heat-cfnapi-65b855f985-v4s5j\" (UID: \"a4195301-45cb-494f-82a8-756730309e65\") " pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.901614 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/539b6968-90d6-45cc-ab26-cf0ab8ed31f6-config-data\") pod \"heat-api-6548597c77-4fgjb\" (UID: \"539b6968-90d6-45cc-ab26-cf0ab8ed31f6\") " pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.901631 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/539b6968-90d6-45cc-ab26-cf0ab8ed31f6-config-data-custom\") pod \"heat-api-6548597c77-4fgjb\" (UID: \"539b6968-90d6-45cc-ab26-cf0ab8ed31f6\") " pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.901657 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/539b6968-90d6-45cc-ab26-cf0ab8ed31f6-internal-tls-certs\") pod \"heat-api-6548597c77-4fgjb\" (UID: \"539b6968-90d6-45cc-ab26-cf0ab8ed31f6\") " pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.901731 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4195301-45cb-494f-82a8-756730309e65-config-data\") pod \"heat-cfnapi-65b855f985-v4s5j\" (UID: \"a4195301-45cb-494f-82a8-756730309e65\") " pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.901765 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4195301-45cb-494f-82a8-756730309e65-public-tls-certs\") pod \"heat-cfnapi-65b855f985-v4s5j\" (UID: \"a4195301-45cb-494f-82a8-756730309e65\") " pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.901871 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4195301-45cb-494f-82a8-756730309e65-internal-tls-certs\") pod \"heat-cfnapi-65b855f985-v4s5j\" (UID: 
\"a4195301-45cb-494f-82a8-756730309e65\") " pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.901906 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7a76469-fffa-42c3-9d5d-3202e335a666-config-data-custom\") pod \"heat-engine-64f8dfd87f-r5cr8\" (UID: \"e7a76469-fffa-42c3-9d5d-3202e335a666\") " pod="openstack/heat-engine-64f8dfd87f-r5cr8" Sep 29 10:04:39 crc kubenswrapper[4991]: I0929 10:04:39.902036 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4195301-45cb-494f-82a8-756730309e65-config-data-custom\") pod \"heat-cfnapi-65b855f985-v4s5j\" (UID: \"a4195301-45cb-494f-82a8-756730309e65\") " pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.004654 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a76469-fffa-42c3-9d5d-3202e335a666-config-data\") pod \"heat-engine-64f8dfd87f-r5cr8\" (UID: \"e7a76469-fffa-42c3-9d5d-3202e335a666\") " pod="openstack/heat-engine-64f8dfd87f-r5cr8" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.004747 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsq66\" (UniqueName: \"kubernetes.io/projected/a4195301-45cb-494f-82a8-756730309e65-kube-api-access-vsq66\") pod \"heat-cfnapi-65b855f985-v4s5j\" (UID: \"a4195301-45cb-494f-82a8-756730309e65\") " pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.004765 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzs8h\" (UniqueName: \"kubernetes.io/projected/539b6968-90d6-45cc-ab26-cf0ab8ed31f6-kube-api-access-qzs8h\") pod \"heat-api-6548597c77-4fgjb\" (UID: \"539b6968-90d6-45cc-ab26-cf0ab8ed31f6\") " pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.004817 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/539b6968-90d6-45cc-ab26-cf0ab8ed31f6-combined-ca-bundle\") pod \"heat-api-6548597c77-4fgjb\" (UID: \"539b6968-90d6-45cc-ab26-cf0ab8ed31f6\") " pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.004870 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/539b6968-90d6-45cc-ab26-cf0ab8ed31f6-public-tls-certs\") pod \"heat-api-6548597c77-4fgjb\" (UID: \"539b6968-90d6-45cc-ab26-cf0ab8ed31f6\") " pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.004898 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a76469-fffa-42c3-9d5d-3202e335a666-combined-ca-bundle\") pod \"heat-engine-64f8dfd87f-r5cr8\" (UID: \"e7a76469-fffa-42c3-9d5d-3202e335a666\") " pod="openstack/heat-engine-64f8dfd87f-r5cr8" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.004927 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t65ps\" (UniqueName: \"kubernetes.io/projected/e7a76469-fffa-42c3-9d5d-3202e335a666-kube-api-access-t65ps\") pod 
\"heat-engine-64f8dfd87f-r5cr8\" (UID: \"e7a76469-fffa-42c3-9d5d-3202e335a666\") " pod="openstack/heat-engine-64f8dfd87f-r5cr8" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.004974 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4195301-45cb-494f-82a8-756730309e65-combined-ca-bundle\") pod \"heat-cfnapi-65b855f985-v4s5j\" (UID: \"a4195301-45cb-494f-82a8-756730309e65\") " pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.005074 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/539b6968-90d6-45cc-ab26-cf0ab8ed31f6-config-data-custom\") pod \"heat-api-6548597c77-4fgjb\" (UID: \"539b6968-90d6-45cc-ab26-cf0ab8ed31f6\") " pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.005097 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/539b6968-90d6-45cc-ab26-cf0ab8ed31f6-config-data\") pod \"heat-api-6548597c77-4fgjb\" (UID: \"539b6968-90d6-45cc-ab26-cf0ab8ed31f6\") " pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.005127 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/539b6968-90d6-45cc-ab26-cf0ab8ed31f6-internal-tls-certs\") pod \"heat-api-6548597c77-4fgjb\" (UID: \"539b6968-90d6-45cc-ab26-cf0ab8ed31f6\") " pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.005172 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4195301-45cb-494f-82a8-756730309e65-config-data\") pod \"heat-cfnapi-65b855f985-v4s5j\" (UID: \"a4195301-45cb-494f-82a8-756730309e65\") " pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.005207 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4195301-45cb-494f-82a8-756730309e65-public-tls-certs\") pod \"heat-cfnapi-65b855f985-v4s5j\" (UID: \"a4195301-45cb-494f-82a8-756730309e65\") " pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.005284 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4195301-45cb-494f-82a8-756730309e65-internal-tls-certs\") pod \"heat-cfnapi-65b855f985-v4s5j\" (UID: \"a4195301-45cb-494f-82a8-756730309e65\") " pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.005319 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7a76469-fffa-42c3-9d5d-3202e335a666-config-data-custom\") pod \"heat-engine-64f8dfd87f-r5cr8\" (UID: \"e7a76469-fffa-42c3-9d5d-3202e335a666\") " pod="openstack/heat-engine-64f8dfd87f-r5cr8" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.005394 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4195301-45cb-494f-82a8-756730309e65-config-data-custom\") pod \"heat-cfnapi-65b855f985-v4s5j\" (UID: 
\"a4195301-45cb-494f-82a8-756730309e65\") " pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.014203 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4195301-45cb-494f-82a8-756730309e65-config-data-custom\") pod \"heat-cfnapi-65b855f985-v4s5j\" (UID: \"a4195301-45cb-494f-82a8-756730309e65\") " pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.015390 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/539b6968-90d6-45cc-ab26-cf0ab8ed31f6-config-data\") pod \"heat-api-6548597c77-4fgjb\" (UID: \"539b6968-90d6-45cc-ab26-cf0ab8ed31f6\") " pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.016240 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4195301-45cb-494f-82a8-756730309e65-combined-ca-bundle\") pod \"heat-cfnapi-65b855f985-v4s5j\" (UID: \"a4195301-45cb-494f-82a8-756730309e65\") " pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.016460 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/539b6968-90d6-45cc-ab26-cf0ab8ed31f6-public-tls-certs\") pod \"heat-api-6548597c77-4fgjb\" (UID: \"539b6968-90d6-45cc-ab26-cf0ab8ed31f6\") " pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.016489 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7a76469-fffa-42c3-9d5d-3202e335a666-config-data-custom\") pod \"heat-engine-64f8dfd87f-r5cr8\" (UID: \"e7a76469-fffa-42c3-9d5d-3202e335a666\") " pod="openstack/heat-engine-64f8dfd87f-r5cr8" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.017677 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a76469-fffa-42c3-9d5d-3202e335a666-config-data\") pod \"heat-engine-64f8dfd87f-r5cr8\" (UID: \"e7a76469-fffa-42c3-9d5d-3202e335a666\") " pod="openstack/heat-engine-64f8dfd87f-r5cr8" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.019750 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4195301-45cb-494f-82a8-756730309e65-public-tls-certs\") pod \"heat-cfnapi-65b855f985-v4s5j\" (UID: \"a4195301-45cb-494f-82a8-756730309e65\") " pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.020253 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/539b6968-90d6-45cc-ab26-cf0ab8ed31f6-internal-tls-certs\") pod \"heat-api-6548597c77-4fgjb\" (UID: \"539b6968-90d6-45cc-ab26-cf0ab8ed31f6\") " pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.020895 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4195301-45cb-494f-82a8-756730309e65-config-data\") pod \"heat-cfnapi-65b855f985-v4s5j\" (UID: \"a4195301-45cb-494f-82a8-756730309e65\") " pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 
10:04:40.025520 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/539b6968-90d6-45cc-ab26-cf0ab8ed31f6-config-data-custom\") pod \"heat-api-6548597c77-4fgjb\" (UID: \"539b6968-90d6-45cc-ab26-cf0ab8ed31f6\") " pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.028653 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/539b6968-90d6-45cc-ab26-cf0ab8ed31f6-combined-ca-bundle\") pod \"heat-api-6548597c77-4fgjb\" (UID: \"539b6968-90d6-45cc-ab26-cf0ab8ed31f6\") " pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.029920 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzs8h\" (UniqueName: \"kubernetes.io/projected/539b6968-90d6-45cc-ab26-cf0ab8ed31f6-kube-api-access-qzs8h\") pod \"heat-api-6548597c77-4fgjb\" (UID: \"539b6968-90d6-45cc-ab26-cf0ab8ed31f6\") " pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.030466 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a76469-fffa-42c3-9d5d-3202e335a666-combined-ca-bundle\") pod \"heat-engine-64f8dfd87f-r5cr8\" (UID: \"e7a76469-fffa-42c3-9d5d-3202e335a666\") " pod="openstack/heat-engine-64f8dfd87f-r5cr8" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.031166 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t65ps\" (UniqueName: \"kubernetes.io/projected/e7a76469-fffa-42c3-9d5d-3202e335a666-kube-api-access-t65ps\") pod \"heat-engine-64f8dfd87f-r5cr8\" (UID: \"e7a76469-fffa-42c3-9d5d-3202e335a666\") " pod="openstack/heat-engine-64f8dfd87f-r5cr8" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.031331 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsq66\" (UniqueName: \"kubernetes.io/projected/a4195301-45cb-494f-82a8-756730309e65-kube-api-access-vsq66\") pod \"heat-cfnapi-65b855f985-v4s5j\" (UID: \"a4195301-45cb-494f-82a8-756730309e65\") " pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.055793 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4195301-45cb-494f-82a8-756730309e65-internal-tls-certs\") pod \"heat-cfnapi-65b855f985-v4s5j\" (UID: \"a4195301-45cb-494f-82a8-756730309e65\") " pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.117398 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-64f8dfd87f-r5cr8" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.156356 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.195999 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:40 crc kubenswrapper[4991]: W0929 10:04:40.851796 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7a76469_fffa_42c3_9d5d_3202e335a666.slice/crio-0c4de46a267e8e76f2b6e9b405795dcae563dd7a4d00bdb0e6a84b6afe241601 WatchSource:0}: Error finding container 0c4de46a267e8e76f2b6e9b405795dcae563dd7a4d00bdb0e6a84b6afe241601: Status 404 returned error can't find the container with id 0c4de46a267e8e76f2b6e9b405795dcae563dd7a4d00bdb0e6a84b6afe241601 Sep 29 10:04:40 crc kubenswrapper[4991]: I0929 10:04:40.856200 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-64f8dfd87f-r5cr8"] Sep 29 10:04:41 crc kubenswrapper[4991]: I0929 10:04:41.005733 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6548597c77-4fgjb"] Sep 29 10:04:41 crc kubenswrapper[4991]: I0929 10:04:41.026600 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-65b855f985-v4s5j"] Sep 29 10:04:41 crc kubenswrapper[4991]: W0929 10:04:41.032929 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod539b6968_90d6_45cc_ab26_cf0ab8ed31f6.slice/crio-fdc651d92d33def2e1d7bbba5dafd409abed62c7c6078360ea6e2b165600d711 WatchSource:0}: Error finding container fdc651d92d33def2e1d7bbba5dafd409abed62c7c6078360ea6e2b165600d711: Status 404 returned error can't find the container with id fdc651d92d33def2e1d7bbba5dafd409abed62c7c6078360ea6e2b165600d711 Sep 29 10:04:41 crc kubenswrapper[4991]: I0929 10:04:41.502316 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-64f8dfd87f-r5cr8" event={"ID":"e7a76469-fffa-42c3-9d5d-3202e335a666","Type":"ContainerStarted","Data":"4b8edffe2bf225aae905a5a85a94aa19e8bdac4d8bf30325b6a26609f2731eb5"} Sep 29 10:04:41 crc kubenswrapper[4991]: I0929 10:04:41.502658 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-64f8dfd87f-r5cr8" event={"ID":"e7a76469-fffa-42c3-9d5d-3202e335a666","Type":"ContainerStarted","Data":"0c4de46a267e8e76f2b6e9b405795dcae563dd7a4d00bdb0e6a84b6afe241601"} Sep 29 10:04:41 crc kubenswrapper[4991]: I0929 10:04:41.502855 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-64f8dfd87f-r5cr8" Sep 29 10:04:41 crc kubenswrapper[4991]: I0929 10:04:41.507493 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6548597c77-4fgjb" event={"ID":"539b6968-90d6-45cc-ab26-cf0ab8ed31f6","Type":"ContainerStarted","Data":"fdc651d92d33def2e1d7bbba5dafd409abed62c7c6078360ea6e2b165600d711"} Sep 29 10:04:41 crc kubenswrapper[4991]: I0929 10:04:41.509155 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65b855f985-v4s5j" event={"ID":"a4195301-45cb-494f-82a8-756730309e65","Type":"ContainerStarted","Data":"e45ea53a9bd9298a70d4720f90d1e300d765caca3e7fe84c9f853fe887b90b18"} Sep 29 10:04:41 crc kubenswrapper[4991]: I0929 10:04:41.520492 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-64f8dfd87f-r5cr8" podStartSLOduration=2.520476522 podStartE2EDuration="2.520476522s" podCreationTimestamp="2025-09-29 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:04:41.517594976 +0000 UTC 
m=+1617.373523004" watchObservedRunningTime="2025-09-29 10:04:41.520476522 +0000 UTC m=+1617.376404550" Sep 29 10:04:45 crc kubenswrapper[4991]: I0929 10:04:45.571574 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65b855f985-v4s5j" event={"ID":"a4195301-45cb-494f-82a8-756730309e65","Type":"ContainerStarted","Data":"f652c2fe16e0e494d33204c6582da8601744bce1f56fb5902680214eb44dabd0"} Sep 29 10:04:45 crc kubenswrapper[4991]: I0929 10:04:45.572269 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:45 crc kubenswrapper[4991]: I0929 10:04:45.575227 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6548597c77-4fgjb" event={"ID":"539b6968-90d6-45cc-ab26-cf0ab8ed31f6","Type":"ContainerStarted","Data":"832bf8a580bafbf82bea64fcacd0686bcf8953db23bdf859b2301d45b45008cd"} Sep 29 10:04:45 crc kubenswrapper[4991]: I0929 10:04:45.575368 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:45 crc kubenswrapper[4991]: I0929 10:04:45.603797 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-65b855f985-v4s5j" podStartSLOduration=3.321198804 podStartE2EDuration="6.603774498s" podCreationTimestamp="2025-09-29 10:04:39 +0000 UTC" firstStartedPulling="2025-09-29 10:04:41.038509904 +0000 UTC m=+1616.894437932" lastFinishedPulling="2025-09-29 10:04:44.321085598 +0000 UTC m=+1620.177013626" observedRunningTime="2025-09-29 10:04:45.588265901 +0000 UTC m=+1621.444193949" watchObservedRunningTime="2025-09-29 10:04:45.603774498 +0000 UTC m=+1621.459702526" Sep 29 10:04:45 crc kubenswrapper[4991]: I0929 10:04:45.617585 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6548597c77-4fgjb" podStartSLOduration=3.336129355 podStartE2EDuration="6.61756333s" podCreationTimestamp="2025-09-29 10:04:39 +0000 UTC" firstStartedPulling="2025-09-29 10:04:41.036669335 +0000 UTC m=+1616.892597373" lastFinishedPulling="2025-09-29 10:04:44.31810332 +0000 UTC m=+1620.174031348" observedRunningTime="2025-09-29 10:04:45.607928037 +0000 UTC m=+1621.463856075" watchObservedRunningTime="2025-09-29 10:04:45.61756333 +0000 UTC m=+1621.473491358" Sep 29 10:04:47 crc kubenswrapper[4991]: I0929 10:04:47.297221 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r"] Sep 29 10:04:47 crc kubenswrapper[4991]: I0929 10:04:47.299780 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r" Sep 29 10:04:47 crc kubenswrapper[4991]: I0929 10:04:47.302228 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 29 10:04:47 crc kubenswrapper[4991]: I0929 10:04:47.302228 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vtkz9" Sep 29 10:04:47 crc kubenswrapper[4991]: I0929 10:04:47.302620 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 29 10:04:47 crc kubenswrapper[4991]: I0929 10:04:47.304417 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 29 10:04:47 crc kubenswrapper[4991]: I0929 10:04:47.309889 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r"] Sep 29 10:04:47 crc kubenswrapper[4991]: I0929 10:04:47.347346 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5729608-f9e1-4a51-8fa4-682be8932037-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r\" (UID: \"e5729608-f9e1-4a51-8fa4-682be8932037\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r" Sep 29 10:04:47 crc kubenswrapper[4991]: I0929 10:04:47.347416 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5729608-f9e1-4a51-8fa4-682be8932037-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r\" (UID: \"e5729608-f9e1-4a51-8fa4-682be8932037\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r" Sep 29 10:04:47 crc kubenswrapper[4991]: I0929 10:04:47.347504 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldn8t\" (UniqueName: \"kubernetes.io/projected/e5729608-f9e1-4a51-8fa4-682be8932037-kube-api-access-ldn8t\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r\" (UID: \"e5729608-f9e1-4a51-8fa4-682be8932037\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r" Sep 29 10:04:47 crc kubenswrapper[4991]: I0929 10:04:47.347558 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5729608-f9e1-4a51-8fa4-682be8932037-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r\" (UID: \"e5729608-f9e1-4a51-8fa4-682be8932037\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r" Sep 29 10:04:47 crc kubenswrapper[4991]: I0929 10:04:47.449463 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldn8t\" (UniqueName: \"kubernetes.io/projected/e5729608-f9e1-4a51-8fa4-682be8932037-kube-api-access-ldn8t\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r\" (UID: \"e5729608-f9e1-4a51-8fa4-682be8932037\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r" Sep 29 10:04:47 crc kubenswrapper[4991]: I0929 10:04:47.449571 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5729608-f9e1-4a51-8fa4-682be8932037-inventory\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r\" (UID: \"e5729608-f9e1-4a51-8fa4-682be8932037\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r" Sep 29 10:04:47 crc kubenswrapper[4991]: I0929 10:04:47.449807 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5729608-f9e1-4a51-8fa4-682be8932037-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r\" (UID: \"e5729608-f9e1-4a51-8fa4-682be8932037\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r" Sep 29 10:04:47 crc kubenswrapper[4991]: I0929 10:04:47.449861 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5729608-f9e1-4a51-8fa4-682be8932037-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r\" (UID: \"e5729608-f9e1-4a51-8fa4-682be8932037\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r" Sep 29 10:04:47 crc kubenswrapper[4991]: I0929 10:04:47.461029 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5729608-f9e1-4a51-8fa4-682be8932037-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r\" (UID: \"e5729608-f9e1-4a51-8fa4-682be8932037\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r" Sep 29 10:04:47 crc kubenswrapper[4991]: I0929 10:04:47.466524 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5729608-f9e1-4a51-8fa4-682be8932037-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r\" (UID: \"e5729608-f9e1-4a51-8fa4-682be8932037\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r" Sep 29 10:04:47 crc kubenswrapper[4991]: I0929 10:04:47.490849 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldn8t\" (UniqueName: \"kubernetes.io/projected/e5729608-f9e1-4a51-8fa4-682be8932037-kube-api-access-ldn8t\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r\" (UID: \"e5729608-f9e1-4a51-8fa4-682be8932037\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r" Sep 29 10:04:47 crc kubenswrapper[4991]: I0929 10:04:47.492576 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5729608-f9e1-4a51-8fa4-682be8932037-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r\" (UID: \"e5729608-f9e1-4a51-8fa4-682be8932037\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r" Sep 29 10:04:47 crc kubenswrapper[4991]: I0929 10:04:47.642634 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r" Sep 29 10:04:48 crc kubenswrapper[4991]: I0929 10:04:48.489825 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r"] Sep 29 10:04:48 crc kubenswrapper[4991]: I0929 10:04:48.606256 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r" event={"ID":"e5729608-f9e1-4a51-8fa4-682be8932037","Type":"ContainerStarted","Data":"8331f0d9b6796b734a1f2b2301a8c13cf1e452fab44cc966ea9d16a89863b3da"} Sep 29 10:04:51 crc kubenswrapper[4991]: I0929 10:04:51.649833 4991 generic.go:334] "Generic (PLEG): container finished" podID="bf158d99-d08f-4ce7-b60d-2c685d55a6f7" containerID="213dae745aae283ea0512fce555f5b349164bc68b7c38f1d0968c04aa0e4ea33" exitCode=0 Sep 29 10:04:51 crc kubenswrapper[4991]: I0929 10:04:51.649930 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf158d99-d08f-4ce7-b60d-2c685d55a6f7","Type":"ContainerDied","Data":"213dae745aae283ea0512fce555f5b349164bc68b7c38f1d0968c04aa0e4ea33"} Sep 29 10:04:51 crc kubenswrapper[4991]: I0929 10:04:51.654477 4991 generic.go:334] "Generic (PLEG): container finished" podID="e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0" containerID="50901b4a4a421e7f07efe981a38b57fe5479cfae62c8836be853fe9e233edf5d" exitCode=0 Sep 29 10:04:51 crc kubenswrapper[4991]: I0929 10:04:51.654522 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0","Type":"ContainerDied","Data":"50901b4a4a421e7f07efe981a38b57fe5479cfae62c8836be853fe9e233edf5d"} Sep 29 10:04:52 crc kubenswrapper[4991]: I0929 10:04:52.386781 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6548597c77-4fgjb" Sep 29 10:04:52 crc kubenswrapper[4991]: I0929 10:04:52.392708 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-65b855f985-v4s5j" Sep 29 10:04:52 crc kubenswrapper[4991]: I0929 10:04:52.472525 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-645f5d8c84-9bf84"] Sep 29 10:04:52 crc kubenswrapper[4991]: I0929 10:04:52.472810 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-645f5d8c84-9bf84" podUID="5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87" containerName="heat-api" containerID="cri-o://a0a7c75294a753f8c07c39a72c9a219139dd1931cfdb11c3299c1322ddf1e811" gracePeriod=60 Sep 29 10:04:52 crc kubenswrapper[4991]: I0929 10:04:52.504570 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-84db4bfdbb-5j5lm"] Sep 29 10:04:52 crc kubenswrapper[4991]: I0929 10:04:52.504782 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm" podUID="4a949473-c3a5-4081-be57-a23bbaaa098e" containerName="heat-cfnapi" containerID="cri-o://8959703c1ae4ae8f5bf1100ea0d579decda39a32617adfbed676c41e7de4b287" gracePeriod=60 Sep 29 10:04:52 crc kubenswrapper[4991]: I0929 10:04:52.679518 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf158d99-d08f-4ce7-b60d-2c685d55a6f7","Type":"ContainerStarted","Data":"605f8a6d4e19f0802bf1c5b3bc61fb09ea65c2c4e59256d821766edb0839458f"} Sep 29 10:04:52 crc kubenswrapper[4991]: I0929 10:04:52.681331 4991 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/rabbitmq-server-0" Sep 29 10:04:52 crc kubenswrapper[4991]: I0929 10:04:52.686012 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0","Type":"ContainerStarted","Data":"62d7527596ffa0894efe8ba413de9eed57758258e6ed627be48f88e13e7c3bba"} Sep 29 10:04:52 crc kubenswrapper[4991]: I0929 10:04:52.686363 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:04:52 crc kubenswrapper[4991]: I0929 10:04:52.722098 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=35.722076167 podStartE2EDuration="35.722076167s" podCreationTimestamp="2025-09-29 10:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:04:52.710980696 +0000 UTC m=+1628.566908744" watchObservedRunningTime="2025-09-29 10:04:52.722076167 +0000 UTC m=+1628.578004205" Sep 29 10:04:54 crc kubenswrapper[4991]: I0929 10:04:54.954641 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=46.954611904 podStartE2EDuration="46.954611904s" podCreationTimestamp="2025-09-29 10:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:04:52.739293699 +0000 UTC m=+1628.595221727" watchObservedRunningTime="2025-09-29 10:04:54.954611904 +0000 UTC m=+1630.810539942" Sep 29 10:04:55 crc kubenswrapper[4991]: I0929 10:04:55.716462 4991 generic.go:334] "Generic (PLEG): container finished" podID="5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87" containerID="a0a7c75294a753f8c07c39a72c9a219139dd1931cfdb11c3299c1322ddf1e811" exitCode=0 Sep 29 10:04:55 crc kubenswrapper[4991]: I0929 10:04:55.716634 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-645f5d8c84-9bf84" event={"ID":"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87","Type":"ContainerDied","Data":"a0a7c75294a753f8c07c39a72c9a219139dd1931cfdb11c3299c1322ddf1e811"} Sep 29 10:04:56 crc kubenswrapper[4991]: I0929 10:04:56.227284 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-645f5d8c84-9bf84" podUID="5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.220:8004/healthcheck\": dial tcp 10.217.0.220:8004: connect: connection refused" Sep 29 10:04:56 crc kubenswrapper[4991]: I0929 10:04:56.686112 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm" podUID="4a949473-c3a5-4081-be57-a23bbaaa098e" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.221:8000/healthcheck\": dial tcp 10.217.0.221:8000: connect: connection refused" Sep 29 10:04:56 crc kubenswrapper[4991]: I0929 10:04:56.731818 4991 generic.go:334] "Generic (PLEG): container finished" podID="4a949473-c3a5-4081-be57-a23bbaaa098e" containerID="8959703c1ae4ae8f5bf1100ea0d579decda39a32617adfbed676c41e7de4b287" exitCode=0 Sep 29 10:04:56 crc kubenswrapper[4991]: I0929 10:04:56.731861 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm" event={"ID":"4a949473-c3a5-4081-be57-a23bbaaa098e","Type":"ContainerDied","Data":"8959703c1ae4ae8f5bf1100ea0d579decda39a32617adfbed676c41e7de4b287"} Sep 29 
10:04:59 crc kubenswrapper[4991]: I0929 10:04:59.949418 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.026322 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-645f5d8c84-9bf84" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.098428 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhk7m\" (UniqueName: \"kubernetes.io/projected/4a949473-c3a5-4081-be57-a23bbaaa098e-kube-api-access-jhk7m\") pod \"4a949473-c3a5-4081-be57-a23bbaaa098e\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.099506 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-config-data\") pod \"4a949473-c3a5-4081-be57-a23bbaaa098e\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.099875 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-public-tls-certs\") pod \"4a949473-c3a5-4081-be57-a23bbaaa098e\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.099908 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-internal-tls-certs\") pod \"4a949473-c3a5-4081-be57-a23bbaaa098e\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.100083 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-config-data-custom\") pod \"4a949473-c3a5-4081-be57-a23bbaaa098e\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.100342 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-combined-ca-bundle\") pod \"4a949473-c3a5-4081-be57-a23bbaaa098e\" (UID: \"4a949473-c3a5-4081-be57-a23bbaaa098e\") " Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.104066 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4a949473-c3a5-4081-be57-a23bbaaa098e" (UID: "4a949473-c3a5-4081-be57-a23bbaaa098e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.104115 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a949473-c3a5-4081-be57-a23bbaaa098e-kube-api-access-jhk7m" (OuterVolumeSpecName: "kube-api-access-jhk7m") pod "4a949473-c3a5-4081-be57-a23bbaaa098e" (UID: "4a949473-c3a5-4081-be57-a23bbaaa098e"). InnerVolumeSpecName "kube-api-access-jhk7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.136789 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a949473-c3a5-4081-be57-a23bbaaa098e" (UID: "4a949473-c3a5-4081-be57-a23bbaaa098e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.170114 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-config-data" (OuterVolumeSpecName: "config-data") pod "4a949473-c3a5-4081-be57-a23bbaaa098e" (UID: "4a949473-c3a5-4081-be57-a23bbaaa098e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.171733 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4a949473-c3a5-4081-be57-a23bbaaa098e" (UID: "4a949473-c3a5-4081-be57-a23bbaaa098e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.174651 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-64f8dfd87f-r5cr8" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.193486 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4a949473-c3a5-4081-be57-a23bbaaa098e" (UID: "4a949473-c3a5-4081-be57-a23bbaaa098e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.202336 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-combined-ca-bundle\") pod \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.202397 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-config-data-custom\") pod \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.202456 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wvhl\" (UniqueName: \"kubernetes.io/projected/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-kube-api-access-6wvhl\") pod \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.202513 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-public-tls-certs\") pod \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.202558 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-internal-tls-certs\") pod \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.202609 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-config-data\") pod \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\" (UID: \"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87\") " Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.203617 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.203667 4991 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.203680 4991 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.203692 4991 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-config-data-custom\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.203702 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a949473-c3a5-4081-be57-a23bbaaa098e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 
10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.203715 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhk7m\" (UniqueName: \"kubernetes.io/projected/4a949473-c3a5-4081-be57-a23bbaaa098e-kube-api-access-jhk7m\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.207170 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-kube-api-access-6wvhl" (OuterVolumeSpecName: "kube-api-access-6wvhl") pod "5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87" (UID: "5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87"). InnerVolumeSpecName "kube-api-access-6wvhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.211537 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87" (UID: "5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.259530 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6f5fb95b6d-nzndm"] Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.259844 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-6f5fb95b6d-nzndm" podUID="9552748a-88cb-46f0-8fe3-545d50c9204c" containerName="heat-engine" containerID="cri-o://db5b533d7aa4ff2269279b2dff89ebbc5d62866e4b25dcd147dfa9ebd3209f8d" gracePeriod=60 Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.293527 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87" (UID: "5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.296066 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-config-data" (OuterVolumeSpecName: "config-data") pod "5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87" (UID: "5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.305922 4991 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-config-data-custom\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.305967 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wvhl\" (UniqueName: \"kubernetes.io/projected/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-kube-api-access-6wvhl\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.305980 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.305988 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.310576 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87" (UID: "5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.316005 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87" (UID: "5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.408940 4991 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.408988 4991 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87-public-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.835916 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm" event={"ID":"4a949473-c3a5-4081-be57-a23bbaaa098e","Type":"ContainerDied","Data":"2442a0ac559c1229212e8bd105d8ce0be9790a9c59508f54710e18183501b9ed"} Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.836027 4991 scope.go:117] "RemoveContainer" containerID="8959703c1ae4ae8f5bf1100ea0d579decda39a32617adfbed676c41e7de4b287" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.836335 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-84db4bfdbb-5j5lm" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.852065 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-645f5d8c84-9bf84" event={"ID":"5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87","Type":"ContainerDied","Data":"a33e4fc855f8a1238dc10d4e56796084d0dd97382c04cefc29e433d528d99537"} Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.852115 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-645f5d8c84-9bf84" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.877612 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r" event={"ID":"e5729608-f9e1-4a51-8fa4-682be8932037","Type":"ContainerStarted","Data":"e219e2933095e1628d55e3f4f69d19dfe70bf427301bde8f884935999bb14e1c"} Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.898290 4991 scope.go:117] "RemoveContainer" containerID="a0a7c75294a753f8c07c39a72c9a219139dd1931cfdb11c3299c1322ddf1e811" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.962420 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-84db4bfdbb-5j5lm"] Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.963321 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-84db4bfdbb-5j5lm"] Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.977070 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r" podStartSLOduration=2.987024834 podStartE2EDuration="13.977046955s" podCreationTimestamp="2025-09-29 10:04:47 +0000 UTC" firstStartedPulling="2025-09-29 10:04:48.497825324 +0000 UTC m=+1624.353753342" lastFinishedPulling="2025-09-29 10:04:59.487847435 +0000 UTC m=+1635.343775463" observedRunningTime="2025-09-29 10:05:00.923363666 +0000 UTC m=+1636.779291694" watchObservedRunningTime="2025-09-29 10:05:00.977046955 +0000 UTC m=+1636.832974983" Sep 29 10:05:00 crc kubenswrapper[4991]: I0929 10:05:00.997018 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-645f5d8c84-9bf84"] Sep 29 10:05:01 crc kubenswrapper[4991]: I0929 10:05:01.005667 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-645f5d8c84-9bf84"] Sep 29 10:05:02 crc kubenswrapper[4991]: I0929 10:05:02.940495 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a949473-c3a5-4081-be57-a23bbaaa098e" path="/var/lib/kubelet/pods/4a949473-c3a5-4081-be57-a23bbaaa098e/volumes" Sep 29 10:05:02 crc kubenswrapper[4991]: I0929 10:05:02.941805 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87" path="/var/lib/kubelet/pods/5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87/volumes" Sep 29 10:05:03 crc kubenswrapper[4991]: E0929 10:05:03.107228 4991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="db5b533d7aa4ff2269279b2dff89ebbc5d62866e4b25dcd147dfa9ebd3209f8d" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Sep 29 10:05:03 crc kubenswrapper[4991]: E0929 10:05:03.109111 4991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="db5b533d7aa4ff2269279b2dff89ebbc5d62866e4b25dcd147dfa9ebd3209f8d" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Sep 29 10:05:03 crc kubenswrapper[4991]: E0929 10:05:03.110615 4991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="db5b533d7aa4ff2269279b2dff89ebbc5d62866e4b25dcd147dfa9ebd3209f8d" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Sep 29 10:05:03 crc kubenswrapper[4991]: E0929 10:05:03.110661 4991 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6f5fb95b6d-nzndm" podUID="9552748a-88cb-46f0-8fe3-545d50c9204c" containerName="heat-engine" Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.150618 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-j7ntb"] Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.171558 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-j7ntb"] Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.269053 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-jqg8d"] Sep 29 10:05:07 crc kubenswrapper[4991]: E0929 10:05:07.269615 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a949473-c3a5-4081-be57-a23bbaaa098e" containerName="heat-cfnapi" Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.269637 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a949473-c3a5-4081-be57-a23bbaaa098e" containerName="heat-cfnapi" Sep 29 10:05:07 crc kubenswrapper[4991]: E0929 10:05:07.269657 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87" containerName="heat-api" Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.269664 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87" containerName="heat-api" Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.269910 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a949473-c3a5-4081-be57-a23bbaaa098e" containerName="heat-cfnapi" Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.269934 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1d4aa6-66e4-4ccf-bdc3-f95a89e89a87" containerName="heat-api" Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.270775 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-jqg8d" Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.296299 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-jqg8d"] Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.389426 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c1c090-ed04-4867-b74f-c3daa921a4a1-combined-ca-bundle\") pod \"aodh-db-sync-jqg8d\" (UID: \"71c1c090-ed04-4867-b74f-c3daa921a4a1\") " pod="openstack/aodh-db-sync-jqg8d" Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.389732 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71c1c090-ed04-4867-b74f-c3daa921a4a1-scripts\") pod \"aodh-db-sync-jqg8d\" (UID: \"71c1c090-ed04-4867-b74f-c3daa921a4a1\") " pod="openstack/aodh-db-sync-jqg8d" Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.390015 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2mrb\" (UniqueName: \"kubernetes.io/projected/71c1c090-ed04-4867-b74f-c3daa921a4a1-kube-api-access-d2mrb\") pod \"aodh-db-sync-jqg8d\" (UID: \"71c1c090-ed04-4867-b74f-c3daa921a4a1\") " pod="openstack/aodh-db-sync-jqg8d" Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.390136 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c1c090-ed04-4867-b74f-c3daa921a4a1-config-data\") pod \"aodh-db-sync-jqg8d\" (UID: \"71c1c090-ed04-4867-b74f-c3daa921a4a1\") " pod="openstack/aodh-db-sync-jqg8d" Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.492365 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2mrb\" (UniqueName: \"kubernetes.io/projected/71c1c090-ed04-4867-b74f-c3daa921a4a1-kube-api-access-d2mrb\") pod \"aodh-db-sync-jqg8d\" (UID: \"71c1c090-ed04-4867-b74f-c3daa921a4a1\") " pod="openstack/aodh-db-sync-jqg8d" Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.492448 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c1c090-ed04-4867-b74f-c3daa921a4a1-config-data\") pod \"aodh-db-sync-jqg8d\" (UID: \"71c1c090-ed04-4867-b74f-c3daa921a4a1\") " pod="openstack/aodh-db-sync-jqg8d" Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.492521 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c1c090-ed04-4867-b74f-c3daa921a4a1-combined-ca-bundle\") pod \"aodh-db-sync-jqg8d\" (UID: \"71c1c090-ed04-4867-b74f-c3daa921a4a1\") " pod="openstack/aodh-db-sync-jqg8d" Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.492553 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71c1c090-ed04-4867-b74f-c3daa921a4a1-scripts\") pod \"aodh-db-sync-jqg8d\" (UID: \"71c1c090-ed04-4867-b74f-c3daa921a4a1\") " pod="openstack/aodh-db-sync-jqg8d" Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.499757 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c1c090-ed04-4867-b74f-c3daa921a4a1-config-data\") pod \"aodh-db-sync-jqg8d\" (UID: \"71c1c090-ed04-4867-b74f-c3daa921a4a1\") " 
pod="openstack/aodh-db-sync-jqg8d" Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.500851 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71c1c090-ed04-4867-b74f-c3daa921a4a1-scripts\") pod \"aodh-db-sync-jqg8d\" (UID: \"71c1c090-ed04-4867-b74f-c3daa921a4a1\") " pod="openstack/aodh-db-sync-jqg8d" Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.501478 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c1c090-ed04-4867-b74f-c3daa921a4a1-combined-ca-bundle\") pod \"aodh-db-sync-jqg8d\" (UID: \"71c1c090-ed04-4867-b74f-c3daa921a4a1\") " pod="openstack/aodh-db-sync-jqg8d" Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.514878 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2mrb\" (UniqueName: \"kubernetes.io/projected/71c1c090-ed04-4867-b74f-c3daa921a4a1-kube-api-access-d2mrb\") pod \"aodh-db-sync-jqg8d\" (UID: \"71c1c090-ed04-4867-b74f-c3daa921a4a1\") " pod="openstack/aodh-db-sync-jqg8d" Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.601065 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-jqg8d" Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.862203 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.946809 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:05:07 crc kubenswrapper[4991]: I0929 10:05:07.946874 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:05:08 crc kubenswrapper[4991]: I0929 10:05:08.134837 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-jqg8d"] Sep 29 10:05:08 crc kubenswrapper[4991]: I0929 10:05:08.941568 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acacb281-5dfd-4e31-b340-a8f6a7950f9b" path="/var/lib/kubelet/pods/acacb281-5dfd-4e31-b340-a8f6a7950f9b/volumes" Sep 29 10:05:09 crc kubenswrapper[4991]: I0929 10:05:09.018232 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jqg8d" event={"ID":"71c1c090-ed04-4867-b74f-c3daa921a4a1","Type":"ContainerStarted","Data":"7d87900ea2399906dff109e8676867e4d3b52d2244e1c58dad832e993a22cb83"} Sep 29 10:05:09 crc kubenswrapper[4991]: I0929 10:05:09.373197 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Sep 29 10:05:13 crc kubenswrapper[4991]: E0929 10:05:13.107446 4991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="db5b533d7aa4ff2269279b2dff89ebbc5d62866e4b25dcd147dfa9ebd3209f8d" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Sep 29 10:05:13 crc kubenswrapper[4991]: E0929 10:05:13.110931 4991 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="db5b533d7aa4ff2269279b2dff89ebbc5d62866e4b25dcd147dfa9ebd3209f8d" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Sep 29 10:05:13 crc kubenswrapper[4991]: E0929 10:05:13.112268 4991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="db5b533d7aa4ff2269279b2dff89ebbc5d62866e4b25dcd147dfa9ebd3209f8d" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Sep 29 10:05:13 crc kubenswrapper[4991]: E0929 10:05:13.112382 4991 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6f5fb95b6d-nzndm" podUID="9552748a-88cb-46f0-8fe3-545d50c9204c" containerName="heat-engine" Sep 29 10:05:14 crc kubenswrapper[4991]: I0929 10:05:14.079629 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jqg8d" event={"ID":"71c1c090-ed04-4867-b74f-c3daa921a4a1","Type":"ContainerStarted","Data":"3a8c88156022bc39513a1e64f9d44312018457f9d1d9b33c976c4ea5e39a1dc4"} Sep 29 10:05:14 crc kubenswrapper[4991]: I0929 10:05:14.081883 4991 generic.go:334] "Generic (PLEG): container finished" podID="e5729608-f9e1-4a51-8fa4-682be8932037" containerID="e219e2933095e1628d55e3f4f69d19dfe70bf427301bde8f884935999bb14e1c" exitCode=0 Sep 29 10:05:14 crc kubenswrapper[4991]: I0929 10:05:14.081913 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r" event={"ID":"e5729608-f9e1-4a51-8fa4-682be8932037","Type":"ContainerDied","Data":"e219e2933095e1628d55e3f4f69d19dfe70bf427301bde8f884935999bb14e1c"} Sep 29 10:05:14 crc kubenswrapper[4991]: I0929 10:05:14.099312 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-jqg8d" podStartSLOduration=2.054359419 podStartE2EDuration="7.099296719s" podCreationTimestamp="2025-09-29 10:05:07 +0000 UTC" firstStartedPulling="2025-09-29 10:05:08.139229005 +0000 UTC m=+1643.995157053" lastFinishedPulling="2025-09-29 10:05:13.184166325 +0000 UTC m=+1649.040094353" observedRunningTime="2025-09-29 10:05:14.096122456 +0000 UTC m=+1649.952050484" watchObservedRunningTime="2025-09-29 10:05:14.099296719 +0000 UTC m=+1649.955224747" Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.094804 4991 generic.go:334] "Generic (PLEG): container finished" podID="9552748a-88cb-46f0-8fe3-545d50c9204c" containerID="db5b533d7aa4ff2269279b2dff89ebbc5d62866e4b25dcd147dfa9ebd3209f8d" exitCode=0 Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.095318 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6f5fb95b6d-nzndm" event={"ID":"9552748a-88cb-46f0-8fe3-545d50c9204c","Type":"ContainerDied","Data":"db5b533d7aa4ff2269279b2dff89ebbc5d62866e4b25dcd147dfa9ebd3209f8d"} Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.295332 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6f5fb95b6d-nzndm" Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.304025 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9552748a-88cb-46f0-8fe3-545d50c9204c-combined-ca-bundle\") pod \"9552748a-88cb-46f0-8fe3-545d50c9204c\" (UID: \"9552748a-88cb-46f0-8fe3-545d50c9204c\") " Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.304163 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-566jz\" (UniqueName: \"kubernetes.io/projected/9552748a-88cb-46f0-8fe3-545d50c9204c-kube-api-access-566jz\") pod \"9552748a-88cb-46f0-8fe3-545d50c9204c\" (UID: \"9552748a-88cb-46f0-8fe3-545d50c9204c\") " Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.304211 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9552748a-88cb-46f0-8fe3-545d50c9204c-config-data\") pod \"9552748a-88cb-46f0-8fe3-545d50c9204c\" (UID: \"9552748a-88cb-46f0-8fe3-545d50c9204c\") " Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.304243 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9552748a-88cb-46f0-8fe3-545d50c9204c-config-data-custom\") pod \"9552748a-88cb-46f0-8fe3-545d50c9204c\" (UID: \"9552748a-88cb-46f0-8fe3-545d50c9204c\") " Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.312620 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9552748a-88cb-46f0-8fe3-545d50c9204c-kube-api-access-566jz" (OuterVolumeSpecName: "kube-api-access-566jz") pod "9552748a-88cb-46f0-8fe3-545d50c9204c" (UID: "9552748a-88cb-46f0-8fe3-545d50c9204c"). InnerVolumeSpecName "kube-api-access-566jz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.314738 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9552748a-88cb-46f0-8fe3-545d50c9204c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9552748a-88cb-46f0-8fe3-545d50c9204c" (UID: "9552748a-88cb-46f0-8fe3-545d50c9204c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.379583 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9552748a-88cb-46f0-8fe3-545d50c9204c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9552748a-88cb-46f0-8fe3-545d50c9204c" (UID: "9552748a-88cb-46f0-8fe3-545d50c9204c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.408273 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9552748a-88cb-46f0-8fe3-545d50c9204c-config-data" (OuterVolumeSpecName: "config-data") pod "9552748a-88cb-46f0-8fe3-545d50c9204c" (UID: "9552748a-88cb-46f0-8fe3-545d50c9204c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.410836 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9552748a-88cb-46f0-8fe3-545d50c9204c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.410865 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-566jz\" (UniqueName: \"kubernetes.io/projected/9552748a-88cb-46f0-8fe3-545d50c9204c-kube-api-access-566jz\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.410880 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9552748a-88cb-46f0-8fe3-545d50c9204c-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.410891 4991 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9552748a-88cb-46f0-8fe3-545d50c9204c-config-data-custom\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.544161 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r" Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.616278 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5729608-f9e1-4a51-8fa4-682be8932037-ssh-key\") pod \"e5729608-f9e1-4a51-8fa4-682be8932037\" (UID: \"e5729608-f9e1-4a51-8fa4-682be8932037\") " Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.616428 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldn8t\" (UniqueName: \"kubernetes.io/projected/e5729608-f9e1-4a51-8fa4-682be8932037-kube-api-access-ldn8t\") pod \"e5729608-f9e1-4a51-8fa4-682be8932037\" (UID: \"e5729608-f9e1-4a51-8fa4-682be8932037\") " Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.616646 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5729608-f9e1-4a51-8fa4-682be8932037-repo-setup-combined-ca-bundle\") pod \"e5729608-f9e1-4a51-8fa4-682be8932037\" (UID: \"e5729608-f9e1-4a51-8fa4-682be8932037\") " Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.616786 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5729608-f9e1-4a51-8fa4-682be8932037-inventory\") pod \"e5729608-f9e1-4a51-8fa4-682be8932037\" (UID: \"e5729608-f9e1-4a51-8fa4-682be8932037\") " Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.620501 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5729608-f9e1-4a51-8fa4-682be8932037-kube-api-access-ldn8t" (OuterVolumeSpecName: "kube-api-access-ldn8t") pod "e5729608-f9e1-4a51-8fa4-682be8932037" (UID: "e5729608-f9e1-4a51-8fa4-682be8932037"). InnerVolumeSpecName "kube-api-access-ldn8t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.622718 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5729608-f9e1-4a51-8fa4-682be8932037-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e5729608-f9e1-4a51-8fa4-682be8932037" (UID: "e5729608-f9e1-4a51-8fa4-682be8932037"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.650901 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5729608-f9e1-4a51-8fa4-682be8932037-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e5729608-f9e1-4a51-8fa4-682be8932037" (UID: "e5729608-f9e1-4a51-8fa4-682be8932037"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.655364 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5729608-f9e1-4a51-8fa4-682be8932037-inventory" (OuterVolumeSpecName: "inventory") pod "e5729608-f9e1-4a51-8fa4-682be8932037" (UID: "e5729608-f9e1-4a51-8fa4-682be8932037"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.722535 4991 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5729608-f9e1-4a51-8fa4-682be8932037-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.722591 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldn8t\" (UniqueName: \"kubernetes.io/projected/e5729608-f9e1-4a51-8fa4-682be8932037-kube-api-access-ldn8t\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.722609 4991 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5729608-f9e1-4a51-8fa4-682be8932037-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:15 crc kubenswrapper[4991]: I0929 10:05:15.722631 4991 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5729608-f9e1-4a51-8fa4-682be8932037-inventory\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.108664 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r" event={"ID":"e5729608-f9e1-4a51-8fa4-682be8932037","Type":"ContainerDied","Data":"8331f0d9b6796b734a1f2b2301a8c13cf1e452fab44cc966ea9d16a89863b3da"} Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.109864 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8331f0d9b6796b734a1f2b2301a8c13cf1e452fab44cc966ea9d16a89863b3da" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.108697 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.111847 4991 generic.go:334] "Generic (PLEG): container finished" podID="71c1c090-ed04-4867-b74f-c3daa921a4a1" containerID="3a8c88156022bc39513a1e64f9d44312018457f9d1d9b33c976c4ea5e39a1dc4" exitCode=0 Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.111932 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jqg8d" event={"ID":"71c1c090-ed04-4867-b74f-c3daa921a4a1","Type":"ContainerDied","Data":"3a8c88156022bc39513a1e64f9d44312018457f9d1d9b33c976c4ea5e39a1dc4"} Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.115388 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6f5fb95b6d-nzndm" event={"ID":"9552748a-88cb-46f0-8fe3-545d50c9204c","Type":"ContainerDied","Data":"e970a14b0beb3f46aef04a502600323e69d0fdc0427d7ad4d61515460a98c4c0"} Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.115432 4991 scope.go:117] "RemoveContainer" containerID="db5b533d7aa4ff2269279b2dff89ebbc5d62866e4b25dcd147dfa9ebd3209f8d" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.115486 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6f5fb95b6d-nzndm" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.197401 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6f5fb95b6d-nzndm"] Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.210033 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-6f5fb95b6d-nzndm"] Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.220836 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-r9wsd"] Sep 29 10:05:16 crc kubenswrapper[4991]: E0929 10:05:16.221392 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5729608-f9e1-4a51-8fa4-682be8932037" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.221411 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5729608-f9e1-4a51-8fa4-682be8932037" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Sep 29 10:05:16 crc kubenswrapper[4991]: E0929 10:05:16.221444 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9552748a-88cb-46f0-8fe3-545d50c9204c" containerName="heat-engine" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.221451 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="9552748a-88cb-46f0-8fe3-545d50c9204c" containerName="heat-engine" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.221668 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5729608-f9e1-4a51-8fa4-682be8932037" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.221694 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="9552748a-88cb-46f0-8fe3-545d50c9204c" containerName="heat-engine" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.222492 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r9wsd" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.225532 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vtkz9" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.225702 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.226066 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.226300 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.233707 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-r9wsd"] Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.236392 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8f23d5d-c47d-481f-a28e-941d77c414c5-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r9wsd\" (UID: \"b8f23d5d-c47d-481f-a28e-941d77c414c5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r9wsd" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.236500 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7v2d\" (UniqueName: \"kubernetes.io/projected/b8f23d5d-c47d-481f-a28e-941d77c414c5-kube-api-access-w7v2d\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r9wsd\" (UID: \"b8f23d5d-c47d-481f-a28e-941d77c414c5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r9wsd" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.236550 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8f23d5d-c47d-481f-a28e-941d77c414c5-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r9wsd\" (UID: \"b8f23d5d-c47d-481f-a28e-941d77c414c5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r9wsd" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.338082 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8f23d5d-c47d-481f-a28e-941d77c414c5-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r9wsd\" (UID: \"b8f23d5d-c47d-481f-a28e-941d77c414c5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r9wsd" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.338411 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7v2d\" (UniqueName: \"kubernetes.io/projected/b8f23d5d-c47d-481f-a28e-941d77c414c5-kube-api-access-w7v2d\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r9wsd\" (UID: \"b8f23d5d-c47d-481f-a28e-941d77c414c5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r9wsd" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.338547 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8f23d5d-c47d-481f-a28e-941d77c414c5-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r9wsd\" (UID: \"b8f23d5d-c47d-481f-a28e-941d77c414c5\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r9wsd" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.342865 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8f23d5d-c47d-481f-a28e-941d77c414c5-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r9wsd\" (UID: \"b8f23d5d-c47d-481f-a28e-941d77c414c5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r9wsd" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.342881 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8f23d5d-c47d-481f-a28e-941d77c414c5-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r9wsd\" (UID: \"b8f23d5d-c47d-481f-a28e-941d77c414c5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r9wsd" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.359513 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7v2d\" (UniqueName: \"kubernetes.io/projected/b8f23d5d-c47d-481f-a28e-941d77c414c5-kube-api-access-w7v2d\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r9wsd\" (UID: \"b8f23d5d-c47d-481f-a28e-941d77c414c5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r9wsd" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.559705 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r9wsd" Sep 29 10:05:16 crc kubenswrapper[4991]: I0929 10:05:16.939572 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9552748a-88cb-46f0-8fe3-545d50c9204c" path="/var/lib/kubelet/pods/9552748a-88cb-46f0-8fe3-545d50c9204c/volumes" Sep 29 10:05:17 crc kubenswrapper[4991]: W0929 10:05:17.103600 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8f23d5d_c47d_481f_a28e_941d77c414c5.slice/crio-9b4e41d59031b1a2675f29a365df7b65a4605389c9574f3436c933dec7c99839 WatchSource:0}: Error finding container 9b4e41d59031b1a2675f29a365df7b65a4605389c9574f3436c933dec7c99839: Status 404 returned error can't find the container with id 9b4e41d59031b1a2675f29a365df7b65a4605389c9574f3436c933dec7c99839 Sep 29 10:05:17 crc kubenswrapper[4991]: I0929 10:05:17.104170 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-r9wsd"] Sep 29 10:05:17 crc kubenswrapper[4991]: I0929 10:05:17.129325 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r9wsd" event={"ID":"b8f23d5d-c47d-481f-a28e-941d77c414c5","Type":"ContainerStarted","Data":"9b4e41d59031b1a2675f29a365df7b65a4605389c9574f3436c933dec7c99839"} Sep 29 10:05:17 crc kubenswrapper[4991]: I0929 10:05:17.622532 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-jqg8d" Sep 29 10:05:17 crc kubenswrapper[4991]: I0929 10:05:17.672334 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71c1c090-ed04-4867-b74f-c3daa921a4a1-scripts\") pod \"71c1c090-ed04-4867-b74f-c3daa921a4a1\" (UID: \"71c1c090-ed04-4867-b74f-c3daa921a4a1\") " Sep 29 10:05:17 crc kubenswrapper[4991]: I0929 10:05:17.672584 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c1c090-ed04-4867-b74f-c3daa921a4a1-combined-ca-bundle\") pod \"71c1c090-ed04-4867-b74f-c3daa921a4a1\" (UID: \"71c1c090-ed04-4867-b74f-c3daa921a4a1\") " Sep 29 10:05:17 crc kubenswrapper[4991]: I0929 10:05:17.672648 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c1c090-ed04-4867-b74f-c3daa921a4a1-config-data\") pod \"71c1c090-ed04-4867-b74f-c3daa921a4a1\" (UID: \"71c1c090-ed04-4867-b74f-c3daa921a4a1\") " Sep 29 10:05:17 crc kubenswrapper[4991]: I0929 10:05:17.672721 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2mrb\" (UniqueName: \"kubernetes.io/projected/71c1c090-ed04-4867-b74f-c3daa921a4a1-kube-api-access-d2mrb\") pod \"71c1c090-ed04-4867-b74f-c3daa921a4a1\" (UID: \"71c1c090-ed04-4867-b74f-c3daa921a4a1\") " Sep 29 10:05:17 crc kubenswrapper[4991]: I0929 10:05:17.677942 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c1c090-ed04-4867-b74f-c3daa921a4a1-scripts" (OuterVolumeSpecName: "scripts") pod "71c1c090-ed04-4867-b74f-c3daa921a4a1" (UID: "71c1c090-ed04-4867-b74f-c3daa921a4a1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:05:17 crc kubenswrapper[4991]: I0929 10:05:17.678326 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c1c090-ed04-4867-b74f-c3daa921a4a1-kube-api-access-d2mrb" (OuterVolumeSpecName: "kube-api-access-d2mrb") pod "71c1c090-ed04-4867-b74f-c3daa921a4a1" (UID: "71c1c090-ed04-4867-b74f-c3daa921a4a1"). InnerVolumeSpecName "kube-api-access-d2mrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:05:17 crc kubenswrapper[4991]: I0929 10:05:17.712686 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c1c090-ed04-4867-b74f-c3daa921a4a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71c1c090-ed04-4867-b74f-c3daa921a4a1" (UID: "71c1c090-ed04-4867-b74f-c3daa921a4a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:05:17 crc kubenswrapper[4991]: I0929 10:05:17.718004 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c1c090-ed04-4867-b74f-c3daa921a4a1-config-data" (OuterVolumeSpecName: "config-data") pod "71c1c090-ed04-4867-b74f-c3daa921a4a1" (UID: "71c1c090-ed04-4867-b74f-c3daa921a4a1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:05:17 crc kubenswrapper[4991]: I0929 10:05:17.775268 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c1c090-ed04-4867-b74f-c3daa921a4a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:17 crc kubenswrapper[4991]: I0929 10:05:17.775543 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c1c090-ed04-4867-b74f-c3daa921a4a1-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:17 crc kubenswrapper[4991]: I0929 10:05:17.775554 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2mrb\" (UniqueName: \"kubernetes.io/projected/71c1c090-ed04-4867-b74f-c3daa921a4a1-kube-api-access-d2mrb\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:17 crc kubenswrapper[4991]: I0929 10:05:17.775566 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71c1c090-ed04-4867-b74f-c3daa921a4a1-scripts\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:18 crc kubenswrapper[4991]: I0929 10:05:18.141657 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r9wsd" event={"ID":"b8f23d5d-c47d-481f-a28e-941d77c414c5","Type":"ContainerStarted","Data":"a50e5e5b44f5cdf3d90e63ded59586a5aeaa39d1045e1fc702fee02c21269172"} Sep 29 10:05:18 crc kubenswrapper[4991]: I0929 10:05:18.145672 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jqg8d" event={"ID":"71c1c090-ed04-4867-b74f-c3daa921a4a1","Type":"ContainerDied","Data":"7d87900ea2399906dff109e8676867e4d3b52d2244e1c58dad832e993a22cb83"} Sep 29 10:05:18 crc kubenswrapper[4991]: I0929 10:05:18.145714 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d87900ea2399906dff109e8676867e4d3b52d2244e1c58dad832e993a22cb83" Sep 29 10:05:18 crc kubenswrapper[4991]: I0929 10:05:18.145771 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-jqg8d" Sep 29 10:05:18 crc kubenswrapper[4991]: I0929 10:05:18.179739 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r9wsd" podStartSLOduration=1.6576122070000001 podStartE2EDuration="2.179708078s" podCreationTimestamp="2025-09-29 10:05:16 +0000 UTC" firstStartedPulling="2025-09-29 10:05:17.108391734 +0000 UTC m=+1652.964319762" lastFinishedPulling="2025-09-29 10:05:17.630487605 +0000 UTC m=+1653.486415633" observedRunningTime="2025-09-29 10:05:18.169396087 +0000 UTC m=+1654.025324135" watchObservedRunningTime="2025-09-29 10:05:18.179708078 +0000 UTC m=+1654.035636116" Sep 29 10:05:19 crc kubenswrapper[4991]: I0929 10:05:19.198127 4991 scope.go:117] "RemoveContainer" containerID="c7440cb8422898707b454e9abd438855ac4ccc44df905479e8783b5603737899" Sep 29 10:05:19 crc kubenswrapper[4991]: I0929 10:05:19.318586 4991 scope.go:117] "RemoveContainer" containerID="000fd7c46e4f126c63142daeec918131e0d6d75de7c9a3b392267b1845b747cc" Sep 29 10:05:19 crc kubenswrapper[4991]: I0929 10:05:19.401042 4991 scope.go:117] "RemoveContainer" containerID="6ac03472c3c649dbf55e36728f7b38a9ed9ee7b7696981076bdfe1e0f051aeca" Sep 29 10:05:19 crc kubenswrapper[4991]: I0929 10:05:19.450040 4991 scope.go:117] "RemoveContainer" containerID="92f96f3ca218a052b2c68f302299970a7f774ed2d13706e84964fd2a91cc508e" Sep 29 10:05:21 crc kubenswrapper[4991]: I0929 10:05:21.182648 4991 generic.go:334] "Generic (PLEG): container finished" podID="b8f23d5d-c47d-481f-a28e-941d77c414c5" containerID="a50e5e5b44f5cdf3d90e63ded59586a5aeaa39d1045e1fc702fee02c21269172" exitCode=0 Sep 29 10:05:21 crc kubenswrapper[4991]: I0929 10:05:21.182731 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r9wsd" event={"ID":"b8f23d5d-c47d-481f-a28e-941d77c414c5","Type":"ContainerDied","Data":"a50e5e5b44f5cdf3d90e63ded59586a5aeaa39d1045e1fc702fee02c21269172"} Sep 29 10:05:22 crc kubenswrapper[4991]: I0929 10:05:22.278080 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Sep 29 10:05:22 crc kubenswrapper[4991]: I0929 10:05:22.279486 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2b7115bc-655b-4f41-9983-ecd70758ac95" containerName="aodh-api" containerID="cri-o://78b50850a64d7c46c7f4c3190ca08039fb88c1dcebe1b6dad668b1d8bc658f2e" gracePeriod=30 Sep 29 10:05:22 crc kubenswrapper[4991]: I0929 10:05:22.279631 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2b7115bc-655b-4f41-9983-ecd70758ac95" containerName="aodh-notifier" containerID="cri-o://b2525436c51969e5d473e9c950d438364a2b87155956abcb5e414584f840f940" gracePeriod=30 Sep 29 10:05:22 crc kubenswrapper[4991]: I0929 10:05:22.279682 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2b7115bc-655b-4f41-9983-ecd70758ac95" containerName="aodh-evaluator" containerID="cri-o://01fc38757ef3bb394294a27b653922275521572494d731294fc27f990ad37ed8" gracePeriod=30 Sep 29 10:05:22 crc kubenswrapper[4991]: I0929 10:05:22.279619 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2b7115bc-655b-4f41-9983-ecd70758ac95" containerName="aodh-listener" containerID="cri-o://28f1ab41b73e3e1810d8b527c0a72b4a0f23ac7c964aa4c3af09cff590d03d7c" gracePeriod=30 Sep 29 10:05:22 crc kubenswrapper[4991]: 
I0929 10:05:22.814435 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r9wsd" Sep 29 10:05:22 crc kubenswrapper[4991]: I0929 10:05:22.911164 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8f23d5d-c47d-481f-a28e-941d77c414c5-ssh-key\") pod \"b8f23d5d-c47d-481f-a28e-941d77c414c5\" (UID: \"b8f23d5d-c47d-481f-a28e-941d77c414c5\") " Sep 29 10:05:22 crc kubenswrapper[4991]: I0929 10:05:22.911252 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8f23d5d-c47d-481f-a28e-941d77c414c5-inventory\") pod \"b8f23d5d-c47d-481f-a28e-941d77c414c5\" (UID: \"b8f23d5d-c47d-481f-a28e-941d77c414c5\") " Sep 29 10:05:22 crc kubenswrapper[4991]: I0929 10:05:22.911414 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7v2d\" (UniqueName: \"kubernetes.io/projected/b8f23d5d-c47d-481f-a28e-941d77c414c5-kube-api-access-w7v2d\") pod \"b8f23d5d-c47d-481f-a28e-941d77c414c5\" (UID: \"b8f23d5d-c47d-481f-a28e-941d77c414c5\") " Sep 29 10:05:22 crc kubenswrapper[4991]: I0929 10:05:22.917180 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8f23d5d-c47d-481f-a28e-941d77c414c5-kube-api-access-w7v2d" (OuterVolumeSpecName: "kube-api-access-w7v2d") pod "b8f23d5d-c47d-481f-a28e-941d77c414c5" (UID: "b8f23d5d-c47d-481f-a28e-941d77c414c5"). InnerVolumeSpecName "kube-api-access-w7v2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:05:22 crc kubenswrapper[4991]: I0929 10:05:22.950269 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8f23d5d-c47d-481f-a28e-941d77c414c5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b8f23d5d-c47d-481f-a28e-941d77c414c5" (UID: "b8f23d5d-c47d-481f-a28e-941d77c414c5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:05:22 crc kubenswrapper[4991]: I0929 10:05:22.951707 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8f23d5d-c47d-481f-a28e-941d77c414c5-inventory" (OuterVolumeSpecName: "inventory") pod "b8f23d5d-c47d-481f-a28e-941d77c414c5" (UID: "b8f23d5d-c47d-481f-a28e-941d77c414c5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.014755 4991 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8f23d5d-c47d-481f-a28e-941d77c414c5-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.014791 4991 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8f23d5d-c47d-481f-a28e-941d77c414c5-inventory\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.014805 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7v2d\" (UniqueName: \"kubernetes.io/projected/b8f23d5d-c47d-481f-a28e-941d77c414c5-kube-api-access-w7v2d\") on node \"crc\" DevicePath \"\"" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.209744 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r9wsd" event={"ID":"b8f23d5d-c47d-481f-a28e-941d77c414c5","Type":"ContainerDied","Data":"9b4e41d59031b1a2675f29a365df7b65a4605389c9574f3436c933dec7c99839"} Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.209778 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r9wsd" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.209788 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b4e41d59031b1a2675f29a365df7b65a4605389c9574f3436c933dec7c99839" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.212843 4991 generic.go:334] "Generic (PLEG): container finished" podID="2b7115bc-655b-4f41-9983-ecd70758ac95" containerID="01fc38757ef3bb394294a27b653922275521572494d731294fc27f990ad37ed8" exitCode=0 Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.212882 4991 generic.go:334] "Generic (PLEG): container finished" podID="2b7115bc-655b-4f41-9983-ecd70758ac95" containerID="78b50850a64d7c46c7f4c3190ca08039fb88c1dcebe1b6dad668b1d8bc658f2e" exitCode=0 Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.212902 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2b7115bc-655b-4f41-9983-ecd70758ac95","Type":"ContainerDied","Data":"01fc38757ef3bb394294a27b653922275521572494d731294fc27f990ad37ed8"} Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.212980 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2b7115bc-655b-4f41-9983-ecd70758ac95","Type":"ContainerDied","Data":"78b50850a64d7c46c7f4c3190ca08039fb88c1dcebe1b6dad668b1d8bc658f2e"} Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.274666 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj"] Sep 29 10:05:23 crc kubenswrapper[4991]: E0929 10:05:23.275318 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c1c090-ed04-4867-b74f-c3daa921a4a1" containerName="aodh-db-sync" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.275333 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c1c090-ed04-4867-b74f-c3daa921a4a1" containerName="aodh-db-sync" Sep 29 10:05:23 crc kubenswrapper[4991]: E0929 10:05:23.275395 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8f23d5d-c47d-481f-a28e-941d77c414c5" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 
10:05:23.275404 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8f23d5d-c47d-481f-a28e-941d77c414c5" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.275685 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8f23d5d-c47d-481f-a28e-941d77c414c5" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.275711 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="71c1c090-ed04-4867-b74f-c3daa921a4a1" containerName="aodh-db-sync" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.276840 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.279594 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.279980 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vtkz9" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.280045 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.280669 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.287444 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj"] Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.320323 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dkbm\" (UniqueName: \"kubernetes.io/projected/4f367761-a0c4-4bc8-8d44-86dc07e3d495-kube-api-access-5dkbm\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj\" (UID: \"4f367761-a0c4-4bc8-8d44-86dc07e3d495\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.320379 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f367761-a0c4-4bc8-8d44-86dc07e3d495-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj\" (UID: \"4f367761-a0c4-4bc8-8d44-86dc07e3d495\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.320464 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f367761-a0c4-4bc8-8d44-86dc07e3d495-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj\" (UID: \"4f367761-a0c4-4bc8-8d44-86dc07e3d495\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.320485 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f367761-a0c4-4bc8-8d44-86dc07e3d495-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj\" (UID: \"4f367761-a0c4-4bc8-8d44-86dc07e3d495\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj" Sep 29 
10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.422032 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f367761-a0c4-4bc8-8d44-86dc07e3d495-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj\" (UID: \"4f367761-a0c4-4bc8-8d44-86dc07e3d495\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.422094 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f367761-a0c4-4bc8-8d44-86dc07e3d495-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj\" (UID: \"4f367761-a0c4-4bc8-8d44-86dc07e3d495\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.422272 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dkbm\" (UniqueName: \"kubernetes.io/projected/4f367761-a0c4-4bc8-8d44-86dc07e3d495-kube-api-access-5dkbm\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj\" (UID: \"4f367761-a0c4-4bc8-8d44-86dc07e3d495\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.422323 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f367761-a0c4-4bc8-8d44-86dc07e3d495-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj\" (UID: \"4f367761-a0c4-4bc8-8d44-86dc07e3d495\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.426404 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f367761-a0c4-4bc8-8d44-86dc07e3d495-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj\" (UID: \"4f367761-a0c4-4bc8-8d44-86dc07e3d495\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.426710 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f367761-a0c4-4bc8-8d44-86dc07e3d495-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj\" (UID: \"4f367761-a0c4-4bc8-8d44-86dc07e3d495\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.428174 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f367761-a0c4-4bc8-8d44-86dc07e3d495-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj\" (UID: \"4f367761-a0c4-4bc8-8d44-86dc07e3d495\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.444133 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dkbm\" (UniqueName: \"kubernetes.io/projected/4f367761-a0c4-4bc8-8d44-86dc07e3d495-kube-api-access-5dkbm\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj\" (UID: \"4f367761-a0c4-4bc8-8d44-86dc07e3d495\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj" Sep 29 10:05:23 crc kubenswrapper[4991]: I0929 10:05:23.620014 4991 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj" Sep 29 10:05:24 crc kubenswrapper[4991]: I0929 10:05:24.211583 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj"] Sep 29 10:05:24 crc kubenswrapper[4991]: I0929 10:05:24.227511 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj" event={"ID":"4f367761-a0c4-4bc8-8d44-86dc07e3d495","Type":"ContainerStarted","Data":"36e435c7fc54dc12b0c9a5f3ea719bac3e8f89a8a647d99a1adf6e6737e5d127"} Sep 29 10:05:25 crc kubenswrapper[4991]: I0929 10:05:25.257652 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj" event={"ID":"4f367761-a0c4-4bc8-8d44-86dc07e3d495","Type":"ContainerStarted","Data":"e4ff68fce936f7142f77b669ce7dbfc4e8c8b6ffd8432058857f2dfca27237fa"} Sep 29 10:05:25 crc kubenswrapper[4991]: I0929 10:05:25.275495 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj" podStartSLOduration=1.776787288 podStartE2EDuration="2.275455085s" podCreationTimestamp="2025-09-29 10:05:23 +0000 UTC" firstStartedPulling="2025-09-29 10:05:24.213407254 +0000 UTC m=+1660.069335272" lastFinishedPulling="2025-09-29 10:05:24.712075041 +0000 UTC m=+1660.568003069" observedRunningTime="2025-09-29 10:05:25.273177725 +0000 UTC m=+1661.129105753" watchObservedRunningTime="2025-09-29 10:05:25.275455085 +0000 UTC m=+1661.131383113" Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.087136 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.194871 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-config-data\") pod \"2b7115bc-655b-4f41-9983-ecd70758ac95\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.195159 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-combined-ca-bundle\") pod \"2b7115bc-655b-4f41-9983-ecd70758ac95\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.195218 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlcrx\" (UniqueName: \"kubernetes.io/projected/2b7115bc-655b-4f41-9983-ecd70758ac95-kube-api-access-qlcrx\") pod \"2b7115bc-655b-4f41-9983-ecd70758ac95\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.195261 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-public-tls-certs\") pod \"2b7115bc-655b-4f41-9983-ecd70758ac95\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.195340 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-scripts\") pod \"2b7115bc-655b-4f41-9983-ecd70758ac95\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") " 
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.195373 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-internal-tls-certs\") pod \"2b7115bc-655b-4f41-9983-ecd70758ac95\" (UID: \"2b7115bc-655b-4f41-9983-ecd70758ac95\") "
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.200137 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-scripts" (OuterVolumeSpecName: "scripts") pod "2b7115bc-655b-4f41-9983-ecd70758ac95" (UID: "2b7115bc-655b-4f41-9983-ecd70758ac95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.210278 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b7115bc-655b-4f41-9983-ecd70758ac95-kube-api-access-qlcrx" (OuterVolumeSpecName: "kube-api-access-qlcrx") pod "2b7115bc-655b-4f41-9983-ecd70758ac95" (UID: "2b7115bc-655b-4f41-9983-ecd70758ac95"). InnerVolumeSpecName "kube-api-access-qlcrx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.269126 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2b7115bc-655b-4f41-9983-ecd70758ac95" (UID: "2b7115bc-655b-4f41-9983-ecd70758ac95"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.278698 4991 generic.go:334] "Generic (PLEG): container finished" podID="2b7115bc-655b-4f41-9983-ecd70758ac95" containerID="28f1ab41b73e3e1810d8b527c0a72b4a0f23ac7c964aa4c3af09cff590d03d7c" exitCode=0
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.278729 4991 generic.go:334] "Generic (PLEG): container finished" podID="2b7115bc-655b-4f41-9983-ecd70758ac95" containerID="b2525436c51969e5d473e9c950d438364a2b87155956abcb5e414584f840f940" exitCode=0
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.278789 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.278784 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2b7115bc-655b-4f41-9983-ecd70758ac95","Type":"ContainerDied","Data":"28f1ab41b73e3e1810d8b527c0a72b4a0f23ac7c964aa4c3af09cff590d03d7c"}
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.278862 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2b7115bc-655b-4f41-9983-ecd70758ac95","Type":"ContainerDied","Data":"b2525436c51969e5d473e9c950d438364a2b87155956abcb5e414584f840f940"}
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.278884 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2b7115bc-655b-4f41-9983-ecd70758ac95","Type":"ContainerDied","Data":"69112efa34406f52a270a0bf866f73dc540717dc6c680ddb41aaef806b427ff8"}
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.278903 4991 scope.go:117] "RemoveContainer" containerID="28f1ab41b73e3e1810d8b527c0a72b4a0f23ac7c964aa4c3af09cff590d03d7c"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.284315 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2b7115bc-655b-4f41-9983-ecd70758ac95" (UID: "2b7115bc-655b-4f41-9983-ecd70758ac95"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.299829 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlcrx\" (UniqueName: \"kubernetes.io/projected/2b7115bc-655b-4f41-9983-ecd70758ac95-kube-api-access-qlcrx\") on node \"crc\" DevicePath \"\""
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.299856 4991 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-public-tls-certs\") on node \"crc\" DevicePath \"\""
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.299888 4991 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-scripts\") on node \"crc\" DevicePath \"\""
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.299898 4991 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.342189 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-config-data" (OuterVolumeSpecName: "config-data") pod "2b7115bc-655b-4f41-9983-ecd70758ac95" (UID: "2b7115bc-655b-4f41-9983-ecd70758ac95"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.357776 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b7115bc-655b-4f41-9983-ecd70758ac95" (UID: "2b7115bc-655b-4f41-9983-ecd70758ac95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.402633 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-config-data\") on node \"crc\" DevicePath \"\""
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.402663 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b7115bc-655b-4f41-9983-ecd70758ac95-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.423101 4991 scope.go:117] "RemoveContainer" containerID="b2525436c51969e5d473e9c950d438364a2b87155956abcb5e414584f840f940"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.446895 4991 scope.go:117] "RemoveContainer" containerID="01fc38757ef3bb394294a27b653922275521572494d731294fc27f990ad37ed8"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.468457 4991 scope.go:117] "RemoveContainer" containerID="78b50850a64d7c46c7f4c3190ca08039fb88c1dcebe1b6dad668b1d8bc658f2e"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.493582 4991 scope.go:117] "RemoveContainer" containerID="28f1ab41b73e3e1810d8b527c0a72b4a0f23ac7c964aa4c3af09cff590d03d7c"
Sep 29 10:05:26 crc kubenswrapper[4991]: E0929 10:05:26.494476 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28f1ab41b73e3e1810d8b527c0a72b4a0f23ac7c964aa4c3af09cff590d03d7c\": container with ID starting with 28f1ab41b73e3e1810d8b527c0a72b4a0f23ac7c964aa4c3af09cff590d03d7c not found: ID does not exist" containerID="28f1ab41b73e3e1810d8b527c0a72b4a0f23ac7c964aa4c3af09cff590d03d7c"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.494540 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28f1ab41b73e3e1810d8b527c0a72b4a0f23ac7c964aa4c3af09cff590d03d7c"} err="failed to get container status \"28f1ab41b73e3e1810d8b527c0a72b4a0f23ac7c964aa4c3af09cff590d03d7c\": rpc error: code = NotFound desc = could not find container \"28f1ab41b73e3e1810d8b527c0a72b4a0f23ac7c964aa4c3af09cff590d03d7c\": container with ID starting with 28f1ab41b73e3e1810d8b527c0a72b4a0f23ac7c964aa4c3af09cff590d03d7c not found: ID does not exist"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.494572 4991 scope.go:117] "RemoveContainer" containerID="b2525436c51969e5d473e9c950d438364a2b87155956abcb5e414584f840f940"
Sep 29 10:05:26 crc kubenswrapper[4991]: E0929 10:05:26.494899 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2525436c51969e5d473e9c950d438364a2b87155956abcb5e414584f840f940\": container with ID starting with b2525436c51969e5d473e9c950d438364a2b87155956abcb5e414584f840f940 not found: ID does not exist" containerID="b2525436c51969e5d473e9c950d438364a2b87155956abcb5e414584f840f940"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.494924 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2525436c51969e5d473e9c950d438364a2b87155956abcb5e414584f840f940"} err="failed to get container status \"b2525436c51969e5d473e9c950d438364a2b87155956abcb5e414584f840f940\": rpc error: code = NotFound desc = could not find container \"b2525436c51969e5d473e9c950d438364a2b87155956abcb5e414584f840f940\": container with ID starting with b2525436c51969e5d473e9c950d438364a2b87155956abcb5e414584f840f940 not found: ID does not exist"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.494939 4991 scope.go:117] "RemoveContainer" containerID="01fc38757ef3bb394294a27b653922275521572494d731294fc27f990ad37ed8"
Sep 29 10:05:26 crc kubenswrapper[4991]: E0929 10:05:26.495218 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01fc38757ef3bb394294a27b653922275521572494d731294fc27f990ad37ed8\": container with ID starting with 01fc38757ef3bb394294a27b653922275521572494d731294fc27f990ad37ed8 not found: ID does not exist" containerID="01fc38757ef3bb394294a27b653922275521572494d731294fc27f990ad37ed8"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.495251 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01fc38757ef3bb394294a27b653922275521572494d731294fc27f990ad37ed8"} err="failed to get container status \"01fc38757ef3bb394294a27b653922275521572494d731294fc27f990ad37ed8\": rpc error: code = NotFound desc = could not find container \"01fc38757ef3bb394294a27b653922275521572494d731294fc27f990ad37ed8\": container with ID starting with 01fc38757ef3bb394294a27b653922275521572494d731294fc27f990ad37ed8 not found: ID does not exist"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.495274 4991 scope.go:117] "RemoveContainer" containerID="78b50850a64d7c46c7f4c3190ca08039fb88c1dcebe1b6dad668b1d8bc658f2e"
Sep 29 10:05:26 crc kubenswrapper[4991]: E0929 10:05:26.495541 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78b50850a64d7c46c7f4c3190ca08039fb88c1dcebe1b6dad668b1d8bc658f2e\": container with ID starting with 78b50850a64d7c46c7f4c3190ca08039fb88c1dcebe1b6dad668b1d8bc658f2e not found: ID does not exist" containerID="78b50850a64d7c46c7f4c3190ca08039fb88c1dcebe1b6dad668b1d8bc658f2e"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.495559 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78b50850a64d7c46c7f4c3190ca08039fb88c1dcebe1b6dad668b1d8bc658f2e"} err="failed to get container status \"78b50850a64d7c46c7f4c3190ca08039fb88c1dcebe1b6dad668b1d8bc658f2e\": rpc error: code = NotFound desc = could not find container \"78b50850a64d7c46c7f4c3190ca08039fb88c1dcebe1b6dad668b1d8bc658f2e\": container with ID starting with 78b50850a64d7c46c7f4c3190ca08039fb88c1dcebe1b6dad668b1d8bc658f2e not found: ID does not exist"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.495570 4991 scope.go:117] "RemoveContainer" containerID="28f1ab41b73e3e1810d8b527c0a72b4a0f23ac7c964aa4c3af09cff590d03d7c"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.495774 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28f1ab41b73e3e1810d8b527c0a72b4a0f23ac7c964aa4c3af09cff590d03d7c"} err="failed to get container status \"28f1ab41b73e3e1810d8b527c0a72b4a0f23ac7c964aa4c3af09cff590d03d7c\": rpc error: code = NotFound desc = could not find container \"28f1ab41b73e3e1810d8b527c0a72b4a0f23ac7c964aa4c3af09cff590d03d7c\": container with ID starting with 28f1ab41b73e3e1810d8b527c0a72b4a0f23ac7c964aa4c3af09cff590d03d7c not found: ID does not exist"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.495796 4991 scope.go:117] "RemoveContainer" containerID="b2525436c51969e5d473e9c950d438364a2b87155956abcb5e414584f840f940"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.496121 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2525436c51969e5d473e9c950d438364a2b87155956abcb5e414584f840f940"} err="failed to get container status \"b2525436c51969e5d473e9c950d438364a2b87155956abcb5e414584f840f940\": rpc error: code = NotFound desc = could not find container \"b2525436c51969e5d473e9c950d438364a2b87155956abcb5e414584f840f940\": container with ID starting with b2525436c51969e5d473e9c950d438364a2b87155956abcb5e414584f840f940 not found: ID does not exist"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.496141 4991 scope.go:117] "RemoveContainer" containerID="01fc38757ef3bb394294a27b653922275521572494d731294fc27f990ad37ed8"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.496509 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01fc38757ef3bb394294a27b653922275521572494d731294fc27f990ad37ed8"} err="failed to get container status \"01fc38757ef3bb394294a27b653922275521572494d731294fc27f990ad37ed8\": rpc error: code = NotFound desc = could not find container \"01fc38757ef3bb394294a27b653922275521572494d731294fc27f990ad37ed8\": container with ID starting with 01fc38757ef3bb394294a27b653922275521572494d731294fc27f990ad37ed8 not found: ID does not exist"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.496528 4991 scope.go:117] "RemoveContainer" containerID="78b50850a64d7c46c7f4c3190ca08039fb88c1dcebe1b6dad668b1d8bc658f2e"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.496766 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78b50850a64d7c46c7f4c3190ca08039fb88c1dcebe1b6dad668b1d8bc658f2e"} err="failed to get container status \"78b50850a64d7c46c7f4c3190ca08039fb88c1dcebe1b6dad668b1d8bc658f2e\": rpc error: code = NotFound desc = could not find container \"78b50850a64d7c46c7f4c3190ca08039fb88c1dcebe1b6dad668b1d8bc658f2e\": container with ID starting with 78b50850a64d7c46c7f4c3190ca08039fb88c1dcebe1b6dad668b1d8bc658f2e not found: ID does not exist"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.611839 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.625003 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"]
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.642003 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Sep 29 10:05:26 crc kubenswrapper[4991]: E0929 10:05:26.643091 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7115bc-655b-4f41-9983-ecd70758ac95" containerName="aodh-evaluator"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.643275 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7115bc-655b-4f41-9983-ecd70758ac95" containerName="aodh-evaluator"
Sep 29 10:05:26 crc kubenswrapper[4991]: E0929 10:05:26.643440 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7115bc-655b-4f41-9983-ecd70758ac95" containerName="aodh-api"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.643663 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7115bc-655b-4f41-9983-ecd70758ac95" containerName="aodh-api"
Sep 29 10:05:26 crc kubenswrapper[4991]: E0929 10:05:26.643887 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7115bc-655b-4f41-9983-ecd70758ac95" containerName="aodh-listener"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.644000 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7115bc-655b-4f41-9983-ecd70758ac95" containerName="aodh-listener"
Sep 29 10:05:26 crc kubenswrapper[4991]: E0929 10:05:26.644220 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7115bc-655b-4f41-9983-ecd70758ac95" containerName="aodh-notifier"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.644441 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7115bc-655b-4f41-9983-ecd70758ac95" containerName="aodh-notifier"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.645160 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b7115bc-655b-4f41-9983-ecd70758ac95" containerName="aodh-listener"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.645320 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b7115bc-655b-4f41-9983-ecd70758ac95" containerName="aodh-notifier"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.645475 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b7115bc-655b-4f41-9983-ecd70758ac95" containerName="aodh-api"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.645603 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b7115bc-655b-4f41-9983-ecd70758ac95" containerName="aodh-evaluator"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.682665 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.682783 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.685409 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.685523 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.685623 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-qxcgx"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.685622 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.686374 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.708890 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e7c2905-fcf9-4e39-9610-63641cffb33f-internal-tls-certs\") pod \"aodh-0\" (UID: \"4e7c2905-fcf9-4e39-9610-63641cffb33f\") " pod="openstack/aodh-0"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.709037 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e7c2905-fcf9-4e39-9610-63641cffb33f-combined-ca-bundle\") pod \"aodh-0\" (UID: \"4e7c2905-fcf9-4e39-9610-63641cffb33f\") " pod="openstack/aodh-0"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.709226 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e7c2905-fcf9-4e39-9610-63641cffb33f-public-tls-certs\") pod \"aodh-0\" (UID: \"4e7c2905-fcf9-4e39-9610-63641cffb33f\") " pod="openstack/aodh-0"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.709258 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e7c2905-fcf9-4e39-9610-63641cffb33f-config-data\") pod \"aodh-0\" (UID: \"4e7c2905-fcf9-4e39-9610-63641cffb33f\") " pod="openstack/aodh-0"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.709334 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e7c2905-fcf9-4e39-9610-63641cffb33f-scripts\") pod \"aodh-0\" (UID: \"4e7c2905-fcf9-4e39-9610-63641cffb33f\") " pod="openstack/aodh-0"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.709377 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkprc\" (UniqueName: \"kubernetes.io/projected/4e7c2905-fcf9-4e39-9610-63641cffb33f-kube-api-access-gkprc\") pod \"aodh-0\" (UID: \"4e7c2905-fcf9-4e39-9610-63641cffb33f\") " pod="openstack/aodh-0"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.811310 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e7c2905-fcf9-4e39-9610-63641cffb33f-public-tls-certs\") pod \"aodh-0\" (UID: \"4e7c2905-fcf9-4e39-9610-63641cffb33f\") " pod="openstack/aodh-0"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.811411 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e7c2905-fcf9-4e39-9610-63641cffb33f-config-data\") pod \"aodh-0\" (UID: \"4e7c2905-fcf9-4e39-9610-63641cffb33f\") " pod="openstack/aodh-0"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.811493 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e7c2905-fcf9-4e39-9610-63641cffb33f-scripts\") pod \"aodh-0\" (UID: \"4e7c2905-fcf9-4e39-9610-63641cffb33f\") " pod="openstack/aodh-0"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.811551 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkprc\" (UniqueName: \"kubernetes.io/projected/4e7c2905-fcf9-4e39-9610-63641cffb33f-kube-api-access-gkprc\") pod \"aodh-0\" (UID: \"4e7c2905-fcf9-4e39-9610-63641cffb33f\") " pod="openstack/aodh-0"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.811647 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e7c2905-fcf9-4e39-9610-63641cffb33f-internal-tls-certs\") pod \"aodh-0\" (UID: \"4e7c2905-fcf9-4e39-9610-63641cffb33f\") " pod="openstack/aodh-0"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.811712 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e7c2905-fcf9-4e39-9610-63641cffb33f-combined-ca-bundle\") pod \"aodh-0\" (UID: \"4e7c2905-fcf9-4e39-9610-63641cffb33f\") " pod="openstack/aodh-0"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.816664 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e7c2905-fcf9-4e39-9610-63641cffb33f-combined-ca-bundle\") pod \"aodh-0\" (UID: \"4e7c2905-fcf9-4e39-9610-63641cffb33f\") " pod="openstack/aodh-0"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.817309 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e7c2905-fcf9-4e39-9610-63641cffb33f-public-tls-certs\") pod \"aodh-0\" (UID: \"4e7c2905-fcf9-4e39-9610-63641cffb33f\") " pod="openstack/aodh-0"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.818370 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e7c2905-fcf9-4e39-9610-63641cffb33f-scripts\") pod \"aodh-0\" (UID: \"4e7c2905-fcf9-4e39-9610-63641cffb33f\") " pod="openstack/aodh-0"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.818641 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e7c2905-fcf9-4e39-9610-63641cffb33f-internal-tls-certs\") pod \"aodh-0\" (UID: \"4e7c2905-fcf9-4e39-9610-63641cffb33f\") " pod="openstack/aodh-0"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.818899 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e7c2905-fcf9-4e39-9610-63641cffb33f-config-data\") pod \"aodh-0\" (UID: \"4e7c2905-fcf9-4e39-9610-63641cffb33f\") " pod="openstack/aodh-0"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.828936 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkprc\" (UniqueName: \"kubernetes.io/projected/4e7c2905-fcf9-4e39-9610-63641cffb33f-kube-api-access-gkprc\") pod \"aodh-0\" (UID: \"4e7c2905-fcf9-4e39-9610-63641cffb33f\") " pod="openstack/aodh-0"
Sep 29 10:05:26 crc kubenswrapper[4991]: I0929 10:05:26.941437 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b7115bc-655b-4f41-9983-ecd70758ac95" path="/var/lib/kubelet/pods/2b7115bc-655b-4f41-9983-ecd70758ac95/volumes"
Sep 29 10:05:27 crc kubenswrapper[4991]: I0929 10:05:27.006696 4991 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/aodh-0" Sep 29 10:05:27 crc kubenswrapper[4991]: I0929 10:05:27.491406 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Sep 29 10:05:27 crc kubenswrapper[4991]: W0929 10:05:27.494390 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e7c2905_fcf9_4e39_9610_63641cffb33f.slice/crio-5eb06a74c2e4e839ed78d37b5913d6d443f61dcb4a55aab69540f7e9e08b1884 WatchSource:0}: Error finding container 5eb06a74c2e4e839ed78d37b5913d6d443f61dcb4a55aab69540f7e9e08b1884: Status 404 returned error can't find the container with id 5eb06a74c2e4e839ed78d37b5913d6d443f61dcb4a55aab69540f7e9e08b1884 Sep 29 10:05:28 crc kubenswrapper[4991]: I0929 10:05:28.317116 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4e7c2905-fcf9-4e39-9610-63641cffb33f","Type":"ContainerStarted","Data":"46d9a7c8491fc64119077a5ffabb75603141e2a32e77af465e94b1be3edf1cb3"} Sep 29 10:05:28 crc kubenswrapper[4991]: I0929 10:05:28.317591 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4e7c2905-fcf9-4e39-9610-63641cffb33f","Type":"ContainerStarted","Data":"5eb06a74c2e4e839ed78d37b5913d6d443f61dcb4a55aab69540f7e9e08b1884"} Sep 29 10:05:29 crc kubenswrapper[4991]: I0929 10:05:29.332877 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4e7c2905-fcf9-4e39-9610-63641cffb33f","Type":"ContainerStarted","Data":"ff44f670e4134c4d7c5b9fc18409d3506a1fef3189ba0ab86844cac8db109b07"} Sep 29 10:05:30 crc kubenswrapper[4991]: I0929 10:05:30.348180 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4e7c2905-fcf9-4e39-9610-63641cffb33f","Type":"ContainerStarted","Data":"560cd330fddc5618d9566a7b24792da178e630973b9fa2ea49cbadc921b6bd45"} Sep 29 10:05:31 crc kubenswrapper[4991]: I0929 10:05:31.361680 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4e7c2905-fcf9-4e39-9610-63641cffb33f","Type":"ContainerStarted","Data":"418f989c73f2079c67e8c5290f8315e2304c7cb77ce5267d32dd77ff5deaf7dd"} Sep 29 10:05:31 crc kubenswrapper[4991]: I0929 10:05:31.393817 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.9056407960000001 podStartE2EDuration="5.393793712s" podCreationTimestamp="2025-09-29 10:05:26 +0000 UTC" firstStartedPulling="2025-09-29 10:05:27.497387494 +0000 UTC m=+1663.353315542" lastFinishedPulling="2025-09-29 10:05:30.98554043 +0000 UTC m=+1666.841468458" observedRunningTime="2025-09-29 10:05:31.381164321 +0000 UTC m=+1667.237092349" watchObservedRunningTime="2025-09-29 10:05:31.393793712 +0000 UTC m=+1667.249721730" Sep 29 10:05:37 crc kubenswrapper[4991]: I0929 10:05:37.946826 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:05:37 crc kubenswrapper[4991]: I0929 10:05:37.947277 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Sep 29 10:05:37 crc kubenswrapper[4991]: I0929 10:05:37.947317 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 10:05:38 crc kubenswrapper[4991]: I0929 10:05:38.450392 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 10:05:38 crc kubenswrapper[4991]: I0929 10:05:38.450468 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357" gracePeriod=600 Sep 29 10:05:38 crc kubenswrapper[4991]: E0929 10:05:38.578633 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:05:39 crc kubenswrapper[4991]: I0929 10:05:39.463857 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357" exitCode=0 Sep 29 10:05:39 crc kubenswrapper[4991]: I0929 10:05:39.463932 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357"} Sep 29 10:05:39 crc kubenswrapper[4991]: I0929 10:05:39.464300 4991 scope.go:117] "RemoveContainer" containerID="3ad3178b322df7438724bf1bb291e672d09c64bed7c25651a01ec7bd03f3b6c1" Sep 29 10:05:39 crc kubenswrapper[4991]: I0929 10:05:39.465171 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357" Sep 29 10:05:39 crc kubenswrapper[4991]: E0929 10:05:39.465553 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:05:50 crc kubenswrapper[4991]: I0929 10:05:50.926039 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357" Sep 29 10:05:50 crc kubenswrapper[4991]: E0929 10:05:50.926815 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:06:03 crc kubenswrapper[4991]: I0929 10:06:03.927178 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357" Sep 29 10:06:03 crc kubenswrapper[4991]: E0929 10:06:03.928231 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:06:18 crc kubenswrapper[4991]: I0929 10:06:18.926671 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357" Sep 29 10:06:18 crc kubenswrapper[4991]: E0929 10:06:18.928913 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:06:31 crc kubenswrapper[4991]: I0929 10:06:31.926593 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357" Sep 29 10:06:31 crc kubenswrapper[4991]: E0929 10:06:31.927969 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:06:46 crc kubenswrapper[4991]: I0929 10:06:46.926892 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357" Sep 29 10:06:46 crc kubenswrapper[4991]: E0929 10:06:46.927886 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:06:58 crc kubenswrapper[4991]: I0929 10:06:58.930475 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357" Sep 29 10:06:58 crc kubenswrapper[4991]: E0929 10:06:58.932026 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:07:09 crc kubenswrapper[4991]: I0929 10:07:09.926187 4991 
scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357" Sep 29 10:07:09 crc kubenswrapper[4991]: E0929 10:07:09.926982 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:07:19 crc kubenswrapper[4991]: I0929 10:07:19.766639 4991 scope.go:117] "RemoveContainer" containerID="42e075ebb87e6d115574389960a574bb91fb31b0550a6cfe7a4a72cbdf9c5b16" Sep 29 10:07:19 crc kubenswrapper[4991]: I0929 10:07:19.795473 4991 scope.go:117] "RemoveContainer" containerID="5c038f7b874a87d51e81eca41f5546341fde2f35d464bbdf24c007ec0d0efd47" Sep 29 10:07:19 crc kubenswrapper[4991]: I0929 10:07:19.820554 4991 scope.go:117] "RemoveContainer" containerID="6dd5541deba91ab978498a2206189db23d39710723ef5a8456562f447c089073" Sep 29 10:07:19 crc kubenswrapper[4991]: I0929 10:07:19.864506 4991 scope.go:117] "RemoveContainer" containerID="70a8963e42e3cd2756cf5c80c7f260f36ec2de2d15c4c9da904c58184129b4f5" Sep 29 10:07:24 crc kubenswrapper[4991]: I0929 10:07:24.926480 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357" Sep 29 10:07:24 crc kubenswrapper[4991]: E0929 10:07:24.927456 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:07:36 crc kubenswrapper[4991]: I0929 10:07:36.927347 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357" Sep 29 10:07:36 crc kubenswrapper[4991]: E0929 10:07:36.928165 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:07:47 crc kubenswrapper[4991]: I0929 10:07:47.058176 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-d9p8b"] Sep 29 10:07:47 crc kubenswrapper[4991]: I0929 10:07:47.075457 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-d9p8b"] Sep 29 10:07:47 crc kubenswrapper[4991]: I0929 10:07:47.089998 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-bvdvb"] Sep 29 10:07:47 crc kubenswrapper[4991]: I0929 10:07:47.103534 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-bvdvb"] Sep 29 10:07:48 crc kubenswrapper[4991]: I0929 10:07:48.035211 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-hbj7w"] Sep 29 10:07:48 crc kubenswrapper[4991]: I0929 
10:07:48.048253 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-hbj7w"] Sep 29 10:07:48 crc kubenswrapper[4991]: I0929 10:07:48.926847 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357" Sep 29 10:07:48 crc kubenswrapper[4991]: E0929 10:07:48.927221 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:07:48 crc kubenswrapper[4991]: I0929 10:07:48.943903 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03fc3061-8749-42bc-827d-f8bdb437fe58" path="/var/lib/kubelet/pods/03fc3061-8749-42bc-827d-f8bdb437fe58/volumes" Sep 29 10:07:48 crc kubenswrapper[4991]: I0929 10:07:48.945591 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40c49f55-8da0-4a5b-8a85-78f2665dddf5" path="/var/lib/kubelet/pods/40c49f55-8da0-4a5b-8a85-78f2665dddf5/volumes" Sep 29 10:07:48 crc kubenswrapper[4991]: I0929 10:07:48.947455 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71eeb128-047f-4efb-b2ef-6b2e69c1f23a" path="/var/lib/kubelet/pods/71eeb128-047f-4efb-b2ef-6b2e69c1f23a/volumes" Sep 29 10:07:54 crc kubenswrapper[4991]: I0929 10:07:54.029842 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-z76bp"] Sep 29 10:07:54 crc kubenswrapper[4991]: I0929 10:07:54.042163 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-z76bp"] Sep 29 10:07:54 crc kubenswrapper[4991]: I0929 10:07:54.945463 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6df4a675-bdfa-4eb3-adc8-5f318c403e1c" path="/var/lib/kubelet/pods/6df4a675-bdfa-4eb3-adc8-5f318c403e1c/volumes" Sep 29 10:07:56 crc kubenswrapper[4991]: I0929 10:07:56.045749 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-111f-account-create-njlb9"] Sep 29 10:07:56 crc kubenswrapper[4991]: I0929 10:07:56.057050 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-111f-account-create-njlb9"] Sep 29 10:07:56 crc kubenswrapper[4991]: I0929 10:07:56.939921 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eca13a6c-59fc-42e9-b16d-473eaedb72b3" path="/var/lib/kubelet/pods/eca13a6c-59fc-42e9-b16d-473eaedb72b3/volumes" Sep 29 10:08:00 crc kubenswrapper[4991]: I0929 10:08:00.927258 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357" Sep 29 10:08:00 crc kubenswrapper[4991]: E0929 10:08:00.927838 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:08:03 crc kubenswrapper[4991]: I0929 10:08:03.088030 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-fnx9p"] Sep 29 10:08:03 
crc kubenswrapper[4991]: I0929 10:08:03.107667 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-28jfp"] Sep 29 10:08:03 crc kubenswrapper[4991]: I0929 10:08:03.126097 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-fk2js"] Sep 29 10:08:03 crc kubenswrapper[4991]: I0929 10:08:03.137767 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-szgsx"] Sep 29 10:08:03 crc kubenswrapper[4991]: I0929 10:08:03.150116 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-gwqgd"] Sep 29 10:08:03 crc kubenswrapper[4991]: I0929 10:08:03.161543 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-fnx9p"] Sep 29 10:08:03 crc kubenswrapper[4991]: I0929 10:08:03.173926 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-gwqgd"] Sep 29 10:08:03 crc kubenswrapper[4991]: I0929 10:08:03.184916 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-28jfp"] Sep 29 10:08:03 crc kubenswrapper[4991]: I0929 10:08:03.197626 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-szgsx"] Sep 29 10:08:03 crc kubenswrapper[4991]: I0929 10:08:03.211237 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-fk2js"] Sep 29 10:08:04 crc kubenswrapper[4991]: I0929 10:08:04.944569 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="532938d6-313b-40f5-805c-a28638a8dd57" path="/var/lib/kubelet/pods/532938d6-313b-40f5-805c-a28638a8dd57/volumes" Sep 29 10:08:04 crc kubenswrapper[4991]: I0929 10:08:04.945521 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f450a4b-547e-4bb1-9548-9223070d3006" path="/var/lib/kubelet/pods/8f450a4b-547e-4bb1-9548-9223070d3006/volumes" Sep 29 10:08:04 crc kubenswrapper[4991]: I0929 10:08:04.946271 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93ae0933-0cd5-4927-9bdd-d24f1c9055d5" path="/var/lib/kubelet/pods/93ae0933-0cd5-4927-9bdd-d24f1c9055d5/volumes" Sep 29 10:08:04 crc kubenswrapper[4991]: I0929 10:08:04.946859 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6ef1f45-79ae-4826-910e-11aa5d94faaa" path="/var/lib/kubelet/pods/c6ef1f45-79ae-4826-910e-11aa5d94faaa/volumes" Sep 29 10:08:04 crc kubenswrapper[4991]: I0929 10:08:04.948061 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebea7d48-54c1-4c19-aaa0-139a6d5d6b47" path="/var/lib/kubelet/pods/ebea7d48-54c1-4c19-aaa0-139a6d5d6b47/volumes" Sep 29 10:08:05 crc kubenswrapper[4991]: I0929 10:08:05.078717 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-b2dd-account-create-pbwxr"] Sep 29 10:08:05 crc kubenswrapper[4991]: I0929 10:08:05.089205 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-23e2-account-create-2lv8j"] Sep 29 10:08:05 crc kubenswrapper[4991]: I0929 10:08:05.100437 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-74ef-account-create-kwf9t"] Sep 29 10:08:05 crc kubenswrapper[4991]: I0929 10:08:05.112582 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-b2dd-account-create-pbwxr"] Sep 29 10:08:05 crc kubenswrapper[4991]: I0929 10:08:05.123891 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-23e2-account-create-2lv8j"] Sep 29 10:08:05 crc kubenswrapper[4991]: I0929 10:08:05.135561 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-74ef-account-create-kwf9t"] Sep 29 10:08:06 crc kubenswrapper[4991]: I0929 10:08:06.941270 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="743f2d68-816c-4cc9-8d79-f2296fa2b7f1" path="/var/lib/kubelet/pods/743f2d68-816c-4cc9-8d79-f2296fa2b7f1/volumes" Sep 29 10:08:06 crc kubenswrapper[4991]: I0929 10:08:06.942339 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e8fef1c-2287-4588-9ffe-09515c193ffc" path="/var/lib/kubelet/pods/9e8fef1c-2287-4588-9ffe-09515c193ffc/volumes" Sep 29 10:08:06 crc kubenswrapper[4991]: I0929 10:08:06.943641 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8370d2d-c4e2-4a53-b2b7-a0fea31e4537" path="/var/lib/kubelet/pods/c8370d2d-c4e2-4a53-b2b7-a0fea31e4537/volumes" Sep 29 10:08:15 crc kubenswrapper[4991]: I0929 10:08:15.926927 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357" Sep 29 10:08:15 crc kubenswrapper[4991]: E0929 10:08:15.928467 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:08:20 crc kubenswrapper[4991]: I0929 10:08:20.002721 4991 scope.go:117] "RemoveContainer" containerID="62478d81ef8a313b51d58a7f1d0a152619e60def8a7ef70d4bce76356659983b" Sep 29 10:08:20 crc kubenswrapper[4991]: I0929 10:08:20.032481 4991 scope.go:117] "RemoveContainer" containerID="498322194bf59848464bc2266276127eb6affbd5d5a17abbb4a40d8438694d3d" Sep 29 10:08:20 crc kubenswrapper[4991]: I0929 10:08:20.107580 4991 scope.go:117] "RemoveContainer" containerID="3d9965d1e377b0e776bf2947cbfb442890f73429b5544f193c9cb4861d75fba8" Sep 29 10:08:20 crc kubenswrapper[4991]: I0929 10:08:20.183233 4991 scope.go:117] "RemoveContainer" containerID="62b21760df5346b1dced2b3ef66007b19050b635ba119143e09f2b6113537ea1" Sep 29 10:08:20 crc kubenswrapper[4991]: I0929 10:08:20.238179 4991 scope.go:117] "RemoveContainer" containerID="aa794312dc481393ee7fb8e3fd8f2bddf030c8fb6e8d622c8a62596df0ac13e1" Sep 29 10:08:20 crc kubenswrapper[4991]: I0929 10:08:20.293345 4991 scope.go:117] "RemoveContainer" containerID="f93d654b987e4c68807e3a3f0c2640cf6f05ffc32a2feb36dfee6adace67ca95" Sep 29 10:08:20 crc kubenswrapper[4991]: I0929 10:08:20.366896 4991 scope.go:117] "RemoveContainer" containerID="35a17484b805a74cc582ae3719db59382d4d124307e40e00b961d1b4d67fc0bf" Sep 29 10:08:20 crc kubenswrapper[4991]: I0929 10:08:20.400553 4991 scope.go:117] "RemoveContainer" containerID="8b4ca2db53e3860779be768a0e5561a21a9ae21ce452ca34b73fc182038a5f4b" Sep 29 10:08:20 crc kubenswrapper[4991]: I0929 10:08:20.434070 4991 scope.go:117] "RemoveContainer" containerID="1d4196330c8eec3b34706e9d3a7ed1b2085e90065193d9c66a1ee376d1abc98f" Sep 29 10:08:20 crc kubenswrapper[4991]: I0929 10:08:20.454621 4991 scope.go:117] "RemoveContainer" containerID="3115eeb73cd53888fca857d5084d4680aed08cacf5c54762d05539d59cc7537a" Sep 29 10:08:20 crc kubenswrapper[4991]: I0929 10:08:20.475628 4991 scope.go:117] "RemoveContainer" 
containerID="73de9b7e949bec32028473461c7ebae41b00f5e32f3856afc89eb7684243ef89" Sep 29 10:08:20 crc kubenswrapper[4991]: I0929 10:08:20.498597 4991 scope.go:117] "RemoveContainer" containerID="339337ea580e805337b0cd00330b8cd881d3bde7b012dcce4bd03795053de12a" Sep 29 10:08:20 crc kubenswrapper[4991]: I0929 10:08:20.521418 4991 scope.go:117] "RemoveContainer" containerID="98eb2231636c9b99a852c481f6fc94be3ecff34bdc907d11435cc6ff51b0c778" Sep 29 10:08:20 crc kubenswrapper[4991]: I0929 10:08:20.543156 4991 scope.go:117] "RemoveContainer" containerID="54fd573d566fc8964fc38692ccd0ddf99663d601664f64f37d0dcc2686164cc5" Sep 29 10:08:20 crc kubenswrapper[4991]: I0929 10:08:20.969645 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9lps9"] Sep 29 10:08:20 crc kubenswrapper[4991]: I0929 10:08:20.973501 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9lps9" Sep 29 10:08:20 crc kubenswrapper[4991]: I0929 10:08:20.980712 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9lps9"] Sep 29 10:08:21 crc kubenswrapper[4991]: I0929 10:08:21.104996 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c888338e-4968-4ad1-bc13-72e0057c1d39-utilities\") pod \"redhat-marketplace-9lps9\" (UID: \"c888338e-4968-4ad1-bc13-72e0057c1d39\") " pod="openshift-marketplace/redhat-marketplace-9lps9" Sep 29 10:08:21 crc kubenswrapper[4991]: I0929 10:08:21.105097 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c888338e-4968-4ad1-bc13-72e0057c1d39-catalog-content\") pod \"redhat-marketplace-9lps9\" (UID: \"c888338e-4968-4ad1-bc13-72e0057c1d39\") " pod="openshift-marketplace/redhat-marketplace-9lps9" Sep 29 10:08:21 crc kubenswrapper[4991]: I0929 10:08:21.105189 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qkcd\" (UniqueName: \"kubernetes.io/projected/c888338e-4968-4ad1-bc13-72e0057c1d39-kube-api-access-8qkcd\") pod \"redhat-marketplace-9lps9\" (UID: \"c888338e-4968-4ad1-bc13-72e0057c1d39\") " pod="openshift-marketplace/redhat-marketplace-9lps9" Sep 29 10:08:21 crc kubenswrapper[4991]: I0929 10:08:21.207281 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c888338e-4968-4ad1-bc13-72e0057c1d39-catalog-content\") pod \"redhat-marketplace-9lps9\" (UID: \"c888338e-4968-4ad1-bc13-72e0057c1d39\") " pod="openshift-marketplace/redhat-marketplace-9lps9" Sep 29 10:08:21 crc kubenswrapper[4991]: I0929 10:08:21.207411 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qkcd\" (UniqueName: \"kubernetes.io/projected/c888338e-4968-4ad1-bc13-72e0057c1d39-kube-api-access-8qkcd\") pod \"redhat-marketplace-9lps9\" (UID: \"c888338e-4968-4ad1-bc13-72e0057c1d39\") " pod="openshift-marketplace/redhat-marketplace-9lps9" Sep 29 10:08:21 crc kubenswrapper[4991]: I0929 10:08:21.207574 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c888338e-4968-4ad1-bc13-72e0057c1d39-utilities\") pod \"redhat-marketplace-9lps9\" (UID: \"c888338e-4968-4ad1-bc13-72e0057c1d39\") " 
pod="openshift-marketplace/redhat-marketplace-9lps9" Sep 29 10:08:21 crc kubenswrapper[4991]: I0929 10:08:21.207855 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c888338e-4968-4ad1-bc13-72e0057c1d39-catalog-content\") pod \"redhat-marketplace-9lps9\" (UID: \"c888338e-4968-4ad1-bc13-72e0057c1d39\") " pod="openshift-marketplace/redhat-marketplace-9lps9" Sep 29 10:08:21 crc kubenswrapper[4991]: I0929 10:08:21.208189 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c888338e-4968-4ad1-bc13-72e0057c1d39-utilities\") pod \"redhat-marketplace-9lps9\" (UID: \"c888338e-4968-4ad1-bc13-72e0057c1d39\") " pod="openshift-marketplace/redhat-marketplace-9lps9" Sep 29 10:08:21 crc kubenswrapper[4991]: I0929 10:08:21.242374 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qkcd\" (UniqueName: \"kubernetes.io/projected/c888338e-4968-4ad1-bc13-72e0057c1d39-kube-api-access-8qkcd\") pod \"redhat-marketplace-9lps9\" (UID: \"c888338e-4968-4ad1-bc13-72e0057c1d39\") " pod="openshift-marketplace/redhat-marketplace-9lps9" Sep 29 10:08:21 crc kubenswrapper[4991]: I0929 10:08:21.305548 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9lps9" Sep 29 10:08:21 crc kubenswrapper[4991]: I0929 10:08:21.837647 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9lps9"] Sep 29 10:08:22 crc kubenswrapper[4991]: I0929 10:08:22.428253 4991 generic.go:334] "Generic (PLEG): container finished" podID="c888338e-4968-4ad1-bc13-72e0057c1d39" containerID="c03699775dcccaf49a85465ef773bb85577612212716f1359a05e363f14ec743" exitCode=0 Sep 29 10:08:22 crc kubenswrapper[4991]: I0929 10:08:22.428350 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lps9" event={"ID":"c888338e-4968-4ad1-bc13-72e0057c1d39","Type":"ContainerDied","Data":"c03699775dcccaf49a85465ef773bb85577612212716f1359a05e363f14ec743"} Sep 29 10:08:22 crc kubenswrapper[4991]: I0929 10:08:22.428442 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lps9" event={"ID":"c888338e-4968-4ad1-bc13-72e0057c1d39","Type":"ContainerStarted","Data":"30a7ca687303e5eaa8ee216fa91240ae2ffb3715d5fc74e4ed25a5a197ae3146"} Sep 29 10:08:22 crc kubenswrapper[4991]: I0929 10:08:22.430619 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 29 10:08:24 crc kubenswrapper[4991]: I0929 10:08:24.456941 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lps9" event={"ID":"c888338e-4968-4ad1-bc13-72e0057c1d39","Type":"ContainerStarted","Data":"7311cd1eb7db63ab7f6dfc3bda83a7ebfc9243b284a6123b77deed326d79bc62"} Sep 29 10:08:25 crc kubenswrapper[4991]: I0929 10:08:25.472831 4991 generic.go:334] "Generic (PLEG): container finished" podID="c888338e-4968-4ad1-bc13-72e0057c1d39" containerID="7311cd1eb7db63ab7f6dfc3bda83a7ebfc9243b284a6123b77deed326d79bc62" exitCode=0 Sep 29 10:08:25 crc kubenswrapper[4991]: I0929 10:08:25.472892 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lps9" event={"ID":"c888338e-4968-4ad1-bc13-72e0057c1d39","Type":"ContainerDied","Data":"7311cd1eb7db63ab7f6dfc3bda83a7ebfc9243b284a6123b77deed326d79bc62"} Sep 29 
10:08:26 crc kubenswrapper[4991]: I0929 10:08:26.496401 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lps9" event={"ID":"c888338e-4968-4ad1-bc13-72e0057c1d39","Type":"ContainerStarted","Data":"5f1487018f52914aaedf078e84015e996f0be2417f5fbbd0f84afd231b23f0bc"} Sep 29 10:08:26 crc kubenswrapper[4991]: I0929 10:08:26.529804 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9lps9" podStartSLOduration=2.93481615 podStartE2EDuration="6.529782289s" podCreationTimestamp="2025-09-29 10:08:20 +0000 UTC" firstStartedPulling="2025-09-29 10:08:22.430325531 +0000 UTC m=+1838.286253559" lastFinishedPulling="2025-09-29 10:08:26.02529167 +0000 UTC m=+1841.881219698" observedRunningTime="2025-09-29 10:08:26.514308123 +0000 UTC m=+1842.370236151" watchObservedRunningTime="2025-09-29 10:08:26.529782289 +0000 UTC m=+1842.385710317" Sep 29 10:08:27 crc kubenswrapper[4991]: I0929 10:08:27.038395 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-qkb72"] Sep 29 10:08:27 crc kubenswrapper[4991]: I0929 10:08:27.052611 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-qkb72"] Sep 29 10:08:28 crc kubenswrapper[4991]: I0929 10:08:28.940801 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28170ca7-9af8-4fdf-a37d-844dba824147" path="/var/lib/kubelet/pods/28170ca7-9af8-4fdf-a37d-844dba824147/volumes" Sep 29 10:08:29 crc kubenswrapper[4991]: I0929 10:08:29.926876 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357" Sep 29 10:08:29 crc kubenswrapper[4991]: E0929 10:08:29.927440 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:08:31 crc kubenswrapper[4991]: I0929 10:08:31.029752 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-2a92-account-create-lsvp7"] Sep 29 10:08:31 crc kubenswrapper[4991]: I0929 10:08:31.048226 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-f564-account-create-kkgjq"] Sep 29 10:08:31 crc kubenswrapper[4991]: I0929 10:08:31.060206 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-f564-account-create-kkgjq"] Sep 29 10:08:31 crc kubenswrapper[4991]: I0929 10:08:31.071023 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-2a92-account-create-lsvp7"] Sep 29 10:08:31 crc kubenswrapper[4991]: I0929 10:08:31.305826 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9lps9" Sep 29 10:08:31 crc kubenswrapper[4991]: I0929 10:08:31.305906 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9lps9" Sep 29 10:08:31 crc kubenswrapper[4991]: I0929 10:08:31.368765 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9lps9" Sep 29 10:08:31 crc kubenswrapper[4991]: I0929 10:08:31.550373 4991 generic.go:334] "Generic (PLEG): 
container finished" podID="4f367761-a0c4-4bc8-8d44-86dc07e3d495" containerID="e4ff68fce936f7142f77b669ce7dbfc4e8c8b6ffd8432058857f2dfca27237fa" exitCode=0 Sep 29 10:08:31 crc kubenswrapper[4991]: I0929 10:08:31.550492 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj" event={"ID":"4f367761-a0c4-4bc8-8d44-86dc07e3d495","Type":"ContainerDied","Data":"e4ff68fce936f7142f77b669ce7dbfc4e8c8b6ffd8432058857f2dfca27237fa"} Sep 29 10:08:31 crc kubenswrapper[4991]: I0929 10:08:31.615407 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9lps9" Sep 29 10:08:32 crc kubenswrapper[4991]: I0929 10:08:32.033630 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b99b-account-create-4558r"] Sep 29 10:08:32 crc kubenswrapper[4991]: I0929 10:08:32.044465 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-bf69-account-create-nrt9x"] Sep 29 10:08:32 crc kubenswrapper[4991]: I0929 10:08:32.054157 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-1446-account-create-25hvn"] Sep 29 10:08:32 crc kubenswrapper[4991]: I0929 10:08:32.070542 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-1446-account-create-25hvn"] Sep 29 10:08:32 crc kubenswrapper[4991]: I0929 10:08:32.082143 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b99b-account-create-4558r"] Sep 29 10:08:32 crc kubenswrapper[4991]: I0929 10:08:32.093289 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-bf69-account-create-nrt9x"] Sep 29 10:08:32 crc kubenswrapper[4991]: I0929 10:08:32.941609 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="388a587e-7633-4b2d-b4a4-4a083e06afbf" path="/var/lib/kubelet/pods/388a587e-7633-4b2d-b4a4-4a083e06afbf/volumes" Sep 29 10:08:32 crc kubenswrapper[4991]: I0929 10:08:32.942489 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43435cee-d9a2-4517-94c4-0e49cdd536a2" path="/var/lib/kubelet/pods/43435cee-d9a2-4517-94c4-0e49cdd536a2/volumes" Sep 29 10:08:32 crc kubenswrapper[4991]: I0929 10:08:32.943032 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a782a05-3455-4ac2-973c-34e78a249789" path="/var/lib/kubelet/pods/7a782a05-3455-4ac2-973c-34e78a249789/volumes" Sep 29 10:08:32 crc kubenswrapper[4991]: I0929 10:08:32.943566 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6" path="/var/lib/kubelet/pods/9fb03f7e-0a1a-42ff-88ef-d0d7c3ba61b6/volumes" Sep 29 10:08:32 crc kubenswrapper[4991]: I0929 10:08:32.944565 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc29d8c0-2bb0-40fe-b3f3-98b30719dfed" path="/var/lib/kubelet/pods/fc29d8c0-2bb0-40fe-b3f3-98b30719dfed/volumes" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.094108 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.209489 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f367761-a0c4-4bc8-8d44-86dc07e3d495-ssh-key\") pod \"4f367761-a0c4-4bc8-8d44-86dc07e3d495\" (UID: \"4f367761-a0c4-4bc8-8d44-86dc07e3d495\") " Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.209556 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dkbm\" (UniqueName: \"kubernetes.io/projected/4f367761-a0c4-4bc8-8d44-86dc07e3d495-kube-api-access-5dkbm\") pod \"4f367761-a0c4-4bc8-8d44-86dc07e3d495\" (UID: \"4f367761-a0c4-4bc8-8d44-86dc07e3d495\") " Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.209639 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f367761-a0c4-4bc8-8d44-86dc07e3d495-bootstrap-combined-ca-bundle\") pod \"4f367761-a0c4-4bc8-8d44-86dc07e3d495\" (UID: \"4f367761-a0c4-4bc8-8d44-86dc07e3d495\") " Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.210185 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f367761-a0c4-4bc8-8d44-86dc07e3d495-inventory\") pod \"4f367761-a0c4-4bc8-8d44-86dc07e3d495\" (UID: \"4f367761-a0c4-4bc8-8d44-86dc07e3d495\") " Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.216249 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f367761-a0c4-4bc8-8d44-86dc07e3d495-kube-api-access-5dkbm" (OuterVolumeSpecName: "kube-api-access-5dkbm") pod "4f367761-a0c4-4bc8-8d44-86dc07e3d495" (UID: "4f367761-a0c4-4bc8-8d44-86dc07e3d495"). InnerVolumeSpecName "kube-api-access-5dkbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.216374 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f367761-a0c4-4bc8-8d44-86dc07e3d495-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "4f367761-a0c4-4bc8-8d44-86dc07e3d495" (UID: "4f367761-a0c4-4bc8-8d44-86dc07e3d495"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.248283 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f367761-a0c4-4bc8-8d44-86dc07e3d495-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4f367761-a0c4-4bc8-8d44-86dc07e3d495" (UID: "4f367761-a0c4-4bc8-8d44-86dc07e3d495"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.249838 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f367761-a0c4-4bc8-8d44-86dc07e3d495-inventory" (OuterVolumeSpecName: "inventory") pod "4f367761-a0c4-4bc8-8d44-86dc07e3d495" (UID: "4f367761-a0c4-4bc8-8d44-86dc07e3d495"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.313770 4991 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f367761-a0c4-4bc8-8d44-86dc07e3d495-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.314039 4991 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f367761-a0c4-4bc8-8d44-86dc07e3d495-inventory\") on node \"crc\" DevicePath \"\"" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.314156 4991 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f367761-a0c4-4bc8-8d44-86dc07e3d495-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.314232 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dkbm\" (UniqueName: \"kubernetes.io/projected/4f367761-a0c4-4bc8-8d44-86dc07e3d495-kube-api-access-5dkbm\") on node \"crc\" DevicePath \"\"" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.574568 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj" event={"ID":"4f367761-a0c4-4bc8-8d44-86dc07e3d495","Type":"ContainerDied","Data":"36e435c7fc54dc12b0c9a5f3ea719bac3e8f89a8a647d99a1adf6e6737e5d127"} Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.574915 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36e435c7fc54dc12b0c9a5f3ea719bac3e8f89a8a647d99a1adf6e6737e5d127" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.574611 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.702428 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-75tdc"] Sep 29 10:08:33 crc kubenswrapper[4991]: E0929 10:08:33.703140 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f367761-a0c4-4bc8-8d44-86dc07e3d495" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.703160 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f367761-a0c4-4bc8-8d44-86dc07e3d495" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.703492 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f367761-a0c4-4bc8-8d44-86dc07e3d495" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.704569 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-75tdc" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.713834 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-75tdc"] Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.714525 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.714716 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.714866 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vtkz9" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.715003 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.846301 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q85cc\" (UniqueName: \"kubernetes.io/projected/2658581c-8b2e-435e-8c5a-8506dc7b2134-kube-api-access-q85cc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-75tdc\" (UID: \"2658581c-8b2e-435e-8c5a-8506dc7b2134\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-75tdc" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.846426 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2658581c-8b2e-435e-8c5a-8506dc7b2134-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-75tdc\" (UID: \"2658581c-8b2e-435e-8c5a-8506dc7b2134\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-75tdc" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.846461 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2658581c-8b2e-435e-8c5a-8506dc7b2134-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-75tdc\" (UID: \"2658581c-8b2e-435e-8c5a-8506dc7b2134\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-75tdc" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.949124 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q85cc\" (UniqueName: \"kubernetes.io/projected/2658581c-8b2e-435e-8c5a-8506dc7b2134-kube-api-access-q85cc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-75tdc\" (UID: \"2658581c-8b2e-435e-8c5a-8506dc7b2134\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-75tdc" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.949268 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2658581c-8b2e-435e-8c5a-8506dc7b2134-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-75tdc\" (UID: \"2658581c-8b2e-435e-8c5a-8506dc7b2134\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-75tdc" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.949308 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2658581c-8b2e-435e-8c5a-8506dc7b2134-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-75tdc\" (UID: \"2658581c-8b2e-435e-8c5a-8506dc7b2134\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-75tdc" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.954283 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2658581c-8b2e-435e-8c5a-8506dc7b2134-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-75tdc\" (UID: \"2658581c-8b2e-435e-8c5a-8506dc7b2134\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-75tdc" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.957441 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2658581c-8b2e-435e-8c5a-8506dc7b2134-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-75tdc\" (UID: \"2658581c-8b2e-435e-8c5a-8506dc7b2134\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-75tdc" Sep 29 10:08:33 crc kubenswrapper[4991]: I0929 10:08:33.975863 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q85cc\" (UniqueName: \"kubernetes.io/projected/2658581c-8b2e-435e-8c5a-8506dc7b2134-kube-api-access-q85cc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-75tdc\" (UID: \"2658581c-8b2e-435e-8c5a-8506dc7b2134\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-75tdc" Sep 29 10:08:34 crc kubenswrapper[4991]: I0929 10:08:34.033826 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-75tdc" Sep 29 10:08:34 crc kubenswrapper[4991]: I0929 10:08:34.599287 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-75tdc"] Sep 29 10:08:34 crc kubenswrapper[4991]: I0929 10:08:34.750391 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9lps9"] Sep 29 10:08:34 crc kubenswrapper[4991]: I0929 10:08:34.750694 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9lps9" podUID="c888338e-4968-4ad1-bc13-72e0057c1d39" containerName="registry-server" containerID="cri-o://5f1487018f52914aaedf078e84015e996f0be2417f5fbbd0f84afd231b23f0bc" gracePeriod=2 Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.292906 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9lps9" Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.386858 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c888338e-4968-4ad1-bc13-72e0057c1d39-catalog-content\") pod \"c888338e-4968-4ad1-bc13-72e0057c1d39\" (UID: \"c888338e-4968-4ad1-bc13-72e0057c1d39\") " Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.387023 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c888338e-4968-4ad1-bc13-72e0057c1d39-utilities\") pod \"c888338e-4968-4ad1-bc13-72e0057c1d39\" (UID: \"c888338e-4968-4ad1-bc13-72e0057c1d39\") " Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.387257 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qkcd\" (UniqueName: \"kubernetes.io/projected/c888338e-4968-4ad1-bc13-72e0057c1d39-kube-api-access-8qkcd\") pod \"c888338e-4968-4ad1-bc13-72e0057c1d39\" (UID: \"c888338e-4968-4ad1-bc13-72e0057c1d39\") " Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.388019 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c888338e-4968-4ad1-bc13-72e0057c1d39-utilities" (OuterVolumeSpecName: "utilities") pod "c888338e-4968-4ad1-bc13-72e0057c1d39" (UID: "c888338e-4968-4ad1-bc13-72e0057c1d39"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.391558 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c888338e-4968-4ad1-bc13-72e0057c1d39-kube-api-access-8qkcd" (OuterVolumeSpecName: "kube-api-access-8qkcd") pod "c888338e-4968-4ad1-bc13-72e0057c1d39" (UID: "c888338e-4968-4ad1-bc13-72e0057c1d39"). InnerVolumeSpecName "kube-api-access-8qkcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.411516 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c888338e-4968-4ad1-bc13-72e0057c1d39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c888338e-4968-4ad1-bc13-72e0057c1d39" (UID: "c888338e-4968-4ad1-bc13-72e0057c1d39"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.490379 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c888338e-4968-4ad1-bc13-72e0057c1d39-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.490609 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c888338e-4968-4ad1-bc13-72e0057c1d39-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.490740 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qkcd\" (UniqueName: \"kubernetes.io/projected/c888338e-4968-4ad1-bc13-72e0057c1d39-kube-api-access-8qkcd\") on node \"crc\" DevicePath \"\"" Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.597830 4991 generic.go:334] "Generic (PLEG): container finished" podID="c888338e-4968-4ad1-bc13-72e0057c1d39" containerID="5f1487018f52914aaedf078e84015e996f0be2417f5fbbd0f84afd231b23f0bc" exitCode=0 Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.597886 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9lps9" Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.597903 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lps9" event={"ID":"c888338e-4968-4ad1-bc13-72e0057c1d39","Type":"ContainerDied","Data":"5f1487018f52914aaedf078e84015e996f0be2417f5fbbd0f84afd231b23f0bc"} Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.597957 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9lps9" event={"ID":"c888338e-4968-4ad1-bc13-72e0057c1d39","Type":"ContainerDied","Data":"30a7ca687303e5eaa8ee216fa91240ae2ffb3715d5fc74e4ed25a5a197ae3146"} Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.597976 4991 scope.go:117] "RemoveContainer" containerID="5f1487018f52914aaedf078e84015e996f0be2417f5fbbd0f84afd231b23f0bc" Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.600423 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-75tdc" event={"ID":"2658581c-8b2e-435e-8c5a-8506dc7b2134","Type":"ContainerStarted","Data":"afbef7500eaabd9ba4e9b6bcdc9ca739e4abc1348589df5989120e421eaffafa"} Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.600502 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-75tdc" event={"ID":"2658581c-8b2e-435e-8c5a-8506dc7b2134","Type":"ContainerStarted","Data":"7d5e6a316f4f4f596700e8e658c44e2b61f7c4007e3d790dfce5c9d3b3801fe6"} Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.621861 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-75tdc" podStartSLOduration=2.134261628 podStartE2EDuration="2.621837503s" podCreationTimestamp="2025-09-29 10:08:33 +0000 UTC" firstStartedPulling="2025-09-29 10:08:34.605315017 +0000 UTC m=+1850.461243035" lastFinishedPulling="2025-09-29 10:08:35.092890882 +0000 UTC m=+1850.948818910" observedRunningTime="2025-09-29 10:08:35.615195479 +0000 UTC m=+1851.471123517" watchObservedRunningTime="2025-09-29 10:08:35.621837503 +0000 UTC m=+1851.477765531" Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.630238 4991 
scope.go:117] "RemoveContainer" containerID="7311cd1eb7db63ab7f6dfc3bda83a7ebfc9243b284a6123b77deed326d79bc62" Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.645279 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9lps9"] Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.656355 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9lps9"] Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.664933 4991 scope.go:117] "RemoveContainer" containerID="c03699775dcccaf49a85465ef773bb85577612212716f1359a05e363f14ec743" Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.724200 4991 scope.go:117] "RemoveContainer" containerID="5f1487018f52914aaedf078e84015e996f0be2417f5fbbd0f84afd231b23f0bc" Sep 29 10:08:35 crc kubenswrapper[4991]: E0929 10:08:35.725220 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f1487018f52914aaedf078e84015e996f0be2417f5fbbd0f84afd231b23f0bc\": container with ID starting with 5f1487018f52914aaedf078e84015e996f0be2417f5fbbd0f84afd231b23f0bc not found: ID does not exist" containerID="5f1487018f52914aaedf078e84015e996f0be2417f5fbbd0f84afd231b23f0bc" Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.725265 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f1487018f52914aaedf078e84015e996f0be2417f5fbbd0f84afd231b23f0bc"} err="failed to get container status \"5f1487018f52914aaedf078e84015e996f0be2417f5fbbd0f84afd231b23f0bc\": rpc error: code = NotFound desc = could not find container \"5f1487018f52914aaedf078e84015e996f0be2417f5fbbd0f84afd231b23f0bc\": container with ID starting with 5f1487018f52914aaedf078e84015e996f0be2417f5fbbd0f84afd231b23f0bc not found: ID does not exist" Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.725299 4991 scope.go:117] "RemoveContainer" containerID="7311cd1eb7db63ab7f6dfc3bda83a7ebfc9243b284a6123b77deed326d79bc62" Sep 29 10:08:35 crc kubenswrapper[4991]: E0929 10:08:35.725656 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7311cd1eb7db63ab7f6dfc3bda83a7ebfc9243b284a6123b77deed326d79bc62\": container with ID starting with 7311cd1eb7db63ab7f6dfc3bda83a7ebfc9243b284a6123b77deed326d79bc62 not found: ID does not exist" containerID="7311cd1eb7db63ab7f6dfc3bda83a7ebfc9243b284a6123b77deed326d79bc62" Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.725682 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7311cd1eb7db63ab7f6dfc3bda83a7ebfc9243b284a6123b77deed326d79bc62"} err="failed to get container status \"7311cd1eb7db63ab7f6dfc3bda83a7ebfc9243b284a6123b77deed326d79bc62\": rpc error: code = NotFound desc = could not find container \"7311cd1eb7db63ab7f6dfc3bda83a7ebfc9243b284a6123b77deed326d79bc62\": container with ID starting with 7311cd1eb7db63ab7f6dfc3bda83a7ebfc9243b284a6123b77deed326d79bc62 not found: ID does not exist" Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.725698 4991 scope.go:117] "RemoveContainer" containerID="c03699775dcccaf49a85465ef773bb85577612212716f1359a05e363f14ec743" Sep 29 10:08:35 crc kubenswrapper[4991]: E0929 10:08:35.726206 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c03699775dcccaf49a85465ef773bb85577612212716f1359a05e363f14ec743\": 
container with ID starting with c03699775dcccaf49a85465ef773bb85577612212716f1359a05e363f14ec743 not found: ID does not exist" containerID="c03699775dcccaf49a85465ef773bb85577612212716f1359a05e363f14ec743" Sep 29 10:08:35 crc kubenswrapper[4991]: I0929 10:08:35.726233 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c03699775dcccaf49a85465ef773bb85577612212716f1359a05e363f14ec743"} err="failed to get container status \"c03699775dcccaf49a85465ef773bb85577612212716f1359a05e363f14ec743\": rpc error: code = NotFound desc = could not find container \"c03699775dcccaf49a85465ef773bb85577612212716f1359a05e363f14ec743\": container with ID starting with c03699775dcccaf49a85465ef773bb85577612212716f1359a05e363f14ec743 not found: ID does not exist" Sep 29 10:08:36 crc kubenswrapper[4991]: I0929 10:08:36.943724 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c888338e-4968-4ad1-bc13-72e0057c1d39" path="/var/lib/kubelet/pods/c888338e-4968-4ad1-bc13-72e0057c1d39/volumes" Sep 29 10:08:43 crc kubenswrapper[4991]: I0929 10:08:43.052409 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-9kck9"] Sep 29 10:08:43 crc kubenswrapper[4991]: I0929 10:08:43.066657 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-9kck9"] Sep 29 10:08:44 crc kubenswrapper[4991]: I0929 10:08:44.941996 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357" Sep 29 10:08:44 crc kubenswrapper[4991]: E0929 10:08:44.942604 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:08:44 crc kubenswrapper[4991]: I0929 10:08:44.959272 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2031f120-9626-495a-b555-e4e960d2e4b1" path="/var/lib/kubelet/pods/2031f120-9626-495a-b555-e4e960d2e4b1/volumes" Sep 29 10:08:53 crc kubenswrapper[4991]: I0929 10:08:53.033889 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8559s"] Sep 29 10:08:53 crc kubenswrapper[4991]: I0929 10:08:53.046921 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8559s"] Sep 29 10:08:54 crc kubenswrapper[4991]: I0929 10:08:54.975669 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19dc38ec-0933-47ed-8c1e-613d1d55d3d5" path="/var/lib/kubelet/pods/19dc38ec-0933-47ed-8c1e-613d1d55d3d5/volumes" Sep 29 10:08:56 crc kubenswrapper[4991]: I0929 10:08:56.927224 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357" Sep 29 10:08:56 crc kubenswrapper[4991]: E0929 10:08:56.927894 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:09:05 crc 
Sep 29 10:09:05 crc kubenswrapper[4991]: I0929 10:09:05.047778 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-6tq6l"]
Sep 29 10:09:05 crc kubenswrapper[4991]: I0929 10:09:05.060846 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-6tq6l"]
Sep 29 10:09:06 crc kubenswrapper[4991]: I0929 10:09:06.942554 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec575cb9-949d-4880-a804-dfc7cb8c7eb9" path="/var/lib/kubelet/pods/ec575cb9-949d-4880-a804-dfc7cb8c7eb9/volumes"
Sep 29 10:09:10 crc kubenswrapper[4991]: I0929 10:09:10.926819 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357"
Sep 29 10:09:10 crc kubenswrapper[4991]: E0929 10:09:10.927702 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:09:17 crc kubenswrapper[4991]: I0929 10:09:17.033586 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jx6z5"]
Sep 29 10:09:17 crc kubenswrapper[4991]: I0929 10:09:17.047906 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jx6z5"]
Sep 29 10:09:18 crc kubenswrapper[4991]: I0929 10:09:18.947365 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4880dbaa-8b65-4f99-929f-e9613339d1d9" path="/var/lib/kubelet/pods/4880dbaa-8b65-4f99-929f-e9613339d1d9/volumes"
Sep 29 10:09:20 crc kubenswrapper[4991]: I0929 10:09:20.869183 4991 scope.go:117] "RemoveContainer" containerID="b05a1d18b9a32761c08f0e72ca473e560245447c5706e0a4a0aeba5832bcdde2"
Sep 29 10:09:20 crc kubenswrapper[4991]: I0929 10:09:20.912505 4991 scope.go:117] "RemoveContainer" containerID="19c4a6214d409564d4f5a1352ae9712b631fca460bd8274e4217f7075ad8fdd7"
Sep 29 10:09:20 crc kubenswrapper[4991]: I0929 10:09:20.957226 4991 scope.go:117] "RemoveContainer" containerID="0e13299e6c9eb2dfe9fdce3abff0f771ca5df59f5e805467f9c8890c75118e0b"
Sep 29 10:09:21 crc kubenswrapper[4991]: I0929 10:09:21.023654 4991 scope.go:117] "RemoveContainer" containerID="006f5528d4ac13819ae67e1f09c4da44606e8a588ce040609b94677f72cf4b9e"
Sep 29 10:09:21 crc kubenswrapper[4991]: I0929 10:09:21.092303 4991 scope.go:117] "RemoveContainer" containerID="9bc063df50551ab284545468df59b2cbcac7f5606775bc48de48abbde514d8d1"
Sep 29 10:09:21 crc kubenswrapper[4991]: I0929 10:09:21.141638 4991 scope.go:117] "RemoveContainer" containerID="f8a3906b9574253615c2a6b3187f97f066249e24a9588c321efe33824851aaab"
Sep 29 10:09:21 crc kubenswrapper[4991]: I0929 10:09:21.198021 4991 scope.go:117] "RemoveContainer" containerID="9287037ed363a7e0eeefa3e352e6a48227d02dc900d8b365288f181207296038"
Sep 29 10:09:21 crc kubenswrapper[4991]: I0929 10:09:21.227211 4991 scope.go:117] "RemoveContainer" containerID="ae1305ea9fd9cfcca2b56a1ff4328353f52820080503f6f36201a13318f932ee"
Sep 29 10:09:21 crc kubenswrapper[4991]: I0929 10:09:21.254059 4991 scope.go:117] "RemoveContainer" containerID="e0f49985eaf93321f99aaaf779855be0dda5e2ca134d967bdc85d53051fcfa1a"
Sep 29 10:09:21 crc kubenswrapper[4991]: I0929 10:09:21.277669 4991 scope.go:117] "RemoveContainer" containerID="9854062f100819136737bca7e8e6c6df9e5789c59fa5cef1d55d0eb9d4ee6a4c"
Sep 29 10:09:24 crc kubenswrapper[4991]: I0929 10:09:24.936689 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357"
Sep 29 10:09:24 crc kubenswrapper[4991]: E0929 10:09:24.938920 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:09:25 crc kubenswrapper[4991]: I0929 10:09:25.030258 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-6ddmh"]
Sep 29 10:09:25 crc kubenswrapper[4991]: I0929 10:09:25.041309 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-6ddmh"]
Sep 29 10:09:26 crc kubenswrapper[4991]: I0929 10:09:26.940033 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ed2429f-e06f-4c9f-9c92-84203d8073c1" path="/var/lib/kubelet/pods/2ed2429f-e06f-4c9f-9c92-84203d8073c1/volumes"
Sep 29 10:09:37 crc kubenswrapper[4991]: I0929 10:09:37.046350 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-d5lhk"]
Sep 29 10:09:37 crc kubenswrapper[4991]: I0929 10:09:37.057909 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-d5lhk"]
Sep 29 10:09:38 crc kubenswrapper[4991]: I0929 10:09:38.926231 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357"
Sep 29 10:09:38 crc kubenswrapper[4991]: E0929 10:09:38.926748 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:09:38 crc kubenswrapper[4991]: I0929 10:09:38.939715 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5146a80a-6cba-46a8-85b8-1fb0dd8304cd" path="/var/lib/kubelet/pods/5146a80a-6cba-46a8-85b8-1fb0dd8304cd/volumes"
Sep 29 10:09:53 crc kubenswrapper[4991]: I0929 10:09:53.927022 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357"
Sep 29 10:09:53 crc kubenswrapper[4991]: E0929 10:09:53.927924 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:10:08 crc kubenswrapper[4991]: I0929 10:10:08.926904 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357"
Sep 29 10:10:08 crc kubenswrapper[4991]: E0929 10:10:08.927612 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:10:21 crc kubenswrapper[4991]: I0929 10:10:21.524784 4991 scope.go:117] "RemoveContainer" containerID="d07ae8509a425d6dce6b0fc826409f04218e45185fa879870b56ea152e6e382d"
Sep 29 10:10:21 crc kubenswrapper[4991]: I0929 10:10:21.609088 4991 scope.go:117] "RemoveContainer" containerID="2be37942a036c86434a9a8a44b5e03bf789b3b99a06252d7d0bb3b6bfacf42f2"
Sep 29 10:10:22 crc kubenswrapper[4991]: I0929 10:10:22.927892 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357"
Sep 29 10:10:22 crc kubenswrapper[4991]: E0929 10:10:22.928899 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:10:34 crc kubenswrapper[4991]: I0929 10:10:34.055778 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-ctg5h"]
Sep 29 10:10:34 crc kubenswrapper[4991]: I0929 10:10:34.068426 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-wf6zr"]
Sep 29 10:10:34 crc kubenswrapper[4991]: I0929 10:10:34.084001 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-7vj6k"]
Sep 29 10:10:34 crc kubenswrapper[4991]: I0929 10:10:34.095330 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-ctg5h"]
Sep 29 10:10:34 crc kubenswrapper[4991]: I0929 10:10:34.109026 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-7vj6k"]
Sep 29 10:10:34 crc kubenswrapper[4991]: I0929 10:10:34.126488 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-wf6zr"]
Sep 29 10:10:34 crc kubenswrapper[4991]: I0929 10:10:34.941468 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12e442c8-9bfb-4371-a72e-d5955aa090ff" path="/var/lib/kubelet/pods/12e442c8-9bfb-4371-a72e-d5955aa090ff/volumes"
Sep 29 10:10:34 crc kubenswrapper[4991]: I0929 10:10:34.942533 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90361b49-721c-4048-982c-8ed28d5e12e4" path="/var/lib/kubelet/pods/90361b49-721c-4048-982c-8ed28d5e12e4/volumes"
Sep 29 10:10:34 crc kubenswrapper[4991]: I0929 10:10:34.943168 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acc26898-60ac-4c45-9225-01223289f940" path="/var/lib/kubelet/pods/acc26898-60ac-4c45-9225-01223289f940/volumes"
Sep 29 10:10:36 crc kubenswrapper[4991]: I0929 10:10:36.926516 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357"
Sep 29 10:10:36 crc kubenswrapper[4991]: E0929 10:10:36.926944 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:10:43 crc kubenswrapper[4991]: I0929 10:10:43.052389 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a89b-account-create-8qtf5"]
Sep 29 10:10:43 crc kubenswrapper[4991]: I0929 10:10:43.063007 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a89b-account-create-8qtf5"]
Sep 29 10:10:44 crc kubenswrapper[4991]: I0929 10:10:44.053524 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-6d58-account-create-l5cjc"]
Sep 29 10:10:44 crc kubenswrapper[4991]: I0929 10:10:44.065936 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-6d58-account-create-l5cjc"]
Sep 29 10:10:44 crc kubenswrapper[4991]: I0929 10:10:44.942101 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c526f97-b986-4101-be9d-1bc70bbc93f5" path="/var/lib/kubelet/pods/5c526f97-b986-4101-be9d-1bc70bbc93f5/volumes"
Sep 29 10:10:44 crc kubenswrapper[4991]: I0929 10:10:44.942995 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e" path="/var/lib/kubelet/pods/b0a2fea9-1578-4a8a-b9b8-253d3bad7b8e/volumes"
Sep 29 10:10:47 crc kubenswrapper[4991]: I0929 10:10:47.020503 4991 generic.go:334] "Generic (PLEG): container finished" podID="2658581c-8b2e-435e-8c5a-8506dc7b2134" containerID="afbef7500eaabd9ba4e9b6bcdc9ca739e4abc1348589df5989120e421eaffafa" exitCode=0
Sep 29 10:10:47 crc kubenswrapper[4991]: I0929 10:10:47.020543 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-75tdc" event={"ID":"2658581c-8b2e-435e-8c5a-8506dc7b2134","Type":"ContainerDied","Data":"afbef7500eaabd9ba4e9b6bcdc9ca739e4abc1348589df5989120e421eaffafa"}
Sep 29 10:10:48 crc kubenswrapper[4991]: I0929 10:10:48.584979 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-75tdc"
Sep 29 10:10:48 crc kubenswrapper[4991]: I0929 10:10:48.683085 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2658581c-8b2e-435e-8c5a-8506dc7b2134-inventory\") pod \"2658581c-8b2e-435e-8c5a-8506dc7b2134\" (UID: \"2658581c-8b2e-435e-8c5a-8506dc7b2134\") "
Sep 29 10:10:48 crc kubenswrapper[4991]: I0929 10:10:48.683277 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2658581c-8b2e-435e-8c5a-8506dc7b2134-ssh-key\") pod \"2658581c-8b2e-435e-8c5a-8506dc7b2134\" (UID: \"2658581c-8b2e-435e-8c5a-8506dc7b2134\") "
Sep 29 10:10:48 crc kubenswrapper[4991]: I0929 10:10:48.683487 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q85cc\" (UniqueName: \"kubernetes.io/projected/2658581c-8b2e-435e-8c5a-8506dc7b2134-kube-api-access-q85cc\") pod \"2658581c-8b2e-435e-8c5a-8506dc7b2134\" (UID: \"2658581c-8b2e-435e-8c5a-8506dc7b2134\") "
Sep 29 10:10:48 crc kubenswrapper[4991]: I0929 10:10:48.688474 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2658581c-8b2e-435e-8c5a-8506dc7b2134-kube-api-access-q85cc" (OuterVolumeSpecName: "kube-api-access-q85cc") pod "2658581c-8b2e-435e-8c5a-8506dc7b2134" (UID: "2658581c-8b2e-435e-8c5a-8506dc7b2134"). InnerVolumeSpecName "kube-api-access-q85cc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 10:10:48 crc kubenswrapper[4991]: I0929 10:10:48.715847 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2658581c-8b2e-435e-8c5a-8506dc7b2134-inventory" (OuterVolumeSpecName: "inventory") pod "2658581c-8b2e-435e-8c5a-8506dc7b2134" (UID: "2658581c-8b2e-435e-8c5a-8506dc7b2134"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:10:48 crc kubenswrapper[4991]: I0929 10:10:48.786479 4991 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2658581c-8b2e-435e-8c5a-8506dc7b2134-inventory\") on node \"crc\" DevicePath \"\"" Sep 29 10:10:48 crc kubenswrapper[4991]: I0929 10:10:48.786530 4991 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2658581c-8b2e-435e-8c5a-8506dc7b2134-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 29 10:10:48 crc kubenswrapper[4991]: I0929 10:10:48.786543 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q85cc\" (UniqueName: \"kubernetes.io/projected/2658581c-8b2e-435e-8c5a-8506dc7b2134-kube-api-access-q85cc\") on node \"crc\" DevicePath \"\"" Sep 29 10:10:48 crc kubenswrapper[4991]: I0929 10:10:48.926200 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.054034 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-75tdc" event={"ID":"2658581c-8b2e-435e-8c5a-8506dc7b2134","Type":"ContainerDied","Data":"7d5e6a316f4f4f596700e8e658c44e2b61f7c4007e3d790dfce5c9d3b3801fe6"} Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.054081 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d5e6a316f4f4f596700e8e658c44e2b61f7c4007e3d790dfce5c9d3b3801fe6" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.054128 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-75tdc" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.130286 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj"] Sep 29 10:10:49 crc kubenswrapper[4991]: E0929 10:10:49.131105 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c888338e-4968-4ad1-bc13-72e0057c1d39" containerName="extract-content" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.131120 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c888338e-4968-4ad1-bc13-72e0057c1d39" containerName="extract-content" Sep 29 10:10:49 crc kubenswrapper[4991]: E0929 10:10:49.131134 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c888338e-4968-4ad1-bc13-72e0057c1d39" containerName="extract-utilities" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.131142 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c888338e-4968-4ad1-bc13-72e0057c1d39" containerName="extract-utilities" Sep 29 10:10:49 crc kubenswrapper[4991]: E0929 10:10:49.131159 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2658581c-8b2e-435e-8c5a-8506dc7b2134" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.131169 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2658581c-8b2e-435e-8c5a-8506dc7b2134" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Sep 29 10:10:49 crc kubenswrapper[4991]: E0929 10:10:49.131205 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c888338e-4968-4ad1-bc13-72e0057c1d39" containerName="registry-server" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.131213 4991 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="c888338e-4968-4ad1-bc13-72e0057c1d39" containerName="registry-server" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.131441 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="c888338e-4968-4ad1-bc13-72e0057c1d39" containerName="registry-server" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.131464 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="2658581c-8b2e-435e-8c5a-8506dc7b2134" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.132234 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.134756 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vtkz9" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.135006 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.137529 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.138514 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.150303 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj"] Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.299178 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c180725-3745-4c34-b5a2-2aa954097b80-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj\" (UID: \"0c180725-3745-4c34-b5a2-2aa954097b80\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.300089 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltv96\" (UniqueName: \"kubernetes.io/projected/0c180725-3745-4c34-b5a2-2aa954097b80-kube-api-access-ltv96\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj\" (UID: \"0c180725-3745-4c34-b5a2-2aa954097b80\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.300226 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0c180725-3745-4c34-b5a2-2aa954097b80-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj\" (UID: \"0c180725-3745-4c34-b5a2-2aa954097b80\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.402351 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c180725-3745-4c34-b5a2-2aa954097b80-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj\" (UID: \"0c180725-3745-4c34-b5a2-2aa954097b80\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 
10:10:49.402607 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltv96\" (UniqueName: \"kubernetes.io/projected/0c180725-3745-4c34-b5a2-2aa954097b80-kube-api-access-ltv96\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj\" (UID: \"0c180725-3745-4c34-b5a2-2aa954097b80\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.402673 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0c180725-3745-4c34-b5a2-2aa954097b80-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj\" (UID: \"0c180725-3745-4c34-b5a2-2aa954097b80\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.407809 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c180725-3745-4c34-b5a2-2aa954097b80-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj\" (UID: \"0c180725-3745-4c34-b5a2-2aa954097b80\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.408104 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0c180725-3745-4c34-b5a2-2aa954097b80-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj\" (UID: \"0c180725-3745-4c34-b5a2-2aa954097b80\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.426115 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltv96\" (UniqueName: \"kubernetes.io/projected/0c180725-3745-4c34-b5a2-2aa954097b80-kube-api-access-ltv96\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj\" (UID: \"0c180725-3745-4c34-b5a2-2aa954097b80\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj" Sep 29 10:10:49 crc kubenswrapper[4991]: I0929 10:10:49.455941 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj" Sep 29 10:10:50 crc kubenswrapper[4991]: I0929 10:10:50.031134 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj"] Sep 29 10:10:50 crc kubenswrapper[4991]: I0929 10:10:50.067652 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj" event={"ID":"0c180725-3745-4c34-b5a2-2aa954097b80","Type":"ContainerStarted","Data":"7cd79959aca13d2efde5132aa6e7132d652f33dfdca36d61bb84a6313381b02a"} Sep 29 10:10:50 crc kubenswrapper[4991]: I0929 10:10:50.073990 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"db925c9374949534c77912b1b66d2f279911692e1df038e17b4b4e917154a930"} Sep 29 10:10:51 crc kubenswrapper[4991]: I0929 10:10:51.088232 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj" event={"ID":"0c180725-3745-4c34-b5a2-2aa954097b80","Type":"ContainerStarted","Data":"798e26c2c9ddcf2034e01dea21b9e094a05b7dc7392df668e3658ff4a533162d"} Sep 29 10:10:51 crc kubenswrapper[4991]: I0929 10:10:51.114426 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj" podStartSLOduration=1.630208468 podStartE2EDuration="2.114393662s" podCreationTimestamp="2025-09-29 10:10:49 +0000 UTC" firstStartedPulling="2025-09-29 10:10:50.038084261 +0000 UTC m=+1985.894012289" lastFinishedPulling="2025-09-29 10:10:50.522269455 +0000 UTC m=+1986.378197483" observedRunningTime="2025-09-29 10:10:51.108559359 +0000 UTC m=+1986.964487387" watchObservedRunningTime="2025-09-29 10:10:51.114393662 +0000 UTC m=+1986.970321690" Sep 29 10:10:51 crc kubenswrapper[4991]: I0929 10:10:51.856522 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j44g2"] Sep 29 10:10:51 crc kubenswrapper[4991]: I0929 10:10:51.860001 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j44g2" Sep 29 10:10:51 crc kubenswrapper[4991]: I0929 10:10:51.870085 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnq94\" (UniqueName: \"kubernetes.io/projected/5bed7cdc-6fad-4dce-a011-d33c1e767e6a-kube-api-access-hnq94\") pod \"redhat-operators-j44g2\" (UID: \"5bed7cdc-6fad-4dce-a011-d33c1e767e6a\") " pod="openshift-marketplace/redhat-operators-j44g2" Sep 29 10:10:51 crc kubenswrapper[4991]: I0929 10:10:51.870149 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bed7cdc-6fad-4dce-a011-d33c1e767e6a-utilities\") pod \"redhat-operators-j44g2\" (UID: \"5bed7cdc-6fad-4dce-a011-d33c1e767e6a\") " pod="openshift-marketplace/redhat-operators-j44g2" Sep 29 10:10:51 crc kubenswrapper[4991]: I0929 10:10:51.870181 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bed7cdc-6fad-4dce-a011-d33c1e767e6a-catalog-content\") pod \"redhat-operators-j44g2\" (UID: \"5bed7cdc-6fad-4dce-a011-d33c1e767e6a\") " pod="openshift-marketplace/redhat-operators-j44g2" Sep 29 10:10:51 crc kubenswrapper[4991]: I0929 10:10:51.876487 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j44g2"] Sep 29 10:10:51 crc kubenswrapper[4991]: I0929 10:10:51.972052 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnq94\" (UniqueName: \"kubernetes.io/projected/5bed7cdc-6fad-4dce-a011-d33c1e767e6a-kube-api-access-hnq94\") pod \"redhat-operators-j44g2\" (UID: \"5bed7cdc-6fad-4dce-a011-d33c1e767e6a\") " pod="openshift-marketplace/redhat-operators-j44g2" Sep 29 10:10:51 crc kubenswrapper[4991]: I0929 10:10:51.972129 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bed7cdc-6fad-4dce-a011-d33c1e767e6a-utilities\") pod \"redhat-operators-j44g2\" (UID: \"5bed7cdc-6fad-4dce-a011-d33c1e767e6a\") " pod="openshift-marketplace/redhat-operators-j44g2" Sep 29 10:10:51 crc kubenswrapper[4991]: I0929 10:10:51.972168 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bed7cdc-6fad-4dce-a011-d33c1e767e6a-catalog-content\") pod \"redhat-operators-j44g2\" (UID: \"5bed7cdc-6fad-4dce-a011-d33c1e767e6a\") " pod="openshift-marketplace/redhat-operators-j44g2" Sep 29 10:10:51 crc kubenswrapper[4991]: I0929 10:10:51.972982 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bed7cdc-6fad-4dce-a011-d33c1e767e6a-catalog-content\") pod \"redhat-operators-j44g2\" (UID: \"5bed7cdc-6fad-4dce-a011-d33c1e767e6a\") " pod="openshift-marketplace/redhat-operators-j44g2" Sep 29 10:10:51 crc kubenswrapper[4991]: I0929 10:10:51.973060 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bed7cdc-6fad-4dce-a011-d33c1e767e6a-utilities\") pod \"redhat-operators-j44g2\" (UID: \"5bed7cdc-6fad-4dce-a011-d33c1e767e6a\") " pod="openshift-marketplace/redhat-operators-j44g2" Sep 29 10:10:51 crc kubenswrapper[4991]: I0929 10:10:51.993264 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hnq94\" (UniqueName: \"kubernetes.io/projected/5bed7cdc-6fad-4dce-a011-d33c1e767e6a-kube-api-access-hnq94\") pod \"redhat-operators-j44g2\" (UID: \"5bed7cdc-6fad-4dce-a011-d33c1e767e6a\") " pod="openshift-marketplace/redhat-operators-j44g2" Sep 29 10:10:52 crc kubenswrapper[4991]: I0929 10:10:52.183396 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j44g2" Sep 29 10:10:52 crc kubenswrapper[4991]: I0929 10:10:52.695986 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j44g2"] Sep 29 10:10:53 crc kubenswrapper[4991]: I0929 10:10:53.109107 4991 generic.go:334] "Generic (PLEG): container finished" podID="5bed7cdc-6fad-4dce-a011-d33c1e767e6a" containerID="1376c2e27ed54ea8603d9f396af0f6a6edbcaa3e9566e1d1824df5b4f24883c3" exitCode=0 Sep 29 10:10:53 crc kubenswrapper[4991]: I0929 10:10:53.109202 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j44g2" event={"ID":"5bed7cdc-6fad-4dce-a011-d33c1e767e6a","Type":"ContainerDied","Data":"1376c2e27ed54ea8603d9f396af0f6a6edbcaa3e9566e1d1824df5b4f24883c3"} Sep 29 10:10:53 crc kubenswrapper[4991]: I0929 10:10:53.109446 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j44g2" event={"ID":"5bed7cdc-6fad-4dce-a011-d33c1e767e6a","Type":"ContainerStarted","Data":"c04bce0374cb29ec862c40145006e556c27de6dc65db15b5452558eb4a0338e4"} Sep 29 10:10:54 crc kubenswrapper[4991]: I0929 10:10:54.046149 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-544e-account-create-jnp4m"] Sep 29 10:10:54 crc kubenswrapper[4991]: I0929 10:10:54.056186 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-544e-account-create-jnp4m"] Sep 29 10:10:54 crc kubenswrapper[4991]: I0929 10:10:54.940916 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dd62d55-78b1-4af7-867c-96f7e480c3ac" path="/var/lib/kubelet/pods/2dd62d55-78b1-4af7-867c-96f7e480c3ac/volumes" Sep 29 10:10:55 crc kubenswrapper[4991]: I0929 10:10:55.134853 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j44g2" event={"ID":"5bed7cdc-6fad-4dce-a011-d33c1e767e6a","Type":"ContainerStarted","Data":"393af9ddf95b5b4ecd6b46806bc7c7621f0b4bf829dbca252da3c4f4f84b0164"} Sep 29 10:10:59 crc kubenswrapper[4991]: I0929 10:10:59.192470 4991 generic.go:334] "Generic (PLEG): container finished" podID="5bed7cdc-6fad-4dce-a011-d33c1e767e6a" containerID="393af9ddf95b5b4ecd6b46806bc7c7621f0b4bf829dbca252da3c4f4f84b0164" exitCode=0 Sep 29 10:10:59 crc kubenswrapper[4991]: I0929 10:10:59.192554 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j44g2" event={"ID":"5bed7cdc-6fad-4dce-a011-d33c1e767e6a","Type":"ContainerDied","Data":"393af9ddf95b5b4ecd6b46806bc7c7621f0b4bf829dbca252da3c4f4f84b0164"} Sep 29 10:11:01 crc kubenswrapper[4991]: I0929 10:11:01.218229 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j44g2" event={"ID":"5bed7cdc-6fad-4dce-a011-d33c1e767e6a","Type":"ContainerStarted","Data":"208794321b471e05c281ca4924c43af3acba8a4588fced6489198cd2f32b13b1"} Sep 29 10:11:01 crc kubenswrapper[4991]: I0929 10:11:01.246791 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j44g2" podStartSLOduration=3.33895325 
podStartE2EDuration="10.246744092s" podCreationTimestamp="2025-09-29 10:10:51 +0000 UTC" firstStartedPulling="2025-09-29 10:10:53.11058271 +0000 UTC m=+1988.966510728" lastFinishedPulling="2025-09-29 10:11:00.018373542 +0000 UTC m=+1995.874301570" observedRunningTime="2025-09-29 10:11:01.245463949 +0000 UTC m=+1997.101391987" watchObservedRunningTime="2025-09-29 10:11:01.246744092 +0000 UTC m=+1997.102672120" Sep 29 10:11:02 crc kubenswrapper[4991]: I0929 10:11:02.183758 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j44g2" Sep 29 10:11:02 crc kubenswrapper[4991]: I0929 10:11:02.184028 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j44g2" Sep 29 10:11:03 crc kubenswrapper[4991]: I0929 10:11:03.227997 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j44g2" podUID="5bed7cdc-6fad-4dce-a011-d33c1e767e6a" containerName="registry-server" probeResult="failure" output=< Sep 29 10:11:03 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s Sep 29 10:11:03 crc kubenswrapper[4991]: > Sep 29 10:11:10 crc kubenswrapper[4991]: I0929 10:11:10.050489 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-xzlwh"] Sep 29 10:11:10 crc kubenswrapper[4991]: I0929 10:11:10.061352 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-xzlwh"] Sep 29 10:11:10 crc kubenswrapper[4991]: I0929 10:11:10.939272 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="107def10-c759-4d94-9f77-1c97aa9005ba" path="/var/lib/kubelet/pods/107def10-c759-4d94-9f77-1c97aa9005ba/volumes" Sep 29 10:11:12 crc kubenswrapper[4991]: I0929 10:11:12.232027 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j44g2" Sep 29 10:11:12 crc kubenswrapper[4991]: I0929 10:11:12.280485 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j44g2" Sep 29 10:11:12 crc kubenswrapper[4991]: I0929 10:11:12.479339 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j44g2"] Sep 29 10:11:13 crc kubenswrapper[4991]: I0929 10:11:13.376183 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j44g2" podUID="5bed7cdc-6fad-4dce-a011-d33c1e767e6a" containerName="registry-server" containerID="cri-o://208794321b471e05c281ca4924c43af3acba8a4588fced6489198cd2f32b13b1" gracePeriod=2 Sep 29 10:11:13 crc kubenswrapper[4991]: I0929 10:11:13.902142 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j44g2" Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.028371 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bed7cdc-6fad-4dce-a011-d33c1e767e6a-utilities\") pod \"5bed7cdc-6fad-4dce-a011-d33c1e767e6a\" (UID: \"5bed7cdc-6fad-4dce-a011-d33c1e767e6a\") " Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.028475 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnq94\" (UniqueName: \"kubernetes.io/projected/5bed7cdc-6fad-4dce-a011-d33c1e767e6a-kube-api-access-hnq94\") pod \"5bed7cdc-6fad-4dce-a011-d33c1e767e6a\" (UID: \"5bed7cdc-6fad-4dce-a011-d33c1e767e6a\") " Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.028698 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bed7cdc-6fad-4dce-a011-d33c1e767e6a-catalog-content\") pod \"5bed7cdc-6fad-4dce-a011-d33c1e767e6a\" (UID: \"5bed7cdc-6fad-4dce-a011-d33c1e767e6a\") " Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.029421 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bed7cdc-6fad-4dce-a011-d33c1e767e6a-utilities" (OuterVolumeSpecName: "utilities") pod "5bed7cdc-6fad-4dce-a011-d33c1e767e6a" (UID: "5bed7cdc-6fad-4dce-a011-d33c1e767e6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.031113 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bed7cdc-6fad-4dce-a011-d33c1e767e6a-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.036632 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bed7cdc-6fad-4dce-a011-d33c1e767e6a-kube-api-access-hnq94" (OuterVolumeSpecName: "kube-api-access-hnq94") pod "5bed7cdc-6fad-4dce-a011-d33c1e767e6a" (UID: "5bed7cdc-6fad-4dce-a011-d33c1e767e6a"). InnerVolumeSpecName "kube-api-access-hnq94". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.127716 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bed7cdc-6fad-4dce-a011-d33c1e767e6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bed7cdc-6fad-4dce-a011-d33c1e767e6a" (UID: "5bed7cdc-6fad-4dce-a011-d33c1e767e6a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.133652 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnq94\" (UniqueName: \"kubernetes.io/projected/5bed7cdc-6fad-4dce-a011-d33c1e767e6a-kube-api-access-hnq94\") on node \"crc\" DevicePath \"\"" Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.133693 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bed7cdc-6fad-4dce-a011-d33c1e767e6a-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.389894 4991 generic.go:334] "Generic (PLEG): container finished" podID="5bed7cdc-6fad-4dce-a011-d33c1e767e6a" containerID="208794321b471e05c281ca4924c43af3acba8a4588fced6489198cd2f32b13b1" exitCode=0 Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.389985 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j44g2" Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.390002 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j44g2" event={"ID":"5bed7cdc-6fad-4dce-a011-d33c1e767e6a","Type":"ContainerDied","Data":"208794321b471e05c281ca4924c43af3acba8a4588fced6489198cd2f32b13b1"} Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.390477 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j44g2" event={"ID":"5bed7cdc-6fad-4dce-a011-d33c1e767e6a","Type":"ContainerDied","Data":"c04bce0374cb29ec862c40145006e556c27de6dc65db15b5452558eb4a0338e4"} Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.390501 4991 scope.go:117] "RemoveContainer" containerID="208794321b471e05c281ca4924c43af3acba8a4588fced6489198cd2f32b13b1" Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.431944 4991 scope.go:117] "RemoveContainer" containerID="393af9ddf95b5b4ecd6b46806bc7c7621f0b4bf829dbca252da3c4f4f84b0164" Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.433697 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j44g2"] Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.444918 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j44g2"] Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.457544 4991 scope.go:117] "RemoveContainer" containerID="1376c2e27ed54ea8603d9f396af0f6a6edbcaa3e9566e1d1824df5b4f24883c3" Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.512240 4991 scope.go:117] "RemoveContainer" containerID="208794321b471e05c281ca4924c43af3acba8a4588fced6489198cd2f32b13b1" Sep 29 10:11:14 crc kubenswrapper[4991]: E0929 10:11:14.512764 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"208794321b471e05c281ca4924c43af3acba8a4588fced6489198cd2f32b13b1\": container with ID starting with 208794321b471e05c281ca4924c43af3acba8a4588fced6489198cd2f32b13b1 not found: ID does not exist" containerID="208794321b471e05c281ca4924c43af3acba8a4588fced6489198cd2f32b13b1" Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.512793 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"208794321b471e05c281ca4924c43af3acba8a4588fced6489198cd2f32b13b1"} err="failed to get container status \"208794321b471e05c281ca4924c43af3acba8a4588fced6489198cd2f32b13b1\": 
rpc error: code = NotFound desc = could not find container \"208794321b471e05c281ca4924c43af3acba8a4588fced6489198cd2f32b13b1\": container with ID starting with 208794321b471e05c281ca4924c43af3acba8a4588fced6489198cd2f32b13b1 not found: ID does not exist" Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.512815 4991 scope.go:117] "RemoveContainer" containerID="393af9ddf95b5b4ecd6b46806bc7c7621f0b4bf829dbca252da3c4f4f84b0164" Sep 29 10:11:14 crc kubenswrapper[4991]: E0929 10:11:14.513215 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"393af9ddf95b5b4ecd6b46806bc7c7621f0b4bf829dbca252da3c4f4f84b0164\": container with ID starting with 393af9ddf95b5b4ecd6b46806bc7c7621f0b4bf829dbca252da3c4f4f84b0164 not found: ID does not exist" containerID="393af9ddf95b5b4ecd6b46806bc7c7621f0b4bf829dbca252da3c4f4f84b0164" Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.513265 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"393af9ddf95b5b4ecd6b46806bc7c7621f0b4bf829dbca252da3c4f4f84b0164"} err="failed to get container status \"393af9ddf95b5b4ecd6b46806bc7c7621f0b4bf829dbca252da3c4f4f84b0164\": rpc error: code = NotFound desc = could not find container \"393af9ddf95b5b4ecd6b46806bc7c7621f0b4bf829dbca252da3c4f4f84b0164\": container with ID starting with 393af9ddf95b5b4ecd6b46806bc7c7621f0b4bf829dbca252da3c4f4f84b0164 not found: ID does not exist" Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.513297 4991 scope.go:117] "RemoveContainer" containerID="1376c2e27ed54ea8603d9f396af0f6a6edbcaa3e9566e1d1824df5b4f24883c3" Sep 29 10:11:14 crc kubenswrapper[4991]: E0929 10:11:14.513588 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1376c2e27ed54ea8603d9f396af0f6a6edbcaa3e9566e1d1824df5b4f24883c3\": container with ID starting with 1376c2e27ed54ea8603d9f396af0f6a6edbcaa3e9566e1d1824df5b4f24883c3 not found: ID does not exist" containerID="1376c2e27ed54ea8603d9f396af0f6a6edbcaa3e9566e1d1824df5b4f24883c3" Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.513619 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1376c2e27ed54ea8603d9f396af0f6a6edbcaa3e9566e1d1824df5b4f24883c3"} err="failed to get container status \"1376c2e27ed54ea8603d9f396af0f6a6edbcaa3e9566e1d1824df5b4f24883c3\": rpc error: code = NotFound desc = could not find container \"1376c2e27ed54ea8603d9f396af0f6a6edbcaa3e9566e1d1824df5b4f24883c3\": container with ID starting with 1376c2e27ed54ea8603d9f396af0f6a6edbcaa3e9566e1d1824df5b4f24883c3 not found: ID does not exist" Sep 29 10:11:14 crc kubenswrapper[4991]: I0929 10:11:14.941556 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bed7cdc-6fad-4dce-a011-d33c1e767e6a" path="/var/lib/kubelet/pods/5bed7cdc-6fad-4dce-a011-d33c1e767e6a/volumes" Sep 29 10:11:21 crc kubenswrapper[4991]: I0929 10:11:21.795557 4991 scope.go:117] "RemoveContainer" containerID="6ec8d2a158fd054320043e19235f11ebeae29cf93c59628fc25b9d1f94010966" Sep 29 10:11:21 crc kubenswrapper[4991]: I0929 10:11:21.823745 4991 scope.go:117] "RemoveContainer" containerID="44b3d420c3c6b2107643732df4461a94920ad2803284bb960af5c98e188b0383" Sep 29 10:11:21 crc kubenswrapper[4991]: I0929 10:11:21.882681 4991 scope.go:117] "RemoveContainer" containerID="1789c205c8ab9539bea6c9f107bec91dca9f24c9039f9b7317d3171f9b107402" Sep 29 10:11:21 crc 
Sep 29 10:11:21 crc kubenswrapper[4991]: I0929 10:11:21.946294 4991 scope.go:117] "RemoveContainer" containerID="b78c6b0f184df95f9752141dbe2e3f8248985d155f3fa8e698132aa55832a83d"
Sep 29 10:11:22 crc kubenswrapper[4991]: I0929 10:11:22.010014 4991 scope.go:117] "RemoveContainer" containerID="c1624f3d5e0f874adaa7714fca14eb2e4d7db4fc536775b99ea60cab2da52984"
Sep 29 10:11:22 crc kubenswrapper[4991]: I0929 10:11:22.079569 4991 scope.go:117] "RemoveContainer" containerID="17a33c998709d5646413299975c5067ad5059866f91f86c3c9efd7da470e461d"
Sep 29 10:11:22 crc kubenswrapper[4991]: I0929 10:11:22.106185 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pc9tx"]
Sep 29 10:11:22 crc kubenswrapper[4991]: I0929 10:11:22.118031 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pc9tx"]
Sep 29 10:11:22 crc kubenswrapper[4991]: I0929 10:11:22.139045 4991 scope.go:117] "RemoveContainer" containerID="d2013d5070a201a30f7ab351c129fca33598d9962b6b7a01b61681d70af68af2"
Sep 29 10:11:22 crc kubenswrapper[4991]: I0929 10:11:22.939007 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="491bdd73-b7c6-4b9b-a875-aa8f1ba5a191" path="/var/lib/kubelet/pods/491bdd73-b7c6-4b9b-a875-aa8f1ba5a191/volumes"
Sep 29 10:11:27 crc kubenswrapper[4991]: I0929 10:11:27.031075 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-6b11-account-create-jpx6j"]
Sep 29 10:11:27 crc kubenswrapper[4991]: I0929 10:11:27.041145 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-6b11-account-create-jpx6j"]
Sep 29 10:11:28 crc kubenswrapper[4991]: I0929 10:11:28.939928 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2146a65-487e-4c07-b953-5dbc2c490f3b" path="/var/lib/kubelet/pods/a2146a65-487e-4c07-b953-5dbc2c490f3b/volumes"
Sep 29 10:11:49 crc kubenswrapper[4991]: I0929 10:11:49.051748 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-7qnxc"]
Sep 29 10:11:49 crc kubenswrapper[4991]: I0929 10:11:49.069462 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-7qnxc"]
Sep 29 10:11:50 crc kubenswrapper[4991]: I0929 10:11:50.939241 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25df2bc4-583e-4075-9ed2-49128b9b8d2f" path="/var/lib/kubelet/pods/25df2bc4-583e-4075-9ed2-49128b9b8d2f/volumes"
Sep 29 10:11:52 crc kubenswrapper[4991]: I0929 10:11:52.042334 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8q4rd"]
Sep 29 10:11:52 crc kubenswrapper[4991]: I0929 10:11:52.062687 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8q4rd"]
Sep 29 10:11:52 crc kubenswrapper[4991]: I0929 10:11:52.938440 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f93a627-850e-4991-aa3f-82372989186d" path="/var/lib/kubelet/pods/9f93a627-850e-4991-aa3f-82372989186d/volumes"
Sep 29 10:12:11 crc kubenswrapper[4991]: I0929 10:12:11.013222 4991 generic.go:334] "Generic (PLEG): container finished" podID="0c180725-3745-4c34-b5a2-2aa954097b80" containerID="798e26c2c9ddcf2034e01dea21b9e094a05b7dc7392df668e3658ff4a533162d" exitCode=0
Sep 29 10:12:11 crc kubenswrapper[4991]: I0929 10:12:11.013321 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj" event={"ID":"0c180725-3745-4c34-b5a2-2aa954097b80","Type":"ContainerDied","Data":"798e26c2c9ddcf2034e01dea21b9e094a05b7dc7392df668e3658ff4a533162d"}
Sep 29 10:12:12 crc kubenswrapper[4991]: I0929 10:12:12.534858 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj"
Sep 29 10:12:12 crc kubenswrapper[4991]: I0929 10:12:12.626513 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltv96\" (UniqueName: \"kubernetes.io/projected/0c180725-3745-4c34-b5a2-2aa954097b80-kube-api-access-ltv96\") pod \"0c180725-3745-4c34-b5a2-2aa954097b80\" (UID: \"0c180725-3745-4c34-b5a2-2aa954097b80\") "
Sep 29 10:12:12 crc kubenswrapper[4991]: I0929 10:12:12.626646 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0c180725-3745-4c34-b5a2-2aa954097b80-ssh-key\") pod \"0c180725-3745-4c34-b5a2-2aa954097b80\" (UID: \"0c180725-3745-4c34-b5a2-2aa954097b80\") "
Sep 29 10:12:12 crc kubenswrapper[4991]: I0929 10:12:12.626747 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c180725-3745-4c34-b5a2-2aa954097b80-inventory\") pod \"0c180725-3745-4c34-b5a2-2aa954097b80\" (UID: \"0c180725-3745-4c34-b5a2-2aa954097b80\") "
Sep 29 10:12:12 crc kubenswrapper[4991]: I0929 10:12:12.632602 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c180725-3745-4c34-b5a2-2aa954097b80-kube-api-access-ltv96" (OuterVolumeSpecName: "kube-api-access-ltv96") pod "0c180725-3745-4c34-b5a2-2aa954097b80" (UID: "0c180725-3745-4c34-b5a2-2aa954097b80"). InnerVolumeSpecName "kube-api-access-ltv96". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 10:12:12 crc kubenswrapper[4991]: I0929 10:12:12.662164 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c180725-3745-4c34-b5a2-2aa954097b80-inventory" (OuterVolumeSpecName: "inventory") pod "0c180725-3745-4c34-b5a2-2aa954097b80" (UID: "0c180725-3745-4c34-b5a2-2aa954097b80"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:12:12 crc kubenswrapper[4991]: I0929 10:12:12.663835 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c180725-3745-4c34-b5a2-2aa954097b80-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0c180725-3745-4c34-b5a2-2aa954097b80" (UID: "0c180725-3745-4c34-b5a2-2aa954097b80"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:12:12 crc kubenswrapper[4991]: I0929 10:12:12.730135 4991 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0c180725-3745-4c34-b5a2-2aa954097b80-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 29 10:12:12 crc kubenswrapper[4991]: I0929 10:12:12.730436 4991 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c180725-3745-4c34-b5a2-2aa954097b80-inventory\") on node \"crc\" DevicePath \"\"" Sep 29 10:12:12 crc kubenswrapper[4991]: I0929 10:12:12.730553 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltv96\" (UniqueName: \"kubernetes.io/projected/0c180725-3745-4c34-b5a2-2aa954097b80-kube-api-access-ltv96\") on node \"crc\" DevicePath \"\"" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.034626 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj" event={"ID":"0c180725-3745-4c34-b5a2-2aa954097b80","Type":"ContainerDied","Data":"7cd79959aca13d2efde5132aa6e7132d652f33dfdca36d61bb84a6313381b02a"} Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.034667 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cd79959aca13d2efde5132aa6e7132d652f33dfdca36d61bb84a6313381b02a" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.034723 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.129220 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh"] Sep 29 10:12:13 crc kubenswrapper[4991]: E0929 10:12:13.130052 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bed7cdc-6fad-4dce-a011-d33c1e767e6a" containerName="extract-content" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.130176 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bed7cdc-6fad-4dce-a011-d33c1e767e6a" containerName="extract-content" Sep 29 10:12:13 crc kubenswrapper[4991]: E0929 10:12:13.130253 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c180725-3745-4c34-b5a2-2aa954097b80" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.130331 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c180725-3745-4c34-b5a2-2aa954097b80" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Sep 29 10:12:13 crc kubenswrapper[4991]: E0929 10:12:13.130408 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bed7cdc-6fad-4dce-a011-d33c1e767e6a" containerName="extract-utilities" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.130481 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bed7cdc-6fad-4dce-a011-d33c1e767e6a" containerName="extract-utilities" Sep 29 10:12:13 crc kubenswrapper[4991]: E0929 10:12:13.130626 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bed7cdc-6fad-4dce-a011-d33c1e767e6a" containerName="registry-server" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.130711 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bed7cdc-6fad-4dce-a011-d33c1e767e6a" containerName="registry-server" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.131107 4991 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0c180725-3745-4c34-b5a2-2aa954097b80" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.131219 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bed7cdc-6fad-4dce-a011-d33c1e767e6a" containerName="registry-server" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.132478 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.139114 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.139400 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.139422 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.139621 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vtkz9" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.152392 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh"] Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.242340 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvv7q\" (UniqueName: \"kubernetes.io/projected/e4f55ea7-928c-4334-82da-f530b8a0d8a8-kube-api-access-qvv7q\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh\" (UID: \"e4f55ea7-928c-4334-82da-f530b8a0d8a8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.242415 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4f55ea7-928c-4334-82da-f530b8a0d8a8-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh\" (UID: \"e4f55ea7-928c-4334-82da-f530b8a0d8a8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.242447 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4f55ea7-928c-4334-82da-f530b8a0d8a8-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh\" (UID: \"e4f55ea7-928c-4334-82da-f530b8a0d8a8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.345072 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvv7q\" (UniqueName: \"kubernetes.io/projected/e4f55ea7-928c-4334-82da-f530b8a0d8a8-kube-api-access-qvv7q\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh\" (UID: \"e4f55ea7-928c-4334-82da-f530b8a0d8a8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.345142 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/e4f55ea7-928c-4334-82da-f530b8a0d8a8-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh\" (UID: \"e4f55ea7-928c-4334-82da-f530b8a0d8a8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.345179 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4f55ea7-928c-4334-82da-f530b8a0d8a8-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh\" (UID: \"e4f55ea7-928c-4334-82da-f530b8a0d8a8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.348806 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4f55ea7-928c-4334-82da-f530b8a0d8a8-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh\" (UID: \"e4f55ea7-928c-4334-82da-f530b8a0d8a8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.349002 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4f55ea7-928c-4334-82da-f530b8a0d8a8-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh\" (UID: \"e4f55ea7-928c-4334-82da-f530b8a0d8a8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.364759 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvv7q\" (UniqueName: \"kubernetes.io/projected/e4f55ea7-928c-4334-82da-f530b8a0d8a8-kube-api-access-qvv7q\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh\" (UID: \"e4f55ea7-928c-4334-82da-f530b8a0d8a8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh" Sep 29 10:12:13 crc kubenswrapper[4991]: I0929 10:12:13.457856 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh" Sep 29 10:12:14 crc kubenswrapper[4991]: I0929 10:12:14.034414 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh"] Sep 29 10:12:14 crc kubenswrapper[4991]: I0929 10:12:14.049320 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh" event={"ID":"e4f55ea7-928c-4334-82da-f530b8a0d8a8","Type":"ContainerStarted","Data":"51b788ae009dd0d66e16e5eb5b4f8af42dc7f961014984455ee1fd8cce36893f"} Sep 29 10:12:14 crc kubenswrapper[4991]: I0929 10:12:14.383644 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wvnsg"] Sep 29 10:12:14 crc kubenswrapper[4991]: I0929 10:12:14.387425 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wvnsg" Sep 29 10:12:14 crc kubenswrapper[4991]: I0929 10:12:14.394652 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wvnsg"] Sep 29 10:12:14 crc kubenswrapper[4991]: I0929 10:12:14.475543 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6e70cc7-9906-4662-b3b0-08cab9635b58-catalog-content\") pod \"community-operators-wvnsg\" (UID: \"b6e70cc7-9906-4662-b3b0-08cab9635b58\") " pod="openshift-marketplace/community-operators-wvnsg" Sep 29 10:12:14 crc kubenswrapper[4991]: I0929 10:12:14.475752 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6e70cc7-9906-4662-b3b0-08cab9635b58-utilities\") pod \"community-operators-wvnsg\" (UID: \"b6e70cc7-9906-4662-b3b0-08cab9635b58\") " pod="openshift-marketplace/community-operators-wvnsg" Sep 29 10:12:14 crc kubenswrapper[4991]: I0929 10:12:14.475816 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj46m\" (UniqueName: \"kubernetes.io/projected/b6e70cc7-9906-4662-b3b0-08cab9635b58-kube-api-access-kj46m\") pod \"community-operators-wvnsg\" (UID: \"b6e70cc7-9906-4662-b3b0-08cab9635b58\") " pod="openshift-marketplace/community-operators-wvnsg" Sep 29 10:12:14 crc kubenswrapper[4991]: I0929 10:12:14.577530 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6e70cc7-9906-4662-b3b0-08cab9635b58-catalog-content\") pod \"community-operators-wvnsg\" (UID: \"b6e70cc7-9906-4662-b3b0-08cab9635b58\") " pod="openshift-marketplace/community-operators-wvnsg" Sep 29 10:12:14 crc kubenswrapper[4991]: I0929 10:12:14.578009 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6e70cc7-9906-4662-b3b0-08cab9635b58-utilities\") pod \"community-operators-wvnsg\" (UID: \"b6e70cc7-9906-4662-b3b0-08cab9635b58\") " pod="openshift-marketplace/community-operators-wvnsg" Sep 29 10:12:14 crc kubenswrapper[4991]: I0929 10:12:14.578130 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj46m\" (UniqueName: \"kubernetes.io/projected/b6e70cc7-9906-4662-b3b0-08cab9635b58-kube-api-access-kj46m\") pod \"community-operators-wvnsg\" (UID: \"b6e70cc7-9906-4662-b3b0-08cab9635b58\") " pod="openshift-marketplace/community-operators-wvnsg" Sep 29 10:12:14 crc kubenswrapper[4991]: I0929 10:12:14.578567 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6e70cc7-9906-4662-b3b0-08cab9635b58-utilities\") pod \"community-operators-wvnsg\" (UID: \"b6e70cc7-9906-4662-b3b0-08cab9635b58\") " pod="openshift-marketplace/community-operators-wvnsg" Sep 29 10:12:14 crc kubenswrapper[4991]: I0929 10:12:14.578623 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6e70cc7-9906-4662-b3b0-08cab9635b58-catalog-content\") pod \"community-operators-wvnsg\" (UID: \"b6e70cc7-9906-4662-b3b0-08cab9635b58\") " pod="openshift-marketplace/community-operators-wvnsg" Sep 29 10:12:14 crc kubenswrapper[4991]: I0929 10:12:14.599993 4991 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kj46m\" (UniqueName: \"kubernetes.io/projected/b6e70cc7-9906-4662-b3b0-08cab9635b58-kube-api-access-kj46m\") pod \"community-operators-wvnsg\" (UID: \"b6e70cc7-9906-4662-b3b0-08cab9635b58\") " pod="openshift-marketplace/community-operators-wvnsg" Sep 29 10:12:14 crc kubenswrapper[4991]: I0929 10:12:14.723530 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wvnsg" Sep 29 10:12:15 crc kubenswrapper[4991]: I0929 10:12:15.064317 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh" event={"ID":"e4f55ea7-928c-4334-82da-f530b8a0d8a8","Type":"ContainerStarted","Data":"545b991183afb315607168b40e18b1ded458a982fcdefd5e5a91ec090730e712"} Sep 29 10:12:15 crc kubenswrapper[4991]: W0929 10:12:15.329681 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6e70cc7_9906_4662_b3b0_08cab9635b58.slice/crio-256ad23decb8281f07973a11d0adecb92e1a6d5e6d4684eedef6ded4dc06459f WatchSource:0}: Error finding container 256ad23decb8281f07973a11d0adecb92e1a6d5e6d4684eedef6ded4dc06459f: Status 404 returned error can't find the container with id 256ad23decb8281f07973a11d0adecb92e1a6d5e6d4684eedef6ded4dc06459f Sep 29 10:12:15 crc kubenswrapper[4991]: I0929 10:12:15.330526 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh" podStartSLOduration=1.874387392 podStartE2EDuration="2.33049868s" podCreationTimestamp="2025-09-29 10:12:13 +0000 UTC" firstStartedPulling="2025-09-29 10:12:14.034910365 +0000 UTC m=+2069.890838403" lastFinishedPulling="2025-09-29 10:12:14.491021663 +0000 UTC m=+2070.346949691" observedRunningTime="2025-09-29 10:12:15.086388435 +0000 UTC m=+2070.942316463" watchObservedRunningTime="2025-09-29 10:12:15.33049868 +0000 UTC m=+2071.186426708" Sep 29 10:12:15 crc kubenswrapper[4991]: I0929 10:12:15.331527 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wvnsg"] Sep 29 10:12:16 crc kubenswrapper[4991]: I0929 10:12:16.078052 4991 generic.go:334] "Generic (PLEG): container finished" podID="b6e70cc7-9906-4662-b3b0-08cab9635b58" containerID="8050a15101fb047d1f490cf3a35460ec70c209acaff2751d29489b93bc4aea40" exitCode=0 Sep 29 10:12:16 crc kubenswrapper[4991]: I0929 10:12:16.078115 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvnsg" event={"ID":"b6e70cc7-9906-4662-b3b0-08cab9635b58","Type":"ContainerDied","Data":"8050a15101fb047d1f490cf3a35460ec70c209acaff2751d29489b93bc4aea40"} Sep 29 10:12:16 crc kubenswrapper[4991]: I0929 10:12:16.078344 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvnsg" event={"ID":"b6e70cc7-9906-4662-b3b0-08cab9635b58","Type":"ContainerStarted","Data":"256ad23decb8281f07973a11d0adecb92e1a6d5e6d4684eedef6ded4dc06459f"} Sep 29 10:12:18 crc kubenswrapper[4991]: I0929 10:12:18.104870 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvnsg" event={"ID":"b6e70cc7-9906-4662-b3b0-08cab9635b58","Type":"ContainerStarted","Data":"e2e2b66c27bb9a53ed17654f2f27e327c4287b25245eb557f425c91f71f135d0"} Sep 29 10:12:19 crc kubenswrapper[4991]: I0929 10:12:19.122466 4991 generic.go:334] "Generic (PLEG): container finished" 
podID="b6e70cc7-9906-4662-b3b0-08cab9635b58" containerID="e2e2b66c27bb9a53ed17654f2f27e327c4287b25245eb557f425c91f71f135d0" exitCode=0 Sep 29 10:12:19 crc kubenswrapper[4991]: I0929 10:12:19.122516 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvnsg" event={"ID":"b6e70cc7-9906-4662-b3b0-08cab9635b58","Type":"ContainerDied","Data":"e2e2b66c27bb9a53ed17654f2f27e327c4287b25245eb557f425c91f71f135d0"} Sep 29 10:12:20 crc kubenswrapper[4991]: I0929 10:12:20.136509 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvnsg" event={"ID":"b6e70cc7-9906-4662-b3b0-08cab9635b58","Type":"ContainerStarted","Data":"b03e50f1c4c92f32a463b82e67fcda43479f9e41a5a952855994ca68794019db"} Sep 29 10:12:20 crc kubenswrapper[4991]: I0929 10:12:20.138734 4991 generic.go:334] "Generic (PLEG): container finished" podID="e4f55ea7-928c-4334-82da-f530b8a0d8a8" containerID="545b991183afb315607168b40e18b1ded458a982fcdefd5e5a91ec090730e712" exitCode=0 Sep 29 10:12:20 crc kubenswrapper[4991]: I0929 10:12:20.138786 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh" event={"ID":"e4f55ea7-928c-4334-82da-f530b8a0d8a8","Type":"ContainerDied","Data":"545b991183afb315607168b40e18b1ded458a982fcdefd5e5a91ec090730e712"} Sep 29 10:12:21 crc kubenswrapper[4991]: I0929 10:12:21.184501 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wvnsg" podStartSLOduration=3.576789519 podStartE2EDuration="7.184481441s" podCreationTimestamp="2025-09-29 10:12:14 +0000 UTC" firstStartedPulling="2025-09-29 10:12:16.082593013 +0000 UTC m=+2071.938521041" lastFinishedPulling="2025-09-29 10:12:19.690284935 +0000 UTC m=+2075.546212963" observedRunningTime="2025-09-29 10:12:21.172814005 +0000 UTC m=+2077.028742033" watchObservedRunningTime="2025-09-29 10:12:21.184481441 +0000 UTC m=+2077.040409469" Sep 29 10:12:21 crc kubenswrapper[4991]: I0929 10:12:21.678199 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh" Sep 29 10:12:21 crc kubenswrapper[4991]: I0929 10:12:21.771214 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4f55ea7-928c-4334-82da-f530b8a0d8a8-inventory\") pod \"e4f55ea7-928c-4334-82da-f530b8a0d8a8\" (UID: \"e4f55ea7-928c-4334-82da-f530b8a0d8a8\") " Sep 29 10:12:21 crc kubenswrapper[4991]: I0929 10:12:21.771568 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvv7q\" (UniqueName: \"kubernetes.io/projected/e4f55ea7-928c-4334-82da-f530b8a0d8a8-kube-api-access-qvv7q\") pod \"e4f55ea7-928c-4334-82da-f530b8a0d8a8\" (UID: \"e4f55ea7-928c-4334-82da-f530b8a0d8a8\") " Sep 29 10:12:21 crc kubenswrapper[4991]: I0929 10:12:21.771754 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4f55ea7-928c-4334-82da-f530b8a0d8a8-ssh-key\") pod \"e4f55ea7-928c-4334-82da-f530b8a0d8a8\" (UID: \"e4f55ea7-928c-4334-82da-f530b8a0d8a8\") " Sep 29 10:12:21 crc kubenswrapper[4991]: I0929 10:12:21.778350 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4f55ea7-928c-4334-82da-f530b8a0d8a8-kube-api-access-qvv7q" (OuterVolumeSpecName: "kube-api-access-qvv7q") pod "e4f55ea7-928c-4334-82da-f530b8a0d8a8" (UID: "e4f55ea7-928c-4334-82da-f530b8a0d8a8"). InnerVolumeSpecName "kube-api-access-qvv7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:12:21 crc kubenswrapper[4991]: I0929 10:12:21.804632 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4f55ea7-928c-4334-82da-f530b8a0d8a8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e4f55ea7-928c-4334-82da-f530b8a0d8a8" (UID: "e4f55ea7-928c-4334-82da-f530b8a0d8a8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:12:21 crc kubenswrapper[4991]: I0929 10:12:21.805771 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4f55ea7-928c-4334-82da-f530b8a0d8a8-inventory" (OuterVolumeSpecName: "inventory") pod "e4f55ea7-928c-4334-82da-f530b8a0d8a8" (UID: "e4f55ea7-928c-4334-82da-f530b8a0d8a8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:12:21 crc kubenswrapper[4991]: I0929 10:12:21.874205 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvv7q\" (UniqueName: \"kubernetes.io/projected/e4f55ea7-928c-4334-82da-f530b8a0d8a8-kube-api-access-qvv7q\") on node \"crc\" DevicePath \"\"" Sep 29 10:12:21 crc kubenswrapper[4991]: I0929 10:12:21.874253 4991 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e4f55ea7-928c-4334-82da-f530b8a0d8a8-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 29 10:12:21 crc kubenswrapper[4991]: I0929 10:12:21.874265 4991 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4f55ea7-928c-4334-82da-f530b8a0d8a8-inventory\") on node \"crc\" DevicePath \"\"" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.158890 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh" event={"ID":"e4f55ea7-928c-4334-82da-f530b8a0d8a8","Type":"ContainerDied","Data":"51b788ae009dd0d66e16e5eb5b4f8af42dc7f961014984455ee1fd8cce36893f"} Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.158933 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51b788ae009dd0d66e16e5eb5b4f8af42dc7f961014984455ee1fd8cce36893f" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.158974 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.252933 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-72d6p"] Sep 29 10:12:22 crc kubenswrapper[4991]: E0929 10:12:22.253661 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4f55ea7-928c-4334-82da-f530b8a0d8a8" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.253682 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4f55ea7-928c-4334-82da-f530b8a0d8a8" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.254001 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4f55ea7-928c-4334-82da-f530b8a0d8a8" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.266800 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-72d6p" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.270536 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.270786 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.271660 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vtkz9" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.272127 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.281900 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-72d6p"] Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.322452 4991 scope.go:117] "RemoveContainer" containerID="7421f5d620a108e68dfa52c47f163365876ea03c8fe5c06ef4c5194014d62475" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.361365 4991 scope.go:117] "RemoveContainer" containerID="75086cb08f12ef1d971e3d30cb2d54c8bf8cc2eb283fedd47e1b88a84c83f049" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.384723 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c6e6b0f0-8b72-4494-97cf-307a33dc67ca-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-72d6p\" (UID: \"c6e6b0f0-8b72-4494-97cf-307a33dc67ca\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-72d6p" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.385260 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn8sq\" (UniqueName: \"kubernetes.io/projected/c6e6b0f0-8b72-4494-97cf-307a33dc67ca-kube-api-access-fn8sq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-72d6p\" (UID: \"c6e6b0f0-8b72-4494-97cf-307a33dc67ca\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-72d6p" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.385365 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6e6b0f0-8b72-4494-97cf-307a33dc67ca-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-72d6p\" (UID: \"c6e6b0f0-8b72-4494-97cf-307a33dc67ca\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-72d6p" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.408418 4991 scope.go:117] "RemoveContainer" containerID="8bb4752652670b2b4f9c97254ce57df632c8be9ce3fe3f02a19157684cb98f23" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.441715 4991 scope.go:117] "RemoveContainer" containerID="84d8dee0a2b95a074eed7e0a8bc40cca1ec42c37bf8aab8e4623c090b1a03b9d" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.487510 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn8sq\" (UniqueName: \"kubernetes.io/projected/c6e6b0f0-8b72-4494-97cf-307a33dc67ca-kube-api-access-fn8sq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-72d6p\" (UID: \"c6e6b0f0-8b72-4494-97cf-307a33dc67ca\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-72d6p" Sep 29 10:12:22 crc 
kubenswrapper[4991]: I0929 10:12:22.487608 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6e6b0f0-8b72-4494-97cf-307a33dc67ca-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-72d6p\" (UID: \"c6e6b0f0-8b72-4494-97cf-307a33dc67ca\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-72d6p" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.487669 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c6e6b0f0-8b72-4494-97cf-307a33dc67ca-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-72d6p\" (UID: \"c6e6b0f0-8b72-4494-97cf-307a33dc67ca\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-72d6p" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.493389 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6e6b0f0-8b72-4494-97cf-307a33dc67ca-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-72d6p\" (UID: \"c6e6b0f0-8b72-4494-97cf-307a33dc67ca\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-72d6p" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.494625 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c6e6b0f0-8b72-4494-97cf-307a33dc67ca-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-72d6p\" (UID: \"c6e6b0f0-8b72-4494-97cf-307a33dc67ca\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-72d6p" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.504717 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn8sq\" (UniqueName: \"kubernetes.io/projected/c6e6b0f0-8b72-4494-97cf-307a33dc67ca-kube-api-access-fn8sq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-72d6p\" (UID: \"c6e6b0f0-8b72-4494-97cf-307a33dc67ca\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-72d6p" Sep 29 10:12:22 crc kubenswrapper[4991]: I0929 10:12:22.597112 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-72d6p" Sep 29 10:12:23 crc kubenswrapper[4991]: I0929 10:12:23.149243 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-72d6p"] Sep 29 10:12:23 crc kubenswrapper[4991]: W0929 10:12:23.152582 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6e6b0f0_8b72_4494_97cf_307a33dc67ca.slice/crio-78f523fbc63994433a763219e59b85e280f0b952ed234a460e6691b95a78681b WatchSource:0}: Error finding container 78f523fbc63994433a763219e59b85e280f0b952ed234a460e6691b95a78681b: Status 404 returned error can't find the container with id 78f523fbc63994433a763219e59b85e280f0b952ed234a460e6691b95a78681b Sep 29 10:12:23 crc kubenswrapper[4991]: I0929 10:12:23.171520 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-72d6p" event={"ID":"c6e6b0f0-8b72-4494-97cf-307a33dc67ca","Type":"ContainerStarted","Data":"78f523fbc63994433a763219e59b85e280f0b952ed234a460e6691b95a78681b"} Sep 29 10:12:24 crc kubenswrapper[4991]: I0929 10:12:24.182547 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-72d6p" event={"ID":"c6e6b0f0-8b72-4494-97cf-307a33dc67ca","Type":"ContainerStarted","Data":"c71b94a79b9facea0699baf5cc1e147e1689618aa54d95b48eb9230e14ecc1e5"} Sep 29 10:12:24 crc kubenswrapper[4991]: I0929 10:12:24.201328 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-72d6p" podStartSLOduration=1.600988737 podStartE2EDuration="2.201304349s" podCreationTimestamp="2025-09-29 10:12:22 +0000 UTC" firstStartedPulling="2025-09-29 10:12:23.155560559 +0000 UTC m=+2079.011488587" lastFinishedPulling="2025-09-29 10:12:23.755876171 +0000 UTC m=+2079.611804199" observedRunningTime="2025-09-29 10:12:24.194828699 +0000 UTC m=+2080.050756757" watchObservedRunningTime="2025-09-29 10:12:24.201304349 +0000 UTC m=+2080.057232377" Sep 29 10:12:24 crc kubenswrapper[4991]: I0929 10:12:24.723644 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wvnsg" Sep 29 10:12:24 crc kubenswrapper[4991]: I0929 10:12:24.725156 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wvnsg" Sep 29 10:12:24 crc kubenswrapper[4991]: I0929 10:12:24.787557 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wvnsg" Sep 29 10:12:25 crc kubenswrapper[4991]: I0929 10:12:25.253124 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wvnsg" Sep 29 10:12:25 crc kubenswrapper[4991]: I0929 10:12:25.310745 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wvnsg"] Sep 29 10:12:27 crc kubenswrapper[4991]: I0929 10:12:27.212454 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wvnsg" podUID="b6e70cc7-9906-4662-b3b0-08cab9635b58" containerName="registry-server" containerID="cri-o://b03e50f1c4c92f32a463b82e67fcda43479f9e41a5a952855994ca68794019db" gracePeriod=2 Sep 29 10:12:27 crc kubenswrapper[4991]: I0929 10:12:27.675989 4991 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wvnsg" Sep 29 10:12:27 crc kubenswrapper[4991]: I0929 10:12:27.811176 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6e70cc7-9906-4662-b3b0-08cab9635b58-catalog-content\") pod \"b6e70cc7-9906-4662-b3b0-08cab9635b58\" (UID: \"b6e70cc7-9906-4662-b3b0-08cab9635b58\") " Sep 29 10:12:27 crc kubenswrapper[4991]: I0929 10:12:27.811426 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj46m\" (UniqueName: \"kubernetes.io/projected/b6e70cc7-9906-4662-b3b0-08cab9635b58-kube-api-access-kj46m\") pod \"b6e70cc7-9906-4662-b3b0-08cab9635b58\" (UID: \"b6e70cc7-9906-4662-b3b0-08cab9635b58\") " Sep 29 10:12:27 crc kubenswrapper[4991]: I0929 10:12:27.811629 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6e70cc7-9906-4662-b3b0-08cab9635b58-utilities\") pod \"b6e70cc7-9906-4662-b3b0-08cab9635b58\" (UID: \"b6e70cc7-9906-4662-b3b0-08cab9635b58\") " Sep 29 10:12:27 crc kubenswrapper[4991]: I0929 10:12:27.813126 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6e70cc7-9906-4662-b3b0-08cab9635b58-utilities" (OuterVolumeSpecName: "utilities") pod "b6e70cc7-9906-4662-b3b0-08cab9635b58" (UID: "b6e70cc7-9906-4662-b3b0-08cab9635b58"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:12:27 crc kubenswrapper[4991]: I0929 10:12:27.819459 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6e70cc7-9906-4662-b3b0-08cab9635b58-kube-api-access-kj46m" (OuterVolumeSpecName: "kube-api-access-kj46m") pod "b6e70cc7-9906-4662-b3b0-08cab9635b58" (UID: "b6e70cc7-9906-4662-b3b0-08cab9635b58"). InnerVolumeSpecName "kube-api-access-kj46m". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:12:27 crc kubenswrapper[4991]: I0929 10:12:27.863253 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6e70cc7-9906-4662-b3b0-08cab9635b58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6e70cc7-9906-4662-b3b0-08cab9635b58" (UID: "b6e70cc7-9906-4662-b3b0-08cab9635b58"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:12:27 crc kubenswrapper[4991]: I0929 10:12:27.915186 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj46m\" (UniqueName: \"kubernetes.io/projected/b6e70cc7-9906-4662-b3b0-08cab9635b58-kube-api-access-kj46m\") on node \"crc\" DevicePath \"\"" Sep 29 10:12:27 crc kubenswrapper[4991]: I0929 10:12:27.915232 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6e70cc7-9906-4662-b3b0-08cab9635b58-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 10:12:27 crc kubenswrapper[4991]: I0929 10:12:27.915246 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6e70cc7-9906-4662-b3b0-08cab9635b58-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 10:12:28 crc kubenswrapper[4991]: I0929 10:12:28.224423 4991 generic.go:334] "Generic (PLEG): container finished" podID="b6e70cc7-9906-4662-b3b0-08cab9635b58" containerID="b03e50f1c4c92f32a463b82e67fcda43479f9e41a5a952855994ca68794019db" exitCode=0 Sep 29 10:12:28 crc kubenswrapper[4991]: I0929 10:12:28.224478 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvnsg" event={"ID":"b6e70cc7-9906-4662-b3b0-08cab9635b58","Type":"ContainerDied","Data":"b03e50f1c4c92f32a463b82e67fcda43479f9e41a5a952855994ca68794019db"} Sep 29 10:12:28 crc kubenswrapper[4991]: I0929 10:12:28.225365 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvnsg" event={"ID":"b6e70cc7-9906-4662-b3b0-08cab9635b58","Type":"ContainerDied","Data":"256ad23decb8281f07973a11d0adecb92e1a6d5e6d4684eedef6ded4dc06459f"} Sep 29 10:12:28 crc kubenswrapper[4991]: I0929 10:12:28.225389 4991 scope.go:117] "RemoveContainer" containerID="b03e50f1c4c92f32a463b82e67fcda43479f9e41a5a952855994ca68794019db" Sep 29 10:12:28 crc kubenswrapper[4991]: I0929 10:12:28.224511 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wvnsg" Sep 29 10:12:28 crc kubenswrapper[4991]: I0929 10:12:28.251730 4991 scope.go:117] "RemoveContainer" containerID="e2e2b66c27bb9a53ed17654f2f27e327c4287b25245eb557f425c91f71f135d0" Sep 29 10:12:28 crc kubenswrapper[4991]: I0929 10:12:28.264409 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wvnsg"] Sep 29 10:12:28 crc kubenswrapper[4991]: I0929 10:12:28.275708 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wvnsg"] Sep 29 10:12:28 crc kubenswrapper[4991]: I0929 10:12:28.281314 4991 scope.go:117] "RemoveContainer" containerID="8050a15101fb047d1f490cf3a35460ec70c209acaff2751d29489b93bc4aea40" Sep 29 10:12:28 crc kubenswrapper[4991]: I0929 10:12:28.336657 4991 scope.go:117] "RemoveContainer" containerID="b03e50f1c4c92f32a463b82e67fcda43479f9e41a5a952855994ca68794019db" Sep 29 10:12:28 crc kubenswrapper[4991]: E0929 10:12:28.337189 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b03e50f1c4c92f32a463b82e67fcda43479f9e41a5a952855994ca68794019db\": container with ID starting with b03e50f1c4c92f32a463b82e67fcda43479f9e41a5a952855994ca68794019db not found: ID does not exist" containerID="b03e50f1c4c92f32a463b82e67fcda43479f9e41a5a952855994ca68794019db" Sep 29 10:12:28 crc kubenswrapper[4991]: I0929 10:12:28.337232 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b03e50f1c4c92f32a463b82e67fcda43479f9e41a5a952855994ca68794019db"} err="failed to get container status \"b03e50f1c4c92f32a463b82e67fcda43479f9e41a5a952855994ca68794019db\": rpc error: code = NotFound desc = could not find container \"b03e50f1c4c92f32a463b82e67fcda43479f9e41a5a952855994ca68794019db\": container with ID starting with b03e50f1c4c92f32a463b82e67fcda43479f9e41a5a952855994ca68794019db not found: ID does not exist" Sep 29 10:12:28 crc kubenswrapper[4991]: I0929 10:12:28.337260 4991 scope.go:117] "RemoveContainer" containerID="e2e2b66c27bb9a53ed17654f2f27e327c4287b25245eb557f425c91f71f135d0" Sep 29 10:12:28 crc kubenswrapper[4991]: E0929 10:12:28.337730 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2e2b66c27bb9a53ed17654f2f27e327c4287b25245eb557f425c91f71f135d0\": container with ID starting with e2e2b66c27bb9a53ed17654f2f27e327c4287b25245eb557f425c91f71f135d0 not found: ID does not exist" containerID="e2e2b66c27bb9a53ed17654f2f27e327c4287b25245eb557f425c91f71f135d0" Sep 29 10:12:28 crc kubenswrapper[4991]: I0929 10:12:28.337792 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2e2b66c27bb9a53ed17654f2f27e327c4287b25245eb557f425c91f71f135d0"} err="failed to get container status \"e2e2b66c27bb9a53ed17654f2f27e327c4287b25245eb557f425c91f71f135d0\": rpc error: code = NotFound desc = could not find container \"e2e2b66c27bb9a53ed17654f2f27e327c4287b25245eb557f425c91f71f135d0\": container with ID starting with e2e2b66c27bb9a53ed17654f2f27e327c4287b25245eb557f425c91f71f135d0 not found: ID does not exist" Sep 29 10:12:28 crc kubenswrapper[4991]: I0929 10:12:28.337827 4991 scope.go:117] "RemoveContainer" containerID="8050a15101fb047d1f490cf3a35460ec70c209acaff2751d29489b93bc4aea40" Sep 29 10:12:28 crc kubenswrapper[4991]: E0929 10:12:28.338257 4991 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8050a15101fb047d1f490cf3a35460ec70c209acaff2751d29489b93bc4aea40\": container with ID starting with 8050a15101fb047d1f490cf3a35460ec70c209acaff2751d29489b93bc4aea40 not found: ID does not exist" containerID="8050a15101fb047d1f490cf3a35460ec70c209acaff2751d29489b93bc4aea40" Sep 29 10:12:28 crc kubenswrapper[4991]: I0929 10:12:28.338289 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8050a15101fb047d1f490cf3a35460ec70c209acaff2751d29489b93bc4aea40"} err="failed to get container status \"8050a15101fb047d1f490cf3a35460ec70c209acaff2751d29489b93bc4aea40\": rpc error: code = NotFound desc = could not find container \"8050a15101fb047d1f490cf3a35460ec70c209acaff2751d29489b93bc4aea40\": container with ID starting with 8050a15101fb047d1f490cf3a35460ec70c209acaff2751d29489b93bc4aea40 not found: ID does not exist" Sep 29 10:12:28 crc kubenswrapper[4991]: I0929 10:12:28.943336 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6e70cc7-9906-4662-b3b0-08cab9635b58" path="/var/lib/kubelet/pods/b6e70cc7-9906-4662-b3b0-08cab9635b58/volumes" Sep 29 10:12:33 crc kubenswrapper[4991]: I0929 10:12:33.045790 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-8hjxg"] Sep 29 10:12:33 crc kubenswrapper[4991]: I0929 10:12:33.058510 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-8hjxg"] Sep 29 10:12:34 crc kubenswrapper[4991]: I0929 10:12:34.942458 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="175b02db-b4a0-495b-8b53-522794feaae1" path="/var/lib/kubelet/pods/175b02db-b4a0-495b-8b53-522794feaae1/volumes" Sep 29 10:13:01 crc kubenswrapper[4991]: I0929 10:13:01.615003 4991 generic.go:334] "Generic (PLEG): container finished" podID="c6e6b0f0-8b72-4494-97cf-307a33dc67ca" containerID="c71b94a79b9facea0699baf5cc1e147e1689618aa54d95b48eb9230e14ecc1e5" exitCode=0 Sep 29 10:13:01 crc kubenswrapper[4991]: I0929 10:13:01.615112 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-72d6p" event={"ID":"c6e6b0f0-8b72-4494-97cf-307a33dc67ca","Type":"ContainerDied","Data":"c71b94a79b9facea0699baf5cc1e147e1689618aa54d95b48eb9230e14ecc1e5"} Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.288755 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-72d6p" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.438774 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6e6b0f0-8b72-4494-97cf-307a33dc67ca-inventory\") pod \"c6e6b0f0-8b72-4494-97cf-307a33dc67ca\" (UID: \"c6e6b0f0-8b72-4494-97cf-307a33dc67ca\") " Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.439231 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn8sq\" (UniqueName: \"kubernetes.io/projected/c6e6b0f0-8b72-4494-97cf-307a33dc67ca-kube-api-access-fn8sq\") pod \"c6e6b0f0-8b72-4494-97cf-307a33dc67ca\" (UID: \"c6e6b0f0-8b72-4494-97cf-307a33dc67ca\") " Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.439560 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c6e6b0f0-8b72-4494-97cf-307a33dc67ca-ssh-key\") pod \"c6e6b0f0-8b72-4494-97cf-307a33dc67ca\" (UID: \"c6e6b0f0-8b72-4494-97cf-307a33dc67ca\") " Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.446962 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6e6b0f0-8b72-4494-97cf-307a33dc67ca-kube-api-access-fn8sq" (OuterVolumeSpecName: "kube-api-access-fn8sq") pod "c6e6b0f0-8b72-4494-97cf-307a33dc67ca" (UID: "c6e6b0f0-8b72-4494-97cf-307a33dc67ca"). InnerVolumeSpecName "kube-api-access-fn8sq". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.476279 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e6b0f0-8b72-4494-97cf-307a33dc67ca-inventory" (OuterVolumeSpecName: "inventory") pod "c6e6b0f0-8b72-4494-97cf-307a33dc67ca" (UID: "c6e6b0f0-8b72-4494-97cf-307a33dc67ca"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.483885 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e6b0f0-8b72-4494-97cf-307a33dc67ca-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c6e6b0f0-8b72-4494-97cf-307a33dc67ca" (UID: "c6e6b0f0-8b72-4494-97cf-307a33dc67ca"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.542506 4991 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c6e6b0f0-8b72-4494-97cf-307a33dc67ca-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.542541 4991 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6e6b0f0-8b72-4494-97cf-307a33dc67ca-inventory\") on node \"crc\" DevicePath \"\"" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.542553 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn8sq\" (UniqueName: \"kubernetes.io/projected/c6e6b0f0-8b72-4494-97cf-307a33dc67ca-kube-api-access-fn8sq\") on node \"crc\" DevicePath \"\"" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.637068 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-72d6p" event={"ID":"c6e6b0f0-8b72-4494-97cf-307a33dc67ca","Type":"ContainerDied","Data":"78f523fbc63994433a763219e59b85e280f0b952ed234a460e6691b95a78681b"} Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.637133 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78f523fbc63994433a763219e59b85e280f0b952ed234a460e6691b95a78681b" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.637135 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-72d6p" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.815264 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp"] Sep 29 10:13:03 crc kubenswrapper[4991]: E0929 10:13:03.816205 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e70cc7-9906-4662-b3b0-08cab9635b58" containerName="extract-utilities" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.816232 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e70cc7-9906-4662-b3b0-08cab9635b58" containerName="extract-utilities" Sep 29 10:13:03 crc kubenswrapper[4991]: E0929 10:13:03.816297 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e70cc7-9906-4662-b3b0-08cab9635b58" containerName="extract-content" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.816309 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e70cc7-9906-4662-b3b0-08cab9635b58" containerName="extract-content" Sep 29 10:13:03 crc kubenswrapper[4991]: E0929 10:13:03.816404 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e6b0f0-8b72-4494-97cf-307a33dc67ca" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.816419 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e6b0f0-8b72-4494-97cf-307a33dc67ca" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Sep 29 10:13:03 crc kubenswrapper[4991]: E0929 10:13:03.816498 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e70cc7-9906-4662-b3b0-08cab9635b58" containerName="registry-server" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.816508 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e70cc7-9906-4662-b3b0-08cab9635b58" containerName="registry-server" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.816895 4991 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c6e6b0f0-8b72-4494-97cf-307a33dc67ca" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.816930 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6e70cc7-9906-4662-b3b0-08cab9635b58" containerName="registry-server" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.817979 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.821304 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vtkz9" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.822562 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.822825 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.823246 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.835275 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp"] Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.952746 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09d897dc-cbe0-442f-8a74-b557e900c2c9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp\" (UID: \"09d897dc-cbe0-442f-8a74-b557e900c2c9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.952884 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xcf8\" (UniqueName: \"kubernetes.io/projected/09d897dc-cbe0-442f-8a74-b557e900c2c9-kube-api-access-9xcf8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp\" (UID: \"09d897dc-cbe0-442f-8a74-b557e900c2c9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp" Sep 29 10:13:03 crc kubenswrapper[4991]: I0929 10:13:03.952980 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09d897dc-cbe0-442f-8a74-b557e900c2c9-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp\" (UID: \"09d897dc-cbe0-442f-8a74-b557e900c2c9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp" Sep 29 10:13:04 crc kubenswrapper[4991]: I0929 10:13:04.056677 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09d897dc-cbe0-442f-8a74-b557e900c2c9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp\" (UID: \"09d897dc-cbe0-442f-8a74-b557e900c2c9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp" Sep 29 10:13:04 crc kubenswrapper[4991]: I0929 10:13:04.056818 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xcf8\" (UniqueName: \"kubernetes.io/projected/09d897dc-cbe0-442f-8a74-b557e900c2c9-kube-api-access-9xcf8\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp\" (UID: \"09d897dc-cbe0-442f-8a74-b557e900c2c9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp" Sep 29 10:13:04 crc kubenswrapper[4991]: I0929 10:13:04.056886 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09d897dc-cbe0-442f-8a74-b557e900c2c9-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp\" (UID: \"09d897dc-cbe0-442f-8a74-b557e900c2c9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp" Sep 29 10:13:04 crc kubenswrapper[4991]: I0929 10:13:04.061482 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09d897dc-cbe0-442f-8a74-b557e900c2c9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp\" (UID: \"09d897dc-cbe0-442f-8a74-b557e900c2c9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp" Sep 29 10:13:04 crc kubenswrapper[4991]: I0929 10:13:04.062607 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09d897dc-cbe0-442f-8a74-b557e900c2c9-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp\" (UID: \"09d897dc-cbe0-442f-8a74-b557e900c2c9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp" Sep 29 10:13:04 crc kubenswrapper[4991]: I0929 10:13:04.078935 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xcf8\" (UniqueName: \"kubernetes.io/projected/09d897dc-cbe0-442f-8a74-b557e900c2c9-kube-api-access-9xcf8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp\" (UID: \"09d897dc-cbe0-442f-8a74-b557e900c2c9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp" Sep 29 10:13:04 crc kubenswrapper[4991]: I0929 10:13:04.136364 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp" Sep 29 10:13:04 crc kubenswrapper[4991]: I0929 10:13:04.796409 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp"] Sep 29 10:13:05 crc kubenswrapper[4991]: I0929 10:13:05.666089 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp" event={"ID":"09d897dc-cbe0-442f-8a74-b557e900c2c9","Type":"ContainerStarted","Data":"27d29c047629e6deee9a63d9ef3988067d6fdb5a7b6a5459b9a87422493f1085"} Sep 29 10:13:06 crc kubenswrapper[4991]: I0929 10:13:06.677725 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp" event={"ID":"09d897dc-cbe0-442f-8a74-b557e900c2c9","Type":"ContainerStarted","Data":"6b6710c8ad035e2f7c28c2a0baa6f2a4c3363f744ff1e643efd8626acbeb3cfb"} Sep 29 10:13:06 crc kubenswrapper[4991]: I0929 10:13:06.707172 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp" podStartSLOduration=2.963380683 podStartE2EDuration="3.707145669s" podCreationTimestamp="2025-09-29 10:13:03 +0000 UTC" firstStartedPulling="2025-09-29 10:13:04.797750209 +0000 UTC m=+2120.653678247" lastFinishedPulling="2025-09-29 10:13:05.541515205 +0000 UTC m=+2121.397443233" observedRunningTime="2025-09-29 10:13:06.696342916 +0000 UTC m=+2122.552270964" watchObservedRunningTime="2025-09-29 10:13:06.707145669 +0000 UTC m=+2122.563073697" Sep 29 10:13:07 crc kubenswrapper[4991]: I0929 10:13:07.946790 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:13:07 crc kubenswrapper[4991]: I0929 10:13:07.946858 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:13:22 crc kubenswrapper[4991]: I0929 10:13:22.576315 4991 scope.go:117] "RemoveContainer" containerID="0a4cc460d53436743fe1d8594100dc04886e3fad010c5626c4da7862b4471a9d" Sep 29 10:13:36 crc kubenswrapper[4991]: I0929 10:13:36.395308 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-thqzq"] Sep 29 10:13:36 crc kubenswrapper[4991]: I0929 10:13:36.399295 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-thqzq" Sep 29 10:13:36 crc kubenswrapper[4991]: I0929 10:13:36.409878 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f27f204-95a4-4a2f-8c95-6dc84aedb937-catalog-content\") pod \"certified-operators-thqzq\" (UID: \"1f27f204-95a4-4a2f-8c95-6dc84aedb937\") " pod="openshift-marketplace/certified-operators-thqzq" Sep 29 10:13:36 crc kubenswrapper[4991]: I0929 10:13:36.409985 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c6t9\" (UniqueName: \"kubernetes.io/projected/1f27f204-95a4-4a2f-8c95-6dc84aedb937-kube-api-access-2c6t9\") pod \"certified-operators-thqzq\" (UID: \"1f27f204-95a4-4a2f-8c95-6dc84aedb937\") " pod="openshift-marketplace/certified-operators-thqzq" Sep 29 10:13:36 crc kubenswrapper[4991]: I0929 10:13:36.410054 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f27f204-95a4-4a2f-8c95-6dc84aedb937-utilities\") pod \"certified-operators-thqzq\" (UID: \"1f27f204-95a4-4a2f-8c95-6dc84aedb937\") " pod="openshift-marketplace/certified-operators-thqzq" Sep 29 10:13:36 crc kubenswrapper[4991]: I0929 10:13:36.418467 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-thqzq"] Sep 29 10:13:36 crc kubenswrapper[4991]: I0929 10:13:36.512644 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f27f204-95a4-4a2f-8c95-6dc84aedb937-utilities\") pod \"certified-operators-thqzq\" (UID: \"1f27f204-95a4-4a2f-8c95-6dc84aedb937\") " pod="openshift-marketplace/certified-operators-thqzq" Sep 29 10:13:36 crc kubenswrapper[4991]: I0929 10:13:36.512835 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f27f204-95a4-4a2f-8c95-6dc84aedb937-catalog-content\") pod \"certified-operators-thqzq\" (UID: \"1f27f204-95a4-4a2f-8c95-6dc84aedb937\") " pod="openshift-marketplace/certified-operators-thqzq" Sep 29 10:13:36 crc kubenswrapper[4991]: I0929 10:13:36.512875 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c6t9\" (UniqueName: \"kubernetes.io/projected/1f27f204-95a4-4a2f-8c95-6dc84aedb937-kube-api-access-2c6t9\") pod \"certified-operators-thqzq\" (UID: \"1f27f204-95a4-4a2f-8c95-6dc84aedb937\") " pod="openshift-marketplace/certified-operators-thqzq" Sep 29 10:13:36 crc kubenswrapper[4991]: I0929 10:13:36.513620 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f27f204-95a4-4a2f-8c95-6dc84aedb937-utilities\") pod \"certified-operators-thqzq\" (UID: \"1f27f204-95a4-4a2f-8c95-6dc84aedb937\") " pod="openshift-marketplace/certified-operators-thqzq" Sep 29 10:13:36 crc kubenswrapper[4991]: I0929 10:13:36.513873 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f27f204-95a4-4a2f-8c95-6dc84aedb937-catalog-content\") pod \"certified-operators-thqzq\" (UID: \"1f27f204-95a4-4a2f-8c95-6dc84aedb937\") " pod="openshift-marketplace/certified-operators-thqzq" Sep 29 10:13:36 crc kubenswrapper[4991]: I0929 10:13:36.532488 4991 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2c6t9\" (UniqueName: \"kubernetes.io/projected/1f27f204-95a4-4a2f-8c95-6dc84aedb937-kube-api-access-2c6t9\") pod \"certified-operators-thqzq\" (UID: \"1f27f204-95a4-4a2f-8c95-6dc84aedb937\") " pod="openshift-marketplace/certified-operators-thqzq" Sep 29 10:13:36 crc kubenswrapper[4991]: I0929 10:13:36.726340 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-thqzq" Sep 29 10:13:37 crc kubenswrapper[4991]: I0929 10:13:37.303097 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-thqzq"] Sep 29 10:13:37 crc kubenswrapper[4991]: I0929 10:13:37.947313 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:13:37 crc kubenswrapper[4991]: I0929 10:13:37.947381 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:13:38 crc kubenswrapper[4991]: I0929 10:13:38.074728 4991 generic.go:334] "Generic (PLEG): container finished" podID="1f27f204-95a4-4a2f-8c95-6dc84aedb937" containerID="514ed95f50ee742ed9c4462f655a4cac38a12c45b9b78488f7ad1cb3694dab11" exitCode=0 Sep 29 10:13:38 crc kubenswrapper[4991]: I0929 10:13:38.074788 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-thqzq" event={"ID":"1f27f204-95a4-4a2f-8c95-6dc84aedb937","Type":"ContainerDied","Data":"514ed95f50ee742ed9c4462f655a4cac38a12c45b9b78488f7ad1cb3694dab11"} Sep 29 10:13:38 crc kubenswrapper[4991]: I0929 10:13:38.074815 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-thqzq" event={"ID":"1f27f204-95a4-4a2f-8c95-6dc84aedb937","Type":"ContainerStarted","Data":"b683bd27be8cfa61d37c442f2b9cfe303adaa206638018b75933025349289653"} Sep 29 10:13:38 crc kubenswrapper[4991]: I0929 10:13:38.079732 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 29 10:13:40 crc kubenswrapper[4991]: I0929 10:13:40.096570 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-thqzq" event={"ID":"1f27f204-95a4-4a2f-8c95-6dc84aedb937","Type":"ContainerStarted","Data":"19716cc2270d38b15be46cb864c98226cb2ae4b1870cebf8fe2f4bf2f6a716e4"} Sep 29 10:13:41 crc kubenswrapper[4991]: I0929 10:13:41.110199 4991 generic.go:334] "Generic (PLEG): container finished" podID="1f27f204-95a4-4a2f-8c95-6dc84aedb937" containerID="19716cc2270d38b15be46cb864c98226cb2ae4b1870cebf8fe2f4bf2f6a716e4" exitCode=0 Sep 29 10:13:41 crc kubenswrapper[4991]: I0929 10:13:41.110274 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-thqzq" event={"ID":"1f27f204-95a4-4a2f-8c95-6dc84aedb937","Type":"ContainerDied","Data":"19716cc2270d38b15be46cb864c98226cb2ae4b1870cebf8fe2f4bf2f6a716e4"} Sep 29 10:13:42 crc kubenswrapper[4991]: I0929 10:13:42.134813 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-thqzq" event={"ID":"1f27f204-95a4-4a2f-8c95-6dc84aedb937","Type":"ContainerStarted","Data":"e22ce6bab5f936405ca84e751c5635055d963db223c3115ae17032ecf8e5a791"} Sep 29 10:13:42 crc kubenswrapper[4991]: I0929 10:13:42.163272 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-thqzq" podStartSLOduration=2.7085015180000003 podStartE2EDuration="6.163250079s" podCreationTimestamp="2025-09-29 10:13:36 +0000 UTC" firstStartedPulling="2025-09-29 10:13:38.079420198 +0000 UTC m=+2153.935348226" lastFinishedPulling="2025-09-29 10:13:41.534168759 +0000 UTC m=+2157.390096787" observedRunningTime="2025-09-29 10:13:42.150762902 +0000 UTC m=+2158.006690920" watchObservedRunningTime="2025-09-29 10:13:42.163250079 +0000 UTC m=+2158.019178107" Sep 29 10:13:46 crc kubenswrapper[4991]: I0929 10:13:46.727140 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-thqzq" Sep 29 10:13:46 crc kubenswrapper[4991]: I0929 10:13:46.727620 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-thqzq" Sep 29 10:13:46 crc kubenswrapper[4991]: I0929 10:13:46.791193 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-thqzq" Sep 29 10:13:47 crc kubenswrapper[4991]: I0929 10:13:47.234010 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-thqzq" Sep 29 10:13:47 crc kubenswrapper[4991]: I0929 10:13:47.289316 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-thqzq"] Sep 29 10:13:49 crc kubenswrapper[4991]: I0929 10:13:49.200425 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-thqzq" podUID="1f27f204-95a4-4a2f-8c95-6dc84aedb937" containerName="registry-server" containerID="cri-o://e22ce6bab5f936405ca84e751c5635055d963db223c3115ae17032ecf8e5a791" gracePeriod=2 Sep 29 10:13:49 crc kubenswrapper[4991]: I0929 10:13:49.874564 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-thqzq" Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.034814 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f27f204-95a4-4a2f-8c95-6dc84aedb937-utilities\") pod \"1f27f204-95a4-4a2f-8c95-6dc84aedb937\" (UID: \"1f27f204-95a4-4a2f-8c95-6dc84aedb937\") " Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.035361 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f27f204-95a4-4a2f-8c95-6dc84aedb937-catalog-content\") pod \"1f27f204-95a4-4a2f-8c95-6dc84aedb937\" (UID: \"1f27f204-95a4-4a2f-8c95-6dc84aedb937\") " Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.036308 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f27f204-95a4-4a2f-8c95-6dc84aedb937-utilities" (OuterVolumeSpecName: "utilities") pod "1f27f204-95a4-4a2f-8c95-6dc84aedb937" (UID: "1f27f204-95a4-4a2f-8c95-6dc84aedb937"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.043606 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c6t9\" (UniqueName: \"kubernetes.io/projected/1f27f204-95a4-4a2f-8c95-6dc84aedb937-kube-api-access-2c6t9\") pod \"1f27f204-95a4-4a2f-8c95-6dc84aedb937\" (UID: \"1f27f204-95a4-4a2f-8c95-6dc84aedb937\") " Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.044977 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f27f204-95a4-4a2f-8c95-6dc84aedb937-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.052785 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f27f204-95a4-4a2f-8c95-6dc84aedb937-kube-api-access-2c6t9" (OuterVolumeSpecName: "kube-api-access-2c6t9") pod "1f27f204-95a4-4a2f-8c95-6dc84aedb937" (UID: "1f27f204-95a4-4a2f-8c95-6dc84aedb937"). InnerVolumeSpecName "kube-api-access-2c6t9". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.084249 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f27f204-95a4-4a2f-8c95-6dc84aedb937-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f27f204-95a4-4a2f-8c95-6dc84aedb937" (UID: "1f27f204-95a4-4a2f-8c95-6dc84aedb937"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.147249 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f27f204-95a4-4a2f-8c95-6dc84aedb937-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.147426 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c6t9\" (UniqueName: \"kubernetes.io/projected/1f27f204-95a4-4a2f-8c95-6dc84aedb937-kube-api-access-2c6t9\") on node \"crc\" DevicePath \"\"" Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.217802 4991 generic.go:334] "Generic (PLEG): container finished" podID="1f27f204-95a4-4a2f-8c95-6dc84aedb937" containerID="e22ce6bab5f936405ca84e751c5635055d963db223c3115ae17032ecf8e5a791" exitCode=0 Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.217852 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-thqzq" event={"ID":"1f27f204-95a4-4a2f-8c95-6dc84aedb937","Type":"ContainerDied","Data":"e22ce6bab5f936405ca84e751c5635055d963db223c3115ae17032ecf8e5a791"} Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.217881 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-thqzq" event={"ID":"1f27f204-95a4-4a2f-8c95-6dc84aedb937","Type":"ContainerDied","Data":"b683bd27be8cfa61d37c442f2b9cfe303adaa206638018b75933025349289653"} Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.217903 4991 scope.go:117] "RemoveContainer" containerID="e22ce6bab5f936405ca84e751c5635055d963db223c3115ae17032ecf8e5a791" Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.217859 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-thqzq" Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.280054 4991 scope.go:117] "RemoveContainer" containerID="19716cc2270d38b15be46cb864c98226cb2ae4b1870cebf8fe2f4bf2f6a716e4" Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.281454 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-thqzq"] Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.295515 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-thqzq"] Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.311223 4991 scope.go:117] "RemoveContainer" containerID="514ed95f50ee742ed9c4462f655a4cac38a12c45b9b78488f7ad1cb3694dab11" Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.360362 4991 scope.go:117] "RemoveContainer" containerID="e22ce6bab5f936405ca84e751c5635055d963db223c3115ae17032ecf8e5a791" Sep 29 10:13:50 crc kubenswrapper[4991]: E0929 10:13:50.364326 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e22ce6bab5f936405ca84e751c5635055d963db223c3115ae17032ecf8e5a791\": container with ID starting with e22ce6bab5f936405ca84e751c5635055d963db223c3115ae17032ecf8e5a791 not found: ID does not exist" containerID="e22ce6bab5f936405ca84e751c5635055d963db223c3115ae17032ecf8e5a791" Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.364375 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e22ce6bab5f936405ca84e751c5635055d963db223c3115ae17032ecf8e5a791"} err="failed to get container status \"e22ce6bab5f936405ca84e751c5635055d963db223c3115ae17032ecf8e5a791\": rpc error: code = NotFound desc = could not find container \"e22ce6bab5f936405ca84e751c5635055d963db223c3115ae17032ecf8e5a791\": container with ID starting with e22ce6bab5f936405ca84e751c5635055d963db223c3115ae17032ecf8e5a791 not found: ID does not exist" Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.364401 4991 scope.go:117] "RemoveContainer" containerID="19716cc2270d38b15be46cb864c98226cb2ae4b1870cebf8fe2f4bf2f6a716e4" Sep 29 10:13:50 crc kubenswrapper[4991]: E0929 10:13:50.365257 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19716cc2270d38b15be46cb864c98226cb2ae4b1870cebf8fe2f4bf2f6a716e4\": container with ID starting with 19716cc2270d38b15be46cb864c98226cb2ae4b1870cebf8fe2f4bf2f6a716e4 not found: ID does not exist" containerID="19716cc2270d38b15be46cb864c98226cb2ae4b1870cebf8fe2f4bf2f6a716e4" Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.365335 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19716cc2270d38b15be46cb864c98226cb2ae4b1870cebf8fe2f4bf2f6a716e4"} err="failed to get container status \"19716cc2270d38b15be46cb864c98226cb2ae4b1870cebf8fe2f4bf2f6a716e4\": rpc error: code = NotFound desc = could not find container \"19716cc2270d38b15be46cb864c98226cb2ae4b1870cebf8fe2f4bf2f6a716e4\": container with ID starting with 19716cc2270d38b15be46cb864c98226cb2ae4b1870cebf8fe2f4bf2f6a716e4 not found: ID does not exist" Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.365394 4991 scope.go:117] "RemoveContainer" containerID="514ed95f50ee742ed9c4462f655a4cac38a12c45b9b78488f7ad1cb3694dab11" Sep 29 10:13:50 crc kubenswrapper[4991]: E0929 10:13:50.365902 4991 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"514ed95f50ee742ed9c4462f655a4cac38a12c45b9b78488f7ad1cb3694dab11\": container with ID starting with 514ed95f50ee742ed9c4462f655a4cac38a12c45b9b78488f7ad1cb3694dab11 not found: ID does not exist" containerID="514ed95f50ee742ed9c4462f655a4cac38a12c45b9b78488f7ad1cb3694dab11" Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.365961 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"514ed95f50ee742ed9c4462f655a4cac38a12c45b9b78488f7ad1cb3694dab11"} err="failed to get container status \"514ed95f50ee742ed9c4462f655a4cac38a12c45b9b78488f7ad1cb3694dab11\": rpc error: code = NotFound desc = could not find container \"514ed95f50ee742ed9c4462f655a4cac38a12c45b9b78488f7ad1cb3694dab11\": container with ID starting with 514ed95f50ee742ed9c4462f655a4cac38a12c45b9b78488f7ad1cb3694dab11 not found: ID does not exist" Sep 29 10:13:50 crc kubenswrapper[4991]: I0929 10:13:50.944679 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f27f204-95a4-4a2f-8c95-6dc84aedb937" path="/var/lib/kubelet/pods/1f27f204-95a4-4a2f-8c95-6dc84aedb937/volumes" Sep 29 10:13:58 crc kubenswrapper[4991]: I0929 10:13:58.319663 4991 generic.go:334] "Generic (PLEG): container finished" podID="09d897dc-cbe0-442f-8a74-b557e900c2c9" containerID="6b6710c8ad035e2f7c28c2a0baa6f2a4c3363f744ff1e643efd8626acbeb3cfb" exitCode=0 Sep 29 10:13:58 crc kubenswrapper[4991]: I0929 10:13:58.320323 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp" event={"ID":"09d897dc-cbe0-442f-8a74-b557e900c2c9","Type":"ContainerDied","Data":"6b6710c8ad035e2f7c28c2a0baa6f2a4c3363f744ff1e643efd8626acbeb3cfb"} Sep 29 10:13:59 crc kubenswrapper[4991]: I0929 10:13:59.815475 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp" Sep 29 10:13:59 crc kubenswrapper[4991]: I0929 10:13:59.900039 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09d897dc-cbe0-442f-8a74-b557e900c2c9-inventory\") pod \"09d897dc-cbe0-442f-8a74-b557e900c2c9\" (UID: \"09d897dc-cbe0-442f-8a74-b557e900c2c9\") " Sep 29 10:13:59 crc kubenswrapper[4991]: I0929 10:13:59.900201 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xcf8\" (UniqueName: \"kubernetes.io/projected/09d897dc-cbe0-442f-8a74-b557e900c2c9-kube-api-access-9xcf8\") pod \"09d897dc-cbe0-442f-8a74-b557e900c2c9\" (UID: \"09d897dc-cbe0-442f-8a74-b557e900c2c9\") " Sep 29 10:13:59 crc kubenswrapper[4991]: I0929 10:13:59.900331 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09d897dc-cbe0-442f-8a74-b557e900c2c9-ssh-key\") pod \"09d897dc-cbe0-442f-8a74-b557e900c2c9\" (UID: \"09d897dc-cbe0-442f-8a74-b557e900c2c9\") " Sep 29 10:13:59 crc kubenswrapper[4991]: I0929 10:13:59.906121 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d897dc-cbe0-442f-8a74-b557e900c2c9-kube-api-access-9xcf8" (OuterVolumeSpecName: "kube-api-access-9xcf8") pod "09d897dc-cbe0-442f-8a74-b557e900c2c9" (UID: "09d897dc-cbe0-442f-8a74-b557e900c2c9"). InnerVolumeSpecName "kube-api-access-9xcf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:13:59 crc kubenswrapper[4991]: I0929 10:13:59.937354 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d897dc-cbe0-442f-8a74-b557e900c2c9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "09d897dc-cbe0-442f-8a74-b557e900c2c9" (UID: "09d897dc-cbe0-442f-8a74-b557e900c2c9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:13:59 crc kubenswrapper[4991]: I0929 10:13:59.938038 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d897dc-cbe0-442f-8a74-b557e900c2c9-inventory" (OuterVolumeSpecName: "inventory") pod "09d897dc-cbe0-442f-8a74-b557e900c2c9" (UID: "09d897dc-cbe0-442f-8a74-b557e900c2c9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.004205 4991 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09d897dc-cbe0-442f-8a74-b557e900c2c9-inventory\") on node \"crc\" DevicePath \"\"" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.004244 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xcf8\" (UniqueName: \"kubernetes.io/projected/09d897dc-cbe0-442f-8a74-b557e900c2c9-kube-api-access-9xcf8\") on node \"crc\" DevicePath \"\"" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.004255 4991 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09d897dc-cbe0-442f-8a74-b557e900c2c9-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.342370 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp" event={"ID":"09d897dc-cbe0-442f-8a74-b557e900c2c9","Type":"ContainerDied","Data":"27d29c047629e6deee9a63d9ef3988067d6fdb5a7b6a5459b9a87422493f1085"} Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.342747 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27d29c047629e6deee9a63d9ef3988067d6fdb5a7b6a5459b9a87422493f1085" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.342445 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.443809 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vp6db"] Sep 29 10:14:00 crc kubenswrapper[4991]: E0929 10:14:00.444289 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f27f204-95a4-4a2f-8c95-6dc84aedb937" containerName="extract-utilities" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.444308 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f27f204-95a4-4a2f-8c95-6dc84aedb937" containerName="extract-utilities" Sep 29 10:14:00 crc kubenswrapper[4991]: E0929 10:14:00.444326 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d897dc-cbe0-442f-8a74-b557e900c2c9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.444334 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d897dc-cbe0-442f-8a74-b557e900c2c9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Sep 29 10:14:00 crc kubenswrapper[4991]: E0929 10:14:00.444363 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f27f204-95a4-4a2f-8c95-6dc84aedb937" containerName="registry-server" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.444368 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f27f204-95a4-4a2f-8c95-6dc84aedb937" containerName="registry-server" Sep 29 10:14:00 crc kubenswrapper[4991]: E0929 10:14:00.444386 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f27f204-95a4-4a2f-8c95-6dc84aedb937" containerName="extract-content" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.444392 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f27f204-95a4-4a2f-8c95-6dc84aedb937" containerName="extract-content" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.444617 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f27f204-95a4-4a2f-8c95-6dc84aedb937" containerName="registry-server" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.444642 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="09d897dc-cbe0-442f-8a74-b557e900c2c9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.445430 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vp6db" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.448518 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.448596 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.448832 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.448842 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vtkz9" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.454155 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vp6db"] Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.519996 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt5hh\" (UniqueName: \"kubernetes.io/projected/c9f586c6-bbb9-4d95-96e9-e284bf421bc5-kube-api-access-vt5hh\") pod \"ssh-known-hosts-edpm-deployment-vp6db\" (UID: \"c9f586c6-bbb9-4d95-96e9-e284bf421bc5\") " pod="openstack/ssh-known-hosts-edpm-deployment-vp6db" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.520180 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9f586c6-bbb9-4d95-96e9-e284bf421bc5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vp6db\" (UID: \"c9f586c6-bbb9-4d95-96e9-e284bf421bc5\") " pod="openstack/ssh-known-hosts-edpm-deployment-vp6db" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.520228 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c9f586c6-bbb9-4d95-96e9-e284bf421bc5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vp6db\" (UID: \"c9f586c6-bbb9-4d95-96e9-e284bf421bc5\") " pod="openstack/ssh-known-hosts-edpm-deployment-vp6db" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.622791 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt5hh\" (UniqueName: \"kubernetes.io/projected/c9f586c6-bbb9-4d95-96e9-e284bf421bc5-kube-api-access-vt5hh\") pod \"ssh-known-hosts-edpm-deployment-vp6db\" (UID: \"c9f586c6-bbb9-4d95-96e9-e284bf421bc5\") " pod="openstack/ssh-known-hosts-edpm-deployment-vp6db" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.623030 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9f586c6-bbb9-4d95-96e9-e284bf421bc5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vp6db\" (UID: \"c9f586c6-bbb9-4d95-96e9-e284bf421bc5\") " pod="openstack/ssh-known-hosts-edpm-deployment-vp6db" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.623089 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c9f586c6-bbb9-4d95-96e9-e284bf421bc5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vp6db\" (UID: \"c9f586c6-bbb9-4d95-96e9-e284bf421bc5\") " pod="openstack/ssh-known-hosts-edpm-deployment-vp6db" Sep 29 10:14:00 crc 
kubenswrapper[4991]: I0929 10:14:00.628638 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c9f586c6-bbb9-4d95-96e9-e284bf421bc5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vp6db\" (UID: \"c9f586c6-bbb9-4d95-96e9-e284bf421bc5\") " pod="openstack/ssh-known-hosts-edpm-deployment-vp6db" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.628884 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9f586c6-bbb9-4d95-96e9-e284bf421bc5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vp6db\" (UID: \"c9f586c6-bbb9-4d95-96e9-e284bf421bc5\") " pod="openstack/ssh-known-hosts-edpm-deployment-vp6db" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.639715 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt5hh\" (UniqueName: \"kubernetes.io/projected/c9f586c6-bbb9-4d95-96e9-e284bf421bc5-kube-api-access-vt5hh\") pod \"ssh-known-hosts-edpm-deployment-vp6db\" (UID: \"c9f586c6-bbb9-4d95-96e9-e284bf421bc5\") " pod="openstack/ssh-known-hosts-edpm-deployment-vp6db" Sep 29 10:14:00 crc kubenswrapper[4991]: I0929 10:14:00.806741 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vp6db" Sep 29 10:14:01 crc kubenswrapper[4991]: I0929 10:14:01.362652 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vp6db"] Sep 29 10:14:02 crc kubenswrapper[4991]: I0929 10:14:02.361102 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vp6db" event={"ID":"c9f586c6-bbb9-4d95-96e9-e284bf421bc5","Type":"ContainerStarted","Data":"a22b2fb39740880f246ffced0094a53e7f081ad5abbfcce7ed4630d5f805393a"} Sep 29 10:14:02 crc kubenswrapper[4991]: I0929 10:14:02.361629 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vp6db" event={"ID":"c9f586c6-bbb9-4d95-96e9-e284bf421bc5","Type":"ContainerStarted","Data":"a6d371e1ba8e2c5db4a31e6adcee740bf93f7951a369c5ca86ed6d00e5bc3641"} Sep 29 10:14:02 crc kubenswrapper[4991]: I0929 10:14:02.383055 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-vp6db" podStartSLOduration=1.8163649899999998 podStartE2EDuration="2.383032717s" podCreationTimestamp="2025-09-29 10:14:00 +0000 UTC" firstStartedPulling="2025-09-29 10:14:01.374712703 +0000 UTC m=+2177.230640721" lastFinishedPulling="2025-09-29 10:14:01.9413804 +0000 UTC m=+2177.797308448" observedRunningTime="2025-09-29 10:14:02.373134258 +0000 UTC m=+2178.229062286" watchObservedRunningTime="2025-09-29 10:14:02.383032717 +0000 UTC m=+2178.238960755" Sep 29 10:14:07 crc kubenswrapper[4991]: I0929 10:14:07.947420 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:14:07 crc kubenswrapper[4991]: I0929 10:14:07.948025 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:14:07 crc kubenswrapper[4991]: I0929 10:14:07.948070 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 10:14:07 crc kubenswrapper[4991]: I0929 10:14:07.949057 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"db925c9374949534c77912b1b66d2f279911692e1df038e17b4b4e917154a930"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 10:14:07 crc kubenswrapper[4991]: I0929 10:14:07.949127 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://db925c9374949534c77912b1b66d2f279911692e1df038e17b4b4e917154a930" gracePeriod=600 Sep 29 10:14:08 crc kubenswrapper[4991]: I0929 10:14:08.422124 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="db925c9374949534c77912b1b66d2f279911692e1df038e17b4b4e917154a930" exitCode=0 Sep 29 10:14:08 crc kubenswrapper[4991]: I0929 10:14:08.422754 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"db925c9374949534c77912b1b66d2f279911692e1df038e17b4b4e917154a930"} Sep 29 10:14:08 crc kubenswrapper[4991]: I0929 10:14:08.422786 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188"} Sep 29 10:14:08 crc kubenswrapper[4991]: I0929 10:14:08.422804 4991 scope.go:117] "RemoveContainer" containerID="06feea440549c0df3184b5b69836ec6a03e105964d6fcaf7a40edf8230078357" Sep 29 10:14:09 crc kubenswrapper[4991]: I0929 10:14:09.442243 4991 generic.go:334] "Generic (PLEG): container finished" podID="c9f586c6-bbb9-4d95-96e9-e284bf421bc5" containerID="a22b2fb39740880f246ffced0094a53e7f081ad5abbfcce7ed4630d5f805393a" exitCode=0 Sep 29 10:14:09 crc kubenswrapper[4991]: I0929 10:14:09.442330 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vp6db" event={"ID":"c9f586c6-bbb9-4d95-96e9-e284bf421bc5","Type":"ContainerDied","Data":"a22b2fb39740880f246ffced0094a53e7f081ad5abbfcce7ed4630d5f805393a"} Sep 29 10:14:10 crc kubenswrapper[4991]: I0929 10:14:10.978025 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vp6db" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.108650 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c9f586c6-bbb9-4d95-96e9-e284bf421bc5-inventory-0\") pod \"c9f586c6-bbb9-4d95-96e9-e284bf421bc5\" (UID: \"c9f586c6-bbb9-4d95-96e9-e284bf421bc5\") " Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.109260 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5hh\" (UniqueName: \"kubernetes.io/projected/c9f586c6-bbb9-4d95-96e9-e284bf421bc5-kube-api-access-vt5hh\") pod \"c9f586c6-bbb9-4d95-96e9-e284bf421bc5\" (UID: \"c9f586c6-bbb9-4d95-96e9-e284bf421bc5\") " Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.109759 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9f586c6-bbb9-4d95-96e9-e284bf421bc5-ssh-key-openstack-edpm-ipam\") pod \"c9f586c6-bbb9-4d95-96e9-e284bf421bc5\" (UID: \"c9f586c6-bbb9-4d95-96e9-e284bf421bc5\") " Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.115359 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9f586c6-bbb9-4d95-96e9-e284bf421bc5-kube-api-access-vt5hh" (OuterVolumeSpecName: "kube-api-access-vt5hh") pod "c9f586c6-bbb9-4d95-96e9-e284bf421bc5" (UID: "c9f586c6-bbb9-4d95-96e9-e284bf421bc5"). InnerVolumeSpecName "kube-api-access-vt5hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.116074 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5hh\" (UniqueName: \"kubernetes.io/projected/c9f586c6-bbb9-4d95-96e9-e284bf421bc5-kube-api-access-vt5hh\") on node \"crc\" DevicePath \"\"" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.149726 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9f586c6-bbb9-4d95-96e9-e284bf421bc5-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c9f586c6-bbb9-4d95-96e9-e284bf421bc5" (UID: "c9f586c6-bbb9-4d95-96e9-e284bf421bc5"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.150237 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9f586c6-bbb9-4d95-96e9-e284bf421bc5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c9f586c6-bbb9-4d95-96e9-e284bf421bc5" (UID: "c9f586c6-bbb9-4d95-96e9-e284bf421bc5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.218311 4991 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c9f586c6-bbb9-4d95-96e9-e284bf421bc5-inventory-0\") on node \"crc\" DevicePath \"\"" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.218352 4991 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9f586c6-bbb9-4d95-96e9-e284bf421bc5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.463252 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vp6db" event={"ID":"c9f586c6-bbb9-4d95-96e9-e284bf421bc5","Type":"ContainerDied","Data":"a6d371e1ba8e2c5db4a31e6adcee740bf93f7951a369c5ca86ed6d00e5bc3641"} Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.463309 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6d371e1ba8e2c5db4a31e6adcee740bf93f7951a369c5ca86ed6d00e5bc3641" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.463374 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vp6db" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.538969 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl8lz"] Sep 29 10:14:11 crc kubenswrapper[4991]: E0929 10:14:11.539547 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9f586c6-bbb9-4d95-96e9-e284bf421bc5" containerName="ssh-known-hosts-edpm-deployment" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.539571 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9f586c6-bbb9-4d95-96e9-e284bf421bc5" containerName="ssh-known-hosts-edpm-deployment" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.539863 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9f586c6-bbb9-4d95-96e9-e284bf421bc5" containerName="ssh-known-hosts-edpm-deployment" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.540979 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl8lz" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.543129 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vtkz9" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.545520 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.545872 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.546412 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.554927 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl8lz"] Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.730247 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67b7b9fe-c0f0-473c-b67d-dcfbea22c02a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zl8lz\" (UID: \"67b7b9fe-c0f0-473c-b67d-dcfbea22c02a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl8lz" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.730611 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67b7b9fe-c0f0-473c-b67d-dcfbea22c02a-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zl8lz\" (UID: \"67b7b9fe-c0f0-473c-b67d-dcfbea22c02a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl8lz" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.730645 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggcsn\" (UniqueName: \"kubernetes.io/projected/67b7b9fe-c0f0-473c-b67d-dcfbea22c02a-kube-api-access-ggcsn\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zl8lz\" (UID: \"67b7b9fe-c0f0-473c-b67d-dcfbea22c02a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl8lz" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.833790 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67b7b9fe-c0f0-473c-b67d-dcfbea22c02a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zl8lz\" (UID: \"67b7b9fe-c0f0-473c-b67d-dcfbea22c02a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl8lz" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.834075 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67b7b9fe-c0f0-473c-b67d-dcfbea22c02a-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zl8lz\" (UID: \"67b7b9fe-c0f0-473c-b67d-dcfbea22c02a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl8lz" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.834113 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggcsn\" (UniqueName: \"kubernetes.io/projected/67b7b9fe-c0f0-473c-b67d-dcfbea22c02a-kube-api-access-ggcsn\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zl8lz\" (UID: \"67b7b9fe-c0f0-473c-b67d-dcfbea22c02a\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl8lz" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.838551 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67b7b9fe-c0f0-473c-b67d-dcfbea22c02a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zl8lz\" (UID: \"67b7b9fe-c0f0-473c-b67d-dcfbea22c02a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl8lz" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.839920 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67b7b9fe-c0f0-473c-b67d-dcfbea22c02a-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zl8lz\" (UID: \"67b7b9fe-c0f0-473c-b67d-dcfbea22c02a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl8lz" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.855666 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggcsn\" (UniqueName: \"kubernetes.io/projected/67b7b9fe-c0f0-473c-b67d-dcfbea22c02a-kube-api-access-ggcsn\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zl8lz\" (UID: \"67b7b9fe-c0f0-473c-b67d-dcfbea22c02a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl8lz" Sep 29 10:14:11 crc kubenswrapper[4991]: I0929 10:14:11.859429 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl8lz" Sep 29 10:14:12 crc kubenswrapper[4991]: W0929 10:14:12.424205 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67b7b9fe_c0f0_473c_b67d_dcfbea22c02a.slice/crio-6647b59499442ef2ec9165b523f071a23774138b1f4eab9a94b412c31c2901a2 WatchSource:0}: Error finding container 6647b59499442ef2ec9165b523f071a23774138b1f4eab9a94b412c31c2901a2: Status 404 returned error can't find the container with id 6647b59499442ef2ec9165b523f071a23774138b1f4eab9a94b412c31c2901a2 Sep 29 10:14:12 crc kubenswrapper[4991]: I0929 10:14:12.425450 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl8lz"] Sep 29 10:14:12 crc kubenswrapper[4991]: I0929 10:14:12.479918 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl8lz" event={"ID":"67b7b9fe-c0f0-473c-b67d-dcfbea22c02a","Type":"ContainerStarted","Data":"6647b59499442ef2ec9165b523f071a23774138b1f4eab9a94b412c31c2901a2"} Sep 29 10:14:13 crc kubenswrapper[4991]: I0929 10:14:13.495301 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl8lz" event={"ID":"67b7b9fe-c0f0-473c-b67d-dcfbea22c02a","Type":"ContainerStarted","Data":"b40e63d936ec38abc535fd351681b367ff39a58baced1c83007ee358b06c799d"} Sep 29 10:14:13 crc kubenswrapper[4991]: I0929 10:14:13.513855 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl8lz" podStartSLOduration=1.994565208 podStartE2EDuration="2.513837806s" podCreationTimestamp="2025-09-29 10:14:11 +0000 UTC" firstStartedPulling="2025-09-29 10:14:12.427613302 +0000 UTC m=+2188.283541330" lastFinishedPulling="2025-09-29 10:14:12.94688591 +0000 UTC m=+2188.802813928" observedRunningTime="2025-09-29 10:14:13.510326714 +0000 UTC m=+2189.366254732" watchObservedRunningTime="2025-09-29 10:14:13.513837806 +0000 UTC 
m=+2189.369765834" Sep 29 10:14:21 crc kubenswrapper[4991]: I0929 10:14:21.580524 4991 generic.go:334] "Generic (PLEG): container finished" podID="67b7b9fe-c0f0-473c-b67d-dcfbea22c02a" containerID="b40e63d936ec38abc535fd351681b367ff39a58baced1c83007ee358b06c799d" exitCode=0 Sep 29 10:14:21 crc kubenswrapper[4991]: I0929 10:14:21.580642 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl8lz" event={"ID":"67b7b9fe-c0f0-473c-b67d-dcfbea22c02a","Type":"ContainerDied","Data":"b40e63d936ec38abc535fd351681b367ff39a58baced1c83007ee358b06c799d"} Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.189658 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl8lz" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.338542 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggcsn\" (UniqueName: \"kubernetes.io/projected/67b7b9fe-c0f0-473c-b67d-dcfbea22c02a-kube-api-access-ggcsn\") pod \"67b7b9fe-c0f0-473c-b67d-dcfbea22c02a\" (UID: \"67b7b9fe-c0f0-473c-b67d-dcfbea22c02a\") " Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.338705 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67b7b9fe-c0f0-473c-b67d-dcfbea22c02a-ssh-key\") pod \"67b7b9fe-c0f0-473c-b67d-dcfbea22c02a\" (UID: \"67b7b9fe-c0f0-473c-b67d-dcfbea22c02a\") " Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.339023 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67b7b9fe-c0f0-473c-b67d-dcfbea22c02a-inventory\") pod \"67b7b9fe-c0f0-473c-b67d-dcfbea22c02a\" (UID: \"67b7b9fe-c0f0-473c-b67d-dcfbea22c02a\") " Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.353319 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67b7b9fe-c0f0-473c-b67d-dcfbea22c02a-kube-api-access-ggcsn" (OuterVolumeSpecName: "kube-api-access-ggcsn") pod "67b7b9fe-c0f0-473c-b67d-dcfbea22c02a" (UID: "67b7b9fe-c0f0-473c-b67d-dcfbea22c02a"). InnerVolumeSpecName "kube-api-access-ggcsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.372798 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67b7b9fe-c0f0-473c-b67d-dcfbea22c02a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "67b7b9fe-c0f0-473c-b67d-dcfbea22c02a" (UID: "67b7b9fe-c0f0-473c-b67d-dcfbea22c02a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.386406 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67b7b9fe-c0f0-473c-b67d-dcfbea22c02a-inventory" (OuterVolumeSpecName: "inventory") pod "67b7b9fe-c0f0-473c-b67d-dcfbea22c02a" (UID: "67b7b9fe-c0f0-473c-b67d-dcfbea22c02a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.442026 4991 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67b7b9fe-c0f0-473c-b67d-dcfbea22c02a-inventory\") on node \"crc\" DevicePath \"\"" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.442061 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggcsn\" (UniqueName: \"kubernetes.io/projected/67b7b9fe-c0f0-473c-b67d-dcfbea22c02a-kube-api-access-ggcsn\") on node \"crc\" DevicePath \"\"" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.442072 4991 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67b7b9fe-c0f0-473c-b67d-dcfbea22c02a-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.620586 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl8lz" event={"ID":"67b7b9fe-c0f0-473c-b67d-dcfbea22c02a","Type":"ContainerDied","Data":"6647b59499442ef2ec9165b523f071a23774138b1f4eab9a94b412c31c2901a2"} Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.621210 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6647b59499442ef2ec9165b523f071a23774138b1f4eab9a94b412c31c2901a2" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.621099 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zl8lz" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.684235 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz"] Sep 29 10:14:23 crc kubenswrapper[4991]: E0929 10:14:23.684826 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67b7b9fe-c0f0-473c-b67d-dcfbea22c02a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.684850 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="67b7b9fe-c0f0-473c-b67d-dcfbea22c02a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.685134 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="67b7b9fe-c0f0-473c-b67d-dcfbea22c02a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.686140 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.697419 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz"] Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.697453 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vtkz9" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.697654 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.697793 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.698089 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.854146 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f5c32cd-1b0e-49af-a986-5dbb4d56db1d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz\" (UID: \"2f5c32cd-1b0e-49af-a986-5dbb4d56db1d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.854286 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f5c32cd-1b0e-49af-a986-5dbb4d56db1d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz\" (UID: \"2f5c32cd-1b0e-49af-a986-5dbb4d56db1d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.854466 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6cts\" (UniqueName: \"kubernetes.io/projected/2f5c32cd-1b0e-49af-a986-5dbb4d56db1d-kube-api-access-m6cts\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz\" (UID: \"2f5c32cd-1b0e-49af-a986-5dbb4d56db1d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.956229 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f5c32cd-1b0e-49af-a986-5dbb4d56db1d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz\" (UID: \"2f5c32cd-1b0e-49af-a986-5dbb4d56db1d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.956396 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6cts\" (UniqueName: \"kubernetes.io/projected/2f5c32cd-1b0e-49af-a986-5dbb4d56db1d-kube-api-access-m6cts\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz\" (UID: \"2f5c32cd-1b0e-49af-a986-5dbb4d56db1d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.956472 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f5c32cd-1b0e-49af-a986-5dbb4d56db1d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz\" (UID: 
\"2f5c32cd-1b0e-49af-a986-5dbb4d56db1d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.961351 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f5c32cd-1b0e-49af-a986-5dbb4d56db1d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz\" (UID: \"2f5c32cd-1b0e-49af-a986-5dbb4d56db1d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.962703 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f5c32cd-1b0e-49af-a986-5dbb4d56db1d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz\" (UID: \"2f5c32cd-1b0e-49af-a986-5dbb4d56db1d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz" Sep 29 10:14:23 crc kubenswrapper[4991]: I0929 10:14:23.973537 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6cts\" (UniqueName: \"kubernetes.io/projected/2f5c32cd-1b0e-49af-a986-5dbb4d56db1d-kube-api-access-m6cts\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz\" (UID: \"2f5c32cd-1b0e-49af-a986-5dbb4d56db1d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz" Sep 29 10:14:24 crc kubenswrapper[4991]: I0929 10:14:24.013669 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz" Sep 29 10:14:24 crc kubenswrapper[4991]: I0929 10:14:24.621983 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz"] Sep 29 10:14:25 crc kubenswrapper[4991]: I0929 10:14:25.639676 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz" event={"ID":"2f5c32cd-1b0e-49af-a986-5dbb4d56db1d","Type":"ContainerStarted","Data":"6649c940c2a8e4afbab590838ec79903a9f273b3847fcb12d04694197b332748"} Sep 29 10:14:25 crc kubenswrapper[4991]: I0929 10:14:25.640241 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz" event={"ID":"2f5c32cd-1b0e-49af-a986-5dbb4d56db1d","Type":"ContainerStarted","Data":"f500cb8e8e68f90021ee3d348c287023c7c9d4001c74b1697d5a4939e587b009"} Sep 29 10:14:25 crc kubenswrapper[4991]: I0929 10:14:25.667151 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz" podStartSLOduration=1.97361125 podStartE2EDuration="2.667134508s" podCreationTimestamp="2025-09-29 10:14:23 +0000 UTC" firstStartedPulling="2025-09-29 10:14:24.631065867 +0000 UTC m=+2200.486993905" lastFinishedPulling="2025-09-29 10:14:25.324589135 +0000 UTC m=+2201.180517163" observedRunningTime="2025-09-29 10:14:25.659228581 +0000 UTC m=+2201.515156629" watchObservedRunningTime="2025-09-29 10:14:25.667134508 +0000 UTC m=+2201.523062536" Sep 29 10:14:35 crc kubenswrapper[4991]: I0929 10:14:35.752371 4991 generic.go:334] "Generic (PLEG): container finished" podID="2f5c32cd-1b0e-49af-a986-5dbb4d56db1d" containerID="6649c940c2a8e4afbab590838ec79903a9f273b3847fcb12d04694197b332748" exitCode=0 Sep 29 10:14:35 crc kubenswrapper[4991]: I0929 10:14:35.752417 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz" 
event={"ID":"2f5c32cd-1b0e-49af-a986-5dbb4d56db1d","Type":"ContainerDied","Data":"6649c940c2a8e4afbab590838ec79903a9f273b3847fcb12d04694197b332748"} Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.238794 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.389501 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6cts\" (UniqueName: \"kubernetes.io/projected/2f5c32cd-1b0e-49af-a986-5dbb4d56db1d-kube-api-access-m6cts\") pod \"2f5c32cd-1b0e-49af-a986-5dbb4d56db1d\" (UID: \"2f5c32cd-1b0e-49af-a986-5dbb4d56db1d\") " Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.389823 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f5c32cd-1b0e-49af-a986-5dbb4d56db1d-inventory\") pod \"2f5c32cd-1b0e-49af-a986-5dbb4d56db1d\" (UID: \"2f5c32cd-1b0e-49af-a986-5dbb4d56db1d\") " Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.390033 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f5c32cd-1b0e-49af-a986-5dbb4d56db1d-ssh-key\") pod \"2f5c32cd-1b0e-49af-a986-5dbb4d56db1d\" (UID: \"2f5c32cd-1b0e-49af-a986-5dbb4d56db1d\") " Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.404289 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f5c32cd-1b0e-49af-a986-5dbb4d56db1d-kube-api-access-m6cts" (OuterVolumeSpecName: "kube-api-access-m6cts") pod "2f5c32cd-1b0e-49af-a986-5dbb4d56db1d" (UID: "2f5c32cd-1b0e-49af-a986-5dbb4d56db1d"). InnerVolumeSpecName "kube-api-access-m6cts". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.420916 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f5c32cd-1b0e-49af-a986-5dbb4d56db1d-inventory" (OuterVolumeSpecName: "inventory") pod "2f5c32cd-1b0e-49af-a986-5dbb4d56db1d" (UID: "2f5c32cd-1b0e-49af-a986-5dbb4d56db1d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.430546 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f5c32cd-1b0e-49af-a986-5dbb4d56db1d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2f5c32cd-1b0e-49af-a986-5dbb4d56db1d" (UID: "2f5c32cd-1b0e-49af-a986-5dbb4d56db1d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.492777 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6cts\" (UniqueName: \"kubernetes.io/projected/2f5c32cd-1b0e-49af-a986-5dbb4d56db1d-kube-api-access-m6cts\") on node \"crc\" DevicePath \"\"" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.492819 4991 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f5c32cd-1b0e-49af-a986-5dbb4d56db1d-inventory\") on node \"crc\" DevicePath \"\"" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.492831 4991 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f5c32cd-1b0e-49af-a986-5dbb4d56db1d-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.780692 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz" event={"ID":"2f5c32cd-1b0e-49af-a986-5dbb4d56db1d","Type":"ContainerDied","Data":"f500cb8e8e68f90021ee3d348c287023c7c9d4001c74b1697d5a4939e587b009"} Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.781011 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f500cb8e8e68f90021ee3d348c287023c7c9d4001c74b1697d5a4939e587b009" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.780784 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.858042 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw"] Sep 29 10:14:37 crc kubenswrapper[4991]: E0929 10:14:37.858584 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f5c32cd-1b0e-49af-a986-5dbb4d56db1d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.858600 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f5c32cd-1b0e-49af-a986-5dbb4d56db1d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.858844 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f5c32cd-1b0e-49af-a986-5dbb4d56db1d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.859618 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.866823 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.867099 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.867192 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.867250 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.867396 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.867654 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.867804 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.867931 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.867934 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vtkz9" Sep 29 10:14:37 crc kubenswrapper[4991]: I0929 10:14:37.870611 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw"] Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.005696 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.006555 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.006720 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.007286 4991 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.008103 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.008351 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.008666 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.008763 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t279w\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-kube-api-access-t279w\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.008986 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.009051 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.009150 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.010035 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.010143 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.010329 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.010497 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.010537 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.072534 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-dckn7"] Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.084377 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-dckn7"] Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.113623 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 
10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.113698 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.113738 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.113762 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.113792 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t279w\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-kube-api-access-t279w\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.113852 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.113883 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.113936 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.113979 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.114021 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.114069 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.114134 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.114151 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.114172 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.114213 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.114233 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.120050 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.120249 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.120554 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.120582 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.120979 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.121054 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.124528 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.137092 4991 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.146528 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.146640 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.147136 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.147146 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.147402 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.147642 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.147741 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-ovn-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.150108 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t279w\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-kube-api-access-t279w\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.184084 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.725335 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw"] Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.791420 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" event={"ID":"5580bad6-d202-4da7-95d0-806deb62544c","Type":"ContainerStarted","Data":"2268688517e949f938f3d5239ae38a4ab03d2a80347de1a9fb8fd41a528446ab"} Sep 29 10:14:38 crc kubenswrapper[4991]: I0929 10:14:38.940372 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b62b4b30-b2b5-4cfa-8373-4fde7e9b2078" path="/var/lib/kubelet/pods/b62b4b30-b2b5-4cfa-8373-4fde7e9b2078/volumes" Sep 29 10:14:39 crc kubenswrapper[4991]: I0929 10:14:39.800643 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" event={"ID":"5580bad6-d202-4da7-95d0-806deb62544c","Type":"ContainerStarted","Data":"996b37085ae19ec687a8aa207ce30b68f282a04763a30ce91de1dbcd7c95eabf"} Sep 29 10:14:39 crc kubenswrapper[4991]: I0929 10:14:39.822825 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" podStartSLOduration=2.4307043090000002 podStartE2EDuration="2.822812258s" podCreationTimestamp="2025-09-29 10:14:37 +0000 UTC" firstStartedPulling="2025-09-29 10:14:38.727505869 +0000 UTC m=+2214.583433897" lastFinishedPulling="2025-09-29 10:14:39.119613818 +0000 UTC m=+2214.975541846" observedRunningTime="2025-09-29 10:14:39.81715766 +0000 UTC m=+2215.673085718" watchObservedRunningTime="2025-09-29 10:14:39.822812258 +0000 UTC m=+2215.678740286" Sep 29 10:15:00 crc kubenswrapper[4991]: I0929 10:15:00.154753 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319015-qxx87"] Sep 29 10:15:00 crc kubenswrapper[4991]: I0929 10:15:00.157656 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319015-qxx87" Sep 29 10:15:00 crc kubenswrapper[4991]: I0929 10:15:00.161550 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 29 10:15:00 crc kubenswrapper[4991]: I0929 10:15:00.161550 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 29 10:15:00 crc kubenswrapper[4991]: I0929 10:15:00.167143 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319015-qxx87"] Sep 29 10:15:00 crc kubenswrapper[4991]: I0929 10:15:00.279524 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp7k6\" (UniqueName: \"kubernetes.io/projected/f5540731-96f4-41ac-8338-046339de8fb6-kube-api-access-vp7k6\") pod \"collect-profiles-29319015-qxx87\" (UID: \"f5540731-96f4-41ac-8338-046339de8fb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319015-qxx87" Sep 29 10:15:00 crc kubenswrapper[4991]: I0929 10:15:00.279931 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5540731-96f4-41ac-8338-046339de8fb6-config-volume\") pod \"collect-profiles-29319015-qxx87\" (UID: \"f5540731-96f4-41ac-8338-046339de8fb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319015-qxx87" Sep 29 10:15:00 crc kubenswrapper[4991]: I0929 10:15:00.280076 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5540731-96f4-41ac-8338-046339de8fb6-secret-volume\") pod \"collect-profiles-29319015-qxx87\" (UID: \"f5540731-96f4-41ac-8338-046339de8fb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319015-qxx87" Sep 29 10:15:00 crc kubenswrapper[4991]: I0929 10:15:00.382163 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp7k6\" (UniqueName: \"kubernetes.io/projected/f5540731-96f4-41ac-8338-046339de8fb6-kube-api-access-vp7k6\") pod \"collect-profiles-29319015-qxx87\" (UID: \"f5540731-96f4-41ac-8338-046339de8fb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319015-qxx87" Sep 29 10:15:00 crc kubenswrapper[4991]: I0929 10:15:00.382553 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5540731-96f4-41ac-8338-046339de8fb6-config-volume\") pod \"collect-profiles-29319015-qxx87\" (UID: \"f5540731-96f4-41ac-8338-046339de8fb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319015-qxx87" Sep 29 10:15:00 crc kubenswrapper[4991]: I0929 10:15:00.382589 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5540731-96f4-41ac-8338-046339de8fb6-secret-volume\") pod \"collect-profiles-29319015-qxx87\" (UID: \"f5540731-96f4-41ac-8338-046339de8fb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319015-qxx87" Sep 29 10:15:00 crc kubenswrapper[4991]: I0929 10:15:00.383555 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5540731-96f4-41ac-8338-046339de8fb6-config-volume\") pod 
\"collect-profiles-29319015-qxx87\" (UID: \"f5540731-96f4-41ac-8338-046339de8fb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319015-qxx87" Sep 29 10:15:00 crc kubenswrapper[4991]: I0929 10:15:00.388902 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5540731-96f4-41ac-8338-046339de8fb6-secret-volume\") pod \"collect-profiles-29319015-qxx87\" (UID: \"f5540731-96f4-41ac-8338-046339de8fb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319015-qxx87" Sep 29 10:15:00 crc kubenswrapper[4991]: I0929 10:15:00.399777 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp7k6\" (UniqueName: \"kubernetes.io/projected/f5540731-96f4-41ac-8338-046339de8fb6-kube-api-access-vp7k6\") pod \"collect-profiles-29319015-qxx87\" (UID: \"f5540731-96f4-41ac-8338-046339de8fb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319015-qxx87" Sep 29 10:15:00 crc kubenswrapper[4991]: I0929 10:15:00.481031 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319015-qxx87" Sep 29 10:15:00 crc kubenswrapper[4991]: I0929 10:15:00.924851 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319015-qxx87"] Sep 29 10:15:00 crc kubenswrapper[4991]: W0929 10:15:00.932048 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5540731_96f4_41ac_8338_046339de8fb6.slice/crio-06d41f46fa10c328bf85092d875b9572f844365881171c880d11ae982b930d43 WatchSource:0}: Error finding container 06d41f46fa10c328bf85092d875b9572f844365881171c880d11ae982b930d43: Status 404 returned error can't find the container with id 06d41f46fa10c328bf85092d875b9572f844365881171c880d11ae982b930d43 Sep 29 10:15:01 crc kubenswrapper[4991]: I0929 10:15:01.064473 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319015-qxx87" event={"ID":"f5540731-96f4-41ac-8338-046339de8fb6","Type":"ContainerStarted","Data":"06d41f46fa10c328bf85092d875b9572f844365881171c880d11ae982b930d43"} Sep 29 10:15:02 crc kubenswrapper[4991]: I0929 10:15:02.077237 4991 generic.go:334] "Generic (PLEG): container finished" podID="f5540731-96f4-41ac-8338-046339de8fb6" containerID="e1b5cba0995b536ad998cb52c46fe5bea21e7106da62f89a4c2a27464ea412e7" exitCode=0 Sep 29 10:15:02 crc kubenswrapper[4991]: I0929 10:15:02.077333 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319015-qxx87" event={"ID":"f5540731-96f4-41ac-8338-046339de8fb6","Type":"ContainerDied","Data":"e1b5cba0995b536ad998cb52c46fe5bea21e7106da62f89a4c2a27464ea412e7"} Sep 29 10:15:03 crc kubenswrapper[4991]: I0929 10:15:03.496488 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319015-qxx87" Sep 29 10:15:03 crc kubenswrapper[4991]: I0929 10:15:03.561482 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5540731-96f4-41ac-8338-046339de8fb6-config-volume\") pod \"f5540731-96f4-41ac-8338-046339de8fb6\" (UID: \"f5540731-96f4-41ac-8338-046339de8fb6\") " Sep 29 10:15:03 crc kubenswrapper[4991]: I0929 10:15:03.561539 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5540731-96f4-41ac-8338-046339de8fb6-secret-volume\") pod \"f5540731-96f4-41ac-8338-046339de8fb6\" (UID: \"f5540731-96f4-41ac-8338-046339de8fb6\") " Sep 29 10:15:03 crc kubenswrapper[4991]: I0929 10:15:03.561589 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp7k6\" (UniqueName: \"kubernetes.io/projected/f5540731-96f4-41ac-8338-046339de8fb6-kube-api-access-vp7k6\") pod \"f5540731-96f4-41ac-8338-046339de8fb6\" (UID: \"f5540731-96f4-41ac-8338-046339de8fb6\") " Sep 29 10:15:03 crc kubenswrapper[4991]: I0929 10:15:03.563668 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5540731-96f4-41ac-8338-046339de8fb6-config-volume" (OuterVolumeSpecName: "config-volume") pod "f5540731-96f4-41ac-8338-046339de8fb6" (UID: "f5540731-96f4-41ac-8338-046339de8fb6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:15:03 crc kubenswrapper[4991]: I0929 10:15:03.568462 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5540731-96f4-41ac-8338-046339de8fb6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f5540731-96f4-41ac-8338-046339de8fb6" (UID: "f5540731-96f4-41ac-8338-046339de8fb6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:15:03 crc kubenswrapper[4991]: I0929 10:15:03.569003 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5540731-96f4-41ac-8338-046339de8fb6-kube-api-access-vp7k6" (OuterVolumeSpecName: "kube-api-access-vp7k6") pod "f5540731-96f4-41ac-8338-046339de8fb6" (UID: "f5540731-96f4-41ac-8338-046339de8fb6"). InnerVolumeSpecName "kube-api-access-vp7k6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:15:03 crc kubenswrapper[4991]: I0929 10:15:03.664059 4991 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5540731-96f4-41ac-8338-046339de8fb6-config-volume\") on node \"crc\" DevicePath \"\"" Sep 29 10:15:03 crc kubenswrapper[4991]: I0929 10:15:03.664096 4991 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5540731-96f4-41ac-8338-046339de8fb6-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 29 10:15:03 crc kubenswrapper[4991]: I0929 10:15:03.664106 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp7k6\" (UniqueName: \"kubernetes.io/projected/f5540731-96f4-41ac-8338-046339de8fb6-kube-api-access-vp7k6\") on node \"crc\" DevicePath \"\"" Sep 29 10:15:04 crc kubenswrapper[4991]: I0929 10:15:04.098712 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319015-qxx87" event={"ID":"f5540731-96f4-41ac-8338-046339de8fb6","Type":"ContainerDied","Data":"06d41f46fa10c328bf85092d875b9572f844365881171c880d11ae982b930d43"} Sep 29 10:15:04 crc kubenswrapper[4991]: I0929 10:15:04.098756 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06d41f46fa10c328bf85092d875b9572f844365881171c880d11ae982b930d43" Sep 29 10:15:04 crc kubenswrapper[4991]: I0929 10:15:04.099116 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319015-qxx87" Sep 29 10:15:04 crc kubenswrapper[4991]: I0929 10:15:04.577994 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz"] Sep 29 10:15:04 crc kubenswrapper[4991]: I0929 10:15:04.590157 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29318970-5b8cz"] Sep 29 10:15:04 crc kubenswrapper[4991]: I0929 10:15:04.940075 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5519a044-7a20-4a2a-ab24-7b09bfdf59bd" path="/var/lib/kubelet/pods/5519a044-7a20-4a2a-ab24-7b09bfdf59bd/volumes" Sep 29 10:15:18 crc kubenswrapper[4991]: I0929 10:15:18.033593 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-jqg8d"] Sep 29 10:15:18 crc kubenswrapper[4991]: I0929 10:15:18.043852 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-jqg8d"] Sep 29 10:15:18 crc kubenswrapper[4991]: I0929 10:15:18.942422 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c1c090-ed04-4867-b74f-c3daa921a4a1" path="/var/lib/kubelet/pods/71c1c090-ed04-4867-b74f-c3daa921a4a1/volumes" Sep 29 10:15:22 crc kubenswrapper[4991]: I0929 10:15:22.751759 4991 scope.go:117] "RemoveContainer" containerID="8f85b790dcf39eefb4516bebb8a8baa60dba324fc18fcb1a8dc673ea316c4c02" Sep 29 10:15:22 crc kubenswrapper[4991]: I0929 10:15:22.779715 4991 scope.go:117] "RemoveContainer" containerID="3a8c88156022bc39513a1e64f9d44312018457f9d1d9b33c976c4ea5e39a1dc4" Sep 29 10:15:22 crc kubenswrapper[4991]: I0929 10:15:22.861311 4991 scope.go:117] "RemoveContainer" containerID="2122d552c07017d8693458aec76f48ae4314f2de03f25dc91d33c2e0bbd2a2c9" Sep 29 10:15:26 crc kubenswrapper[4991]: I0929 10:15:26.324108 4991 generic.go:334] "Generic (PLEG): container finished" podID="5580bad6-d202-4da7-95d0-806deb62544c" 
containerID="996b37085ae19ec687a8aa207ce30b68f282a04763a30ce91de1dbcd7c95eabf" exitCode=0 Sep 29 10:15:26 crc kubenswrapper[4991]: I0929 10:15:26.324156 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" event={"ID":"5580bad6-d202-4da7-95d0-806deb62544c","Type":"ContainerDied","Data":"996b37085ae19ec687a8aa207ce30b68f282a04763a30ce91de1dbcd7c95eabf"} Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.813392 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.977021 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t279w\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-kube-api-access-t279w\") pod \"5580bad6-d202-4da7-95d0-806deb62544c\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.977069 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"5580bad6-d202-4da7-95d0-806deb62544c\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.977112 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-neutron-metadata-combined-ca-bundle\") pod \"5580bad6-d202-4da7-95d0-806deb62544c\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.977197 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-repo-setup-combined-ca-bundle\") pod \"5580bad6-d202-4da7-95d0-806deb62544c\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.977777 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-telemetry-power-monitoring-combined-ca-bundle\") pod \"5580bad6-d202-4da7-95d0-806deb62544c\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.977827 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-ssh-key\") pod \"5580bad6-d202-4da7-95d0-806deb62544c\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.977848 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-inventory\") pod \"5580bad6-d202-4da7-95d0-806deb62544c\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.977870 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"5580bad6-d202-4da7-95d0-806deb62544c\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.977898 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-libvirt-combined-ca-bundle\") pod \"5580bad6-d202-4da7-95d0-806deb62544c\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.977928 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-telemetry-combined-ca-bundle\") pod \"5580bad6-d202-4da7-95d0-806deb62544c\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.977977 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"5580bad6-d202-4da7-95d0-806deb62544c\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.977999 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-nova-combined-ca-bundle\") pod \"5580bad6-d202-4da7-95d0-806deb62544c\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.978018 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-bootstrap-combined-ca-bundle\") pod \"5580bad6-d202-4da7-95d0-806deb62544c\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.978038 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-ovn-combined-ca-bundle\") pod \"5580bad6-d202-4da7-95d0-806deb62544c\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.978109 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"5580bad6-d202-4da7-95d0-806deb62544c\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.978156 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"5580bad6-d202-4da7-95d0-806deb62544c\" (UID: \"5580bad6-d202-4da7-95d0-806deb62544c\") " Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.983934 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "5580bad6-d202-4da7-95d0-806deb62544c" (UID: "5580bad6-d202-4da7-95d0-806deb62544c"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.984645 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "5580bad6-d202-4da7-95d0-806deb62544c" (UID: "5580bad6-d202-4da7-95d0-806deb62544c"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.984747 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-kube-api-access-t279w" (OuterVolumeSpecName: "kube-api-access-t279w") pod "5580bad6-d202-4da7-95d0-806deb62544c" (UID: "5580bad6-d202-4da7-95d0-806deb62544c"). InnerVolumeSpecName "kube-api-access-t279w". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.987155 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "5580bad6-d202-4da7-95d0-806deb62544c" (UID: "5580bad6-d202-4da7-95d0-806deb62544c"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.987246 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "5580bad6-d202-4da7-95d0-806deb62544c" (UID: "5580bad6-d202-4da7-95d0-806deb62544c"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.992866 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "5580bad6-d202-4da7-95d0-806deb62544c" (UID: "5580bad6-d202-4da7-95d0-806deb62544c"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.993293 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "5580bad6-d202-4da7-95d0-806deb62544c" (UID: "5580bad6-d202-4da7-95d0-806deb62544c"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:15:27 crc kubenswrapper[4991]: I0929 10:15:27.997689 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "5580bad6-d202-4da7-95d0-806deb62544c" (UID: "5580bad6-d202-4da7-95d0-806deb62544c"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.003542 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "5580bad6-d202-4da7-95d0-806deb62544c" (UID: "5580bad6-d202-4da7-95d0-806deb62544c"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.003691 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "5580bad6-d202-4da7-95d0-806deb62544c" (UID: "5580bad6-d202-4da7-95d0-806deb62544c"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.006181 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "5580bad6-d202-4da7-95d0-806deb62544c" (UID: "5580bad6-d202-4da7-95d0-806deb62544c"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.013233 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "5580bad6-d202-4da7-95d0-806deb62544c" (UID: "5580bad6-d202-4da7-95d0-806deb62544c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.013280 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "5580bad6-d202-4da7-95d0-806deb62544c" (UID: "5580bad6-d202-4da7-95d0-806deb62544c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.013318 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "5580bad6-d202-4da7-95d0-806deb62544c" (UID: "5580bad6-d202-4da7-95d0-806deb62544c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.020652 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5580bad6-d202-4da7-95d0-806deb62544c" (UID: "5580bad6-d202-4da7-95d0-806deb62544c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.020888 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-inventory" (OuterVolumeSpecName: "inventory") pod "5580bad6-d202-4da7-95d0-806deb62544c" (UID: "5580bad6-d202-4da7-95d0-806deb62544c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.081034 4991 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.081283 4991 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.081294 4991 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.081305 4991 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.081316 4991 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.081327 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t279w\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-kube-api-access-t279w\") on node \"crc\" DevicePath \"\"" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.081338 4991 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.081347 4991 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.081355 4991 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.081364 4991 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.081374 4991 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.081382 4991 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-inventory\") on node \"crc\" DevicePath \"\"" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.081390 4991 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.081400 4991 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.081408 4991 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5580bad6-d202-4da7-95d0-806deb62544c-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.081416 4991 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5580bad6-d202-4da7-95d0-806deb62544c-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.346225 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" event={"ID":"5580bad6-d202-4da7-95d0-806deb62544c","Type":"ContainerDied","Data":"2268688517e949f938f3d5239ae38a4ab03d2a80347de1a9fb8fd41a528446ab"} Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.347048 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2268688517e949f938f3d5239ae38a4ab03d2a80347de1a9fb8fd41a528446ab" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.346299 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.462128 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5"] Sep 29 10:15:28 crc kubenswrapper[4991]: E0929 10:15:28.462601 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5540731-96f4-41ac-8338-046339de8fb6" containerName="collect-profiles" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.462619 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5540731-96f4-41ac-8338-046339de8fb6" containerName="collect-profiles" Sep 29 10:15:28 crc kubenswrapper[4991]: E0929 10:15:28.462651 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5580bad6-d202-4da7-95d0-806deb62544c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.462659 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="5580bad6-d202-4da7-95d0-806deb62544c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.469382 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="5580bad6-d202-4da7-95d0-806deb62544c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.469615 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5540731-96f4-41ac-8338-046339de8fb6" containerName="collect-profiles" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.471068 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.474053 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.474139 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.474265 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.474380 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vtkz9" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.474474 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.486635 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5"] Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.490818 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad453fb9-39ab-44af-99e0-4b5f416fd015-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vl5z5\" (UID: \"ad453fb9-39ab-44af-99e0-4b5f416fd015\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.491233 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/ad453fb9-39ab-44af-99e0-4b5f416fd015-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vl5z5\" (UID: \"ad453fb9-39ab-44af-99e0-4b5f416fd015\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.491359 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfgw5\" (UniqueName: \"kubernetes.io/projected/ad453fb9-39ab-44af-99e0-4b5f416fd015-kube-api-access-cfgw5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vl5z5\" (UID: \"ad453fb9-39ab-44af-99e0-4b5f416fd015\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.491502 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad453fb9-39ab-44af-99e0-4b5f416fd015-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vl5z5\" (UID: \"ad453fb9-39ab-44af-99e0-4b5f416fd015\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.491564 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ad453fb9-39ab-44af-99e0-4b5f416fd015-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vl5z5\" (UID: \"ad453fb9-39ab-44af-99e0-4b5f416fd015\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" Sep 29 10:15:28 crc kubenswrapper[4991]: E0929 10:15:28.567085 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5580bad6_d202_4da7_95d0_806deb62544c.slice/crio-2268688517e949f938f3d5239ae38a4ab03d2a80347de1a9fb8fd41a528446ab\": RecentStats: unable to find data in memory cache]" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.593888 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfgw5\" (UniqueName: \"kubernetes.io/projected/ad453fb9-39ab-44af-99e0-4b5f416fd015-kube-api-access-cfgw5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vl5z5\" (UID: \"ad453fb9-39ab-44af-99e0-4b5f416fd015\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.593998 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad453fb9-39ab-44af-99e0-4b5f416fd015-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vl5z5\" (UID: \"ad453fb9-39ab-44af-99e0-4b5f416fd015\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.594045 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ad453fb9-39ab-44af-99e0-4b5f416fd015-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vl5z5\" (UID: \"ad453fb9-39ab-44af-99e0-4b5f416fd015\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.594113 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad453fb9-39ab-44af-99e0-4b5f416fd015-ovn-combined-ca-bundle\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-vl5z5\" (UID: \"ad453fb9-39ab-44af-99e0-4b5f416fd015\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.594222 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad453fb9-39ab-44af-99e0-4b5f416fd015-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vl5z5\" (UID: \"ad453fb9-39ab-44af-99e0-4b5f416fd015\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.594898 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ad453fb9-39ab-44af-99e0-4b5f416fd015-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vl5z5\" (UID: \"ad453fb9-39ab-44af-99e0-4b5f416fd015\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.599159 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad453fb9-39ab-44af-99e0-4b5f416fd015-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vl5z5\" (UID: \"ad453fb9-39ab-44af-99e0-4b5f416fd015\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.604736 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad453fb9-39ab-44af-99e0-4b5f416fd015-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vl5z5\" (UID: \"ad453fb9-39ab-44af-99e0-4b5f416fd015\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.612692 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad453fb9-39ab-44af-99e0-4b5f416fd015-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vl5z5\" (UID: \"ad453fb9-39ab-44af-99e0-4b5f416fd015\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.614516 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfgw5\" (UniqueName: \"kubernetes.io/projected/ad453fb9-39ab-44af-99e0-4b5f416fd015-kube-api-access-cfgw5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vl5z5\" (UID: \"ad453fb9-39ab-44af-99e0-4b5f416fd015\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" Sep 29 10:15:28 crc kubenswrapper[4991]: I0929 10:15:28.791433 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" Sep 29 10:15:29 crc kubenswrapper[4991]: I0929 10:15:29.311136 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5"] Sep 29 10:15:29 crc kubenswrapper[4991]: I0929 10:15:29.358032 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" event={"ID":"ad453fb9-39ab-44af-99e0-4b5f416fd015","Type":"ContainerStarted","Data":"e70de53d714be3103edb0fb856c1d0ffadb81cc1e103da7324009a29e6b591aa"} Sep 29 10:15:30 crc kubenswrapper[4991]: I0929 10:15:30.373688 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" event={"ID":"ad453fb9-39ab-44af-99e0-4b5f416fd015","Type":"ContainerStarted","Data":"382f3a924eaa4e7f5ec63740b8e5914573d58f2ca357dc993eb0be2bc7c260cc"} Sep 29 10:16:35 crc kubenswrapper[4991]: I0929 10:16:35.118662 4991 generic.go:334] "Generic (PLEG): container finished" podID="ad453fb9-39ab-44af-99e0-4b5f416fd015" containerID="382f3a924eaa4e7f5ec63740b8e5914573d58f2ca357dc993eb0be2bc7c260cc" exitCode=0 Sep 29 10:16:35 crc kubenswrapper[4991]: I0929 10:16:35.118770 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" event={"ID":"ad453fb9-39ab-44af-99e0-4b5f416fd015","Type":"ContainerDied","Data":"382f3a924eaa4e7f5ec63740b8e5914573d58f2ca357dc993eb0be2bc7c260cc"} Sep 29 10:16:36 crc kubenswrapper[4991]: I0929 10:16:36.709261 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" Sep 29 10:16:36 crc kubenswrapper[4991]: I0929 10:16:36.844314 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfgw5\" (UniqueName: \"kubernetes.io/projected/ad453fb9-39ab-44af-99e0-4b5f416fd015-kube-api-access-cfgw5\") pod \"ad453fb9-39ab-44af-99e0-4b5f416fd015\" (UID: \"ad453fb9-39ab-44af-99e0-4b5f416fd015\") " Sep 29 10:16:36 crc kubenswrapper[4991]: I0929 10:16:36.844395 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ad453fb9-39ab-44af-99e0-4b5f416fd015-ovncontroller-config-0\") pod \"ad453fb9-39ab-44af-99e0-4b5f416fd015\" (UID: \"ad453fb9-39ab-44af-99e0-4b5f416fd015\") " Sep 29 10:16:36 crc kubenswrapper[4991]: I0929 10:16:36.844475 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad453fb9-39ab-44af-99e0-4b5f416fd015-ssh-key\") pod \"ad453fb9-39ab-44af-99e0-4b5f416fd015\" (UID: \"ad453fb9-39ab-44af-99e0-4b5f416fd015\") " Sep 29 10:16:36 crc kubenswrapper[4991]: I0929 10:16:36.844550 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad453fb9-39ab-44af-99e0-4b5f416fd015-inventory\") pod \"ad453fb9-39ab-44af-99e0-4b5f416fd015\" (UID: \"ad453fb9-39ab-44af-99e0-4b5f416fd015\") " Sep 29 10:16:36 crc kubenswrapper[4991]: I0929 10:16:36.844844 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad453fb9-39ab-44af-99e0-4b5f416fd015-ovn-combined-ca-bundle\") pod \"ad453fb9-39ab-44af-99e0-4b5f416fd015\" (UID: \"ad453fb9-39ab-44af-99e0-4b5f416fd015\") " Sep 29 10:16:36 crc kubenswrapper[4991]: I0929 
10:16:36.851829 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad453fb9-39ab-44af-99e0-4b5f416fd015-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ad453fb9-39ab-44af-99e0-4b5f416fd015" (UID: "ad453fb9-39ab-44af-99e0-4b5f416fd015"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:16:36 crc kubenswrapper[4991]: I0929 10:16:36.852151 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad453fb9-39ab-44af-99e0-4b5f416fd015-kube-api-access-cfgw5" (OuterVolumeSpecName: "kube-api-access-cfgw5") pod "ad453fb9-39ab-44af-99e0-4b5f416fd015" (UID: "ad453fb9-39ab-44af-99e0-4b5f416fd015"). InnerVolumeSpecName "kube-api-access-cfgw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:16:36 crc kubenswrapper[4991]: I0929 10:16:36.884326 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad453fb9-39ab-44af-99e0-4b5f416fd015-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ad453fb9-39ab-44af-99e0-4b5f416fd015" (UID: "ad453fb9-39ab-44af-99e0-4b5f416fd015"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:16:36 crc kubenswrapper[4991]: I0929 10:16:36.886302 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad453fb9-39ab-44af-99e0-4b5f416fd015-inventory" (OuterVolumeSpecName: "inventory") pod "ad453fb9-39ab-44af-99e0-4b5f416fd015" (UID: "ad453fb9-39ab-44af-99e0-4b5f416fd015"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:16:36 crc kubenswrapper[4991]: I0929 10:16:36.887854 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad453fb9-39ab-44af-99e0-4b5f416fd015-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "ad453fb9-39ab-44af-99e0-4b5f416fd015" (UID: "ad453fb9-39ab-44af-99e0-4b5f416fd015"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:16:36 crc kubenswrapper[4991]: I0929 10:16:36.949393 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfgw5\" (UniqueName: \"kubernetes.io/projected/ad453fb9-39ab-44af-99e0-4b5f416fd015-kube-api-access-cfgw5\") on node \"crc\" DevicePath \"\"" Sep 29 10:16:36 crc kubenswrapper[4991]: I0929 10:16:36.949436 4991 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ad453fb9-39ab-44af-99e0-4b5f416fd015-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Sep 29 10:16:36 crc kubenswrapper[4991]: I0929 10:16:36.949462 4991 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad453fb9-39ab-44af-99e0-4b5f416fd015-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 29 10:16:36 crc kubenswrapper[4991]: I0929 10:16:36.949482 4991 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad453fb9-39ab-44af-99e0-4b5f416fd015-inventory\") on node \"crc\" DevicePath \"\"" Sep 29 10:16:36 crc kubenswrapper[4991]: I0929 10:16:36.949500 4991 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad453fb9-39ab-44af-99e0-4b5f416fd015-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.142899 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" event={"ID":"ad453fb9-39ab-44af-99e0-4b5f416fd015","Type":"ContainerDied","Data":"e70de53d714be3103edb0fb856c1d0ffadb81cc1e103da7324009a29e6b591aa"} Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.142929 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vl5z5" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.142940 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e70de53d714be3103edb0fb856c1d0ffadb81cc1e103da7324009a29e6b591aa" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.237420 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d"] Sep 29 10:16:37 crc kubenswrapper[4991]: E0929 10:16:37.238029 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad453fb9-39ab-44af-99e0-4b5f416fd015" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.238056 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad453fb9-39ab-44af-99e0-4b5f416fd015" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.238375 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad453fb9-39ab-44af-99e0-4b5f416fd015" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.239405 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.246800 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.247333 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.247692 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.247843 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.248013 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vtkz9" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.248211 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.254096 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d"] Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.363015 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.363133 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.363177 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.363329 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw9dx\" (UniqueName: \"kubernetes.io/projected/f84e8053-85fb-4841-b585-ce6d6ecb8e45-kube-api-access-fw9dx\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.363381 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.363527 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.465454 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.465541 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw9dx\" (UniqueName: \"kubernetes.io/projected/f84e8053-85fb-4841-b585-ce6d6ecb8e45-kube-api-access-fw9dx\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.465563 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.465608 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.465797 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.465849 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.469758 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.470220 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.470580 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.472390 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.472995 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.490595 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw9dx\" (UniqueName: \"kubernetes.io/projected/f84e8053-85fb-4841-b585-ce6d6ecb8e45-kube-api-access-fw9dx\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.561497 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.947039 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:16:37 crc kubenswrapper[4991]: I0929 10:16:37.947476 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:16:38 crc kubenswrapper[4991]: I0929 10:16:38.123660 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d"] Sep 29 10:16:38 crc kubenswrapper[4991]: I0929 10:16:38.153748 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" event={"ID":"f84e8053-85fb-4841-b585-ce6d6ecb8e45","Type":"ContainerStarted","Data":"05ec83e75cbe3a8b54d8f150da41bc28ea131a432f1305931435e13441ff6efe"} Sep 29 10:16:39 crc kubenswrapper[4991]: I0929 10:16:39.165899 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" event={"ID":"f84e8053-85fb-4841-b585-ce6d6ecb8e45","Type":"ContainerStarted","Data":"ef668b748360bba6e2e3c1a203abda5049366b97e1f5cc015ac20f292965b056"} Sep 29 10:16:39 crc kubenswrapper[4991]: I0929 10:16:39.194756 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" podStartSLOduration=1.644746324 podStartE2EDuration="2.194737786s" podCreationTimestamp="2025-09-29 10:16:37 +0000 UTC" firstStartedPulling="2025-09-29 10:16:38.129311726 +0000 UTC m=+2333.985239744" lastFinishedPulling="2025-09-29 10:16:38.679303178 +0000 UTC m=+2334.535231206" observedRunningTime="2025-09-29 10:16:39.188450241 +0000 UTC m=+2335.044378279" watchObservedRunningTime="2025-09-29 10:16:39.194737786 +0000 UTC m=+2335.050665814" Sep 29 10:17:07 crc kubenswrapper[4991]: I0929 10:17:07.946658 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:17:07 crc kubenswrapper[4991]: I0929 10:17:07.948198 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:17:28 crc kubenswrapper[4991]: I0929 10:17:28.700805 4991 generic.go:334] "Generic (PLEG): container finished" podID="f84e8053-85fb-4841-b585-ce6d6ecb8e45" containerID="ef668b748360bba6e2e3c1a203abda5049366b97e1f5cc015ac20f292965b056" exitCode=0 Sep 29 10:17:28 crc kubenswrapper[4991]: I0929 10:17:28.700909 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" event={"ID":"f84e8053-85fb-4841-b585-ce6d6ecb8e45","Type":"ContainerDied","Data":"ef668b748360bba6e2e3c1a203abda5049366b97e1f5cc015ac20f292965b056"} Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.215635 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.328680 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw9dx\" (UniqueName: \"kubernetes.io/projected/f84e8053-85fb-4841-b585-ce6d6ecb8e45-kube-api-access-fw9dx\") pod \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.328729 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-nova-metadata-neutron-config-0\") pod \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.328839 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-neutron-ovn-metadata-agent-neutron-config-0\") pod \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.328894 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-inventory\") pod \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.329094 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-neutron-metadata-combined-ca-bundle\") pod \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.329213 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-ssh-key\") pod \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\" (UID: \"f84e8053-85fb-4841-b585-ce6d6ecb8e45\") " Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.336003 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "f84e8053-85fb-4841-b585-ce6d6ecb8e45" (UID: "f84e8053-85fb-4841-b585-ce6d6ecb8e45"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.338177 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f84e8053-85fb-4841-b585-ce6d6ecb8e45-kube-api-access-fw9dx" (OuterVolumeSpecName: "kube-api-access-fw9dx") pod "f84e8053-85fb-4841-b585-ce6d6ecb8e45" (UID: "f84e8053-85fb-4841-b585-ce6d6ecb8e45"). 
InnerVolumeSpecName "kube-api-access-fw9dx". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.365437 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "f84e8053-85fb-4841-b585-ce6d6ecb8e45" (UID: "f84e8053-85fb-4841-b585-ce6d6ecb8e45"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.366434 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "f84e8053-85fb-4841-b585-ce6d6ecb8e45" (UID: "f84e8053-85fb-4841-b585-ce6d6ecb8e45"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.366921 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-inventory" (OuterVolumeSpecName: "inventory") pod "f84e8053-85fb-4841-b585-ce6d6ecb8e45" (UID: "f84e8053-85fb-4841-b585-ce6d6ecb8e45"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.369678 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f84e8053-85fb-4841-b585-ce6d6ecb8e45" (UID: "f84e8053-85fb-4841-b585-ce6d6ecb8e45"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.432020 4991 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.432066 4991 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.432079 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw9dx\" (UniqueName: \"kubernetes.io/projected/f84e8053-85fb-4841-b585-ce6d6ecb8e45-kube-api-access-fw9dx\") on node \"crc\" DevicePath \"\"" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.432093 4991 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.432111 4991 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.432127 4991 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f84e8053-85fb-4841-b585-ce6d6ecb8e45-inventory\") on node \"crc\" DevicePath \"\"" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.729101 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" event={"ID":"f84e8053-85fb-4841-b585-ce6d6ecb8e45","Type":"ContainerDied","Data":"05ec83e75cbe3a8b54d8f150da41bc28ea131a432f1305931435e13441ff6efe"} Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.729156 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05ec83e75cbe3a8b54d8f150da41bc28ea131a432f1305931435e13441ff6efe" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.729182 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.833211 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz"] Sep 29 10:17:30 crc kubenswrapper[4991]: E0929 10:17:30.833737 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f84e8053-85fb-4841-b585-ce6d6ecb8e45" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.833757 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="f84e8053-85fb-4841-b585-ce6d6ecb8e45" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.834083 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="f84e8053-85fb-4841-b585-ce6d6ecb8e45" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.834940 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.837460 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.838299 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.838545 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.838638 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.841784 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vtkz9" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.843416 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz"] Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.869200 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz\" (UID: \"e3d94097-7104-4678-87b7-28f90003a13f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.869255 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz\" (UID: \"e3d94097-7104-4678-87b7-28f90003a13f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.869368 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnb6l\" (UniqueName: \"kubernetes.io/projected/e3d94097-7104-4678-87b7-28f90003a13f-kube-api-access-dnb6l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz\" (UID: \"e3d94097-7104-4678-87b7-28f90003a13f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.869482 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz\" (UID: \"e3d94097-7104-4678-87b7-28f90003a13f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.869591 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz\" (UID: \"e3d94097-7104-4678-87b7-28f90003a13f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.971943 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz\" (UID: \"e3d94097-7104-4678-87b7-28f90003a13f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.972451 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz\" (UID: \"e3d94097-7104-4678-87b7-28f90003a13f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.972752 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz\" (UID: \"e3d94097-7104-4678-87b7-28f90003a13f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.972882 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz\" (UID: \"e3d94097-7104-4678-87b7-28f90003a13f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.973127 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnb6l\" (UniqueName: \"kubernetes.io/projected/e3d94097-7104-4678-87b7-28f90003a13f-kube-api-access-dnb6l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz\" (UID: \"e3d94097-7104-4678-87b7-28f90003a13f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.975853 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz\" (UID: \"e3d94097-7104-4678-87b7-28f90003a13f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.977006 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz\" (UID: \"e3d94097-7104-4678-87b7-28f90003a13f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.977771 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz\" (UID: \"e3d94097-7104-4678-87b7-28f90003a13f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.984504 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-ssh-key\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz\" (UID: \"e3d94097-7104-4678-87b7-28f90003a13f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" Sep 29 10:17:30 crc kubenswrapper[4991]: I0929 10:17:30.994814 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnb6l\" (UniqueName: \"kubernetes.io/projected/e3d94097-7104-4678-87b7-28f90003a13f-kube-api-access-dnb6l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz\" (UID: \"e3d94097-7104-4678-87b7-28f90003a13f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" Sep 29 10:17:31 crc kubenswrapper[4991]: I0929 10:17:31.164498 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" Sep 29 10:17:31 crc kubenswrapper[4991]: I0929 10:17:31.702503 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz"] Sep 29 10:17:31 crc kubenswrapper[4991]: I0929 10:17:31.744026 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" event={"ID":"e3d94097-7104-4678-87b7-28f90003a13f","Type":"ContainerStarted","Data":"85efe0956da622cabf62e98a4758060b648515635b9b7eba7a9ac181183526fa"} Sep 29 10:17:32 crc kubenswrapper[4991]: I0929 10:17:32.757994 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" event={"ID":"e3d94097-7104-4678-87b7-28f90003a13f","Type":"ContainerStarted","Data":"017e5488f8cc540543627d2bfec9b5c72c4df607ff4ea34420eb6cf78f3b94a7"} Sep 29 10:17:32 crc kubenswrapper[4991]: I0929 10:17:32.782544 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" podStartSLOduration=2.111047935 podStartE2EDuration="2.782524966s" podCreationTimestamp="2025-09-29 10:17:30 +0000 UTC" firstStartedPulling="2025-09-29 10:17:31.706096289 +0000 UTC m=+2387.562024317" lastFinishedPulling="2025-09-29 10:17:32.37757333 +0000 UTC m=+2388.233501348" observedRunningTime="2025-09-29 10:17:32.775112362 +0000 UTC m=+2388.631040390" watchObservedRunningTime="2025-09-29 10:17:32.782524966 +0000 UTC m=+2388.638452994" Sep 29 10:17:37 crc kubenswrapper[4991]: I0929 10:17:37.947443 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:17:37 crc kubenswrapper[4991]: I0929 10:17:37.947801 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:17:37 crc kubenswrapper[4991]: I0929 10:17:37.947864 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 10:17:37 crc kubenswrapper[4991]: I0929 10:17:37.948941 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188"} 
pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 10:17:37 crc kubenswrapper[4991]: I0929 10:17:37.949081 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188" gracePeriod=600 Sep 29 10:17:38 crc kubenswrapper[4991]: E0929 10:17:38.068535 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:17:38 crc kubenswrapper[4991]: I0929 10:17:38.834631 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188" exitCode=0 Sep 29 10:17:38 crc kubenswrapper[4991]: I0929 10:17:38.834749 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188"} Sep 29 10:17:38 crc kubenswrapper[4991]: I0929 10:17:38.835076 4991 scope.go:117] "RemoveContainer" containerID="db925c9374949534c77912b1b66d2f279911692e1df038e17b4b4e917154a930" Sep 29 10:17:38 crc kubenswrapper[4991]: I0929 10:17:38.836542 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188" Sep 29 10:17:38 crc kubenswrapper[4991]: E0929 10:17:38.837311 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:17:53 crc kubenswrapper[4991]: I0929 10:17:53.948942 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188" Sep 29 10:17:53 crc kubenswrapper[4991]: E0929 10:17:53.950270 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:18:04 crc kubenswrapper[4991]: I0929 10:18:04.935505 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188" Sep 29 10:18:04 crc kubenswrapper[4991]: E0929 10:18:04.936391 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:18:15 crc kubenswrapper[4991]: I0929 10:18:15.927224 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188" Sep 29 10:18:15 crc kubenswrapper[4991]: E0929 10:18:15.928110 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:18:29 crc kubenswrapper[4991]: I0929 10:18:29.926685 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188" Sep 29 10:18:29 crc kubenswrapper[4991]: E0929 10:18:29.927530 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:18:42 crc kubenswrapper[4991]: I0929 10:18:42.927298 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188" Sep 29 10:18:42 crc kubenswrapper[4991]: E0929 10:18:42.928638 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:18:51 crc kubenswrapper[4991]: I0929 10:18:51.643121 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dm5lq"] Sep 29 10:18:51 crc kubenswrapper[4991]: I0929 10:18:51.646832 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dm5lq" Sep 29 10:18:51 crc kubenswrapper[4991]: I0929 10:18:51.681237 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dm5lq"] Sep 29 10:18:51 crc kubenswrapper[4991]: I0929 10:18:51.699088 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daaa755d-89e1-46ba-b568-e96621c791da-catalog-content\") pod \"redhat-marketplace-dm5lq\" (UID: \"daaa755d-89e1-46ba-b568-e96621c791da\") " pod="openshift-marketplace/redhat-marketplace-dm5lq" Sep 29 10:18:51 crc kubenswrapper[4991]: I0929 10:18:51.699361 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w282\" (UniqueName: \"kubernetes.io/projected/daaa755d-89e1-46ba-b568-e96621c791da-kube-api-access-8w282\") pod \"redhat-marketplace-dm5lq\" (UID: \"daaa755d-89e1-46ba-b568-e96621c791da\") " pod="openshift-marketplace/redhat-marketplace-dm5lq" Sep 29 10:18:51 crc kubenswrapper[4991]: I0929 10:18:51.699427 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daaa755d-89e1-46ba-b568-e96621c791da-utilities\") pod \"redhat-marketplace-dm5lq\" (UID: \"daaa755d-89e1-46ba-b568-e96621c791da\") " pod="openshift-marketplace/redhat-marketplace-dm5lq" Sep 29 10:18:51 crc kubenswrapper[4991]: I0929 10:18:51.801855 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w282\" (UniqueName: \"kubernetes.io/projected/daaa755d-89e1-46ba-b568-e96621c791da-kube-api-access-8w282\") pod \"redhat-marketplace-dm5lq\" (UID: \"daaa755d-89e1-46ba-b568-e96621c791da\") " pod="openshift-marketplace/redhat-marketplace-dm5lq" Sep 29 10:18:51 crc kubenswrapper[4991]: I0929 10:18:51.802305 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daaa755d-89e1-46ba-b568-e96621c791da-utilities\") pod \"redhat-marketplace-dm5lq\" (UID: \"daaa755d-89e1-46ba-b568-e96621c791da\") " pod="openshift-marketplace/redhat-marketplace-dm5lq" Sep 29 10:18:51 crc kubenswrapper[4991]: I0929 10:18:51.802387 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daaa755d-89e1-46ba-b568-e96621c791da-catalog-content\") pod \"redhat-marketplace-dm5lq\" (UID: \"daaa755d-89e1-46ba-b568-e96621c791da\") " pod="openshift-marketplace/redhat-marketplace-dm5lq" Sep 29 10:18:51 crc kubenswrapper[4991]: I0929 10:18:51.803257 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daaa755d-89e1-46ba-b568-e96621c791da-utilities\") pod \"redhat-marketplace-dm5lq\" (UID: \"daaa755d-89e1-46ba-b568-e96621c791da\") " pod="openshift-marketplace/redhat-marketplace-dm5lq" Sep 29 10:18:51 crc kubenswrapper[4991]: I0929 10:18:51.803418 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daaa755d-89e1-46ba-b568-e96621c791da-catalog-content\") pod \"redhat-marketplace-dm5lq\" (UID: \"daaa755d-89e1-46ba-b568-e96621c791da\") " pod="openshift-marketplace/redhat-marketplace-dm5lq" Sep 29 10:18:51 crc kubenswrapper[4991]: I0929 10:18:51.827376 4991 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8w282\" (UniqueName: \"kubernetes.io/projected/daaa755d-89e1-46ba-b568-e96621c791da-kube-api-access-8w282\") pod \"redhat-marketplace-dm5lq\" (UID: \"daaa755d-89e1-46ba-b568-e96621c791da\") " pod="openshift-marketplace/redhat-marketplace-dm5lq" Sep 29 10:18:51 crc kubenswrapper[4991]: I0929 10:18:51.982808 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dm5lq" Sep 29 10:18:52 crc kubenswrapper[4991]: I0929 10:18:52.470547 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dm5lq"] Sep 29 10:18:52 crc kubenswrapper[4991]: I0929 10:18:52.653274 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dm5lq" event={"ID":"daaa755d-89e1-46ba-b568-e96621c791da","Type":"ContainerStarted","Data":"c7585733de55d7ed80b36061534f1f7ec815c69f808269cbcc3d8a43cefdc58d"} Sep 29 10:18:53 crc kubenswrapper[4991]: I0929 10:18:53.665080 4991 generic.go:334] "Generic (PLEG): container finished" podID="daaa755d-89e1-46ba-b568-e96621c791da" containerID="fa7e199797d0cf03b281f9e40e6215500daed20d3457fe481d5f19678d9a105d" exitCode=0 Sep 29 10:18:53 crc kubenswrapper[4991]: I0929 10:18:53.665135 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dm5lq" event={"ID":"daaa755d-89e1-46ba-b568-e96621c791da","Type":"ContainerDied","Data":"fa7e199797d0cf03b281f9e40e6215500daed20d3457fe481d5f19678d9a105d"} Sep 29 10:18:53 crc kubenswrapper[4991]: I0929 10:18:53.667481 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 29 10:18:54 crc kubenswrapper[4991]: I0929 10:18:54.939436 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188" Sep 29 10:18:54 crc kubenswrapper[4991]: E0929 10:18:54.940464 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:18:55 crc kubenswrapper[4991]: I0929 10:18:55.686845 4991 generic.go:334] "Generic (PLEG): container finished" podID="daaa755d-89e1-46ba-b568-e96621c791da" containerID="b2ad1afd13325e8212d597c68a79355c168f81e3a3b51e383161e9590522ecc0" exitCode=0 Sep 29 10:18:55 crc kubenswrapper[4991]: I0929 10:18:55.686904 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dm5lq" event={"ID":"daaa755d-89e1-46ba-b568-e96621c791da","Type":"ContainerDied","Data":"b2ad1afd13325e8212d597c68a79355c168f81e3a3b51e383161e9590522ecc0"} Sep 29 10:18:56 crc kubenswrapper[4991]: I0929 10:18:56.699906 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dm5lq" event={"ID":"daaa755d-89e1-46ba-b568-e96621c791da","Type":"ContainerStarted","Data":"22b4151158343e3185eb4eb0fc0d096e3319ab876796ae80591b62dab7e8391d"} Sep 29 10:18:56 crc kubenswrapper[4991]: I0929 10:18:56.719621 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dm5lq" podStartSLOduration=3.268906065 podStartE2EDuration="5.719599123s" 
podCreationTimestamp="2025-09-29 10:18:51 +0000 UTC" firstStartedPulling="2025-09-29 10:18:53.667203017 +0000 UTC m=+2469.523131045" lastFinishedPulling="2025-09-29 10:18:56.117896075 +0000 UTC m=+2471.973824103" observedRunningTime="2025-09-29 10:18:56.715960838 +0000 UTC m=+2472.571888886" watchObservedRunningTime="2025-09-29 10:18:56.719599123 +0000 UTC m=+2472.575527151" Sep 29 10:19:01 crc kubenswrapper[4991]: I0929 10:19:01.983340 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dm5lq" Sep 29 10:19:01 crc kubenswrapper[4991]: I0929 10:19:01.983794 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dm5lq" Sep 29 10:19:02 crc kubenswrapper[4991]: I0929 10:19:02.055138 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dm5lq" Sep 29 10:19:02 crc kubenswrapper[4991]: I0929 10:19:02.814535 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dm5lq" Sep 29 10:19:02 crc kubenswrapper[4991]: I0929 10:19:02.862919 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dm5lq"] Sep 29 10:19:04 crc kubenswrapper[4991]: I0929 10:19:04.795657 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dm5lq" podUID="daaa755d-89e1-46ba-b568-e96621c791da" containerName="registry-server" containerID="cri-o://22b4151158343e3185eb4eb0fc0d096e3319ab876796ae80591b62dab7e8391d" gracePeriod=2 Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.331941 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dm5lq" Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.520812 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daaa755d-89e1-46ba-b568-e96621c791da-catalog-content\") pod \"daaa755d-89e1-46ba-b568-e96621c791da\" (UID: \"daaa755d-89e1-46ba-b568-e96621c791da\") " Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.520887 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daaa755d-89e1-46ba-b568-e96621c791da-utilities\") pod \"daaa755d-89e1-46ba-b568-e96621c791da\" (UID: \"daaa755d-89e1-46ba-b568-e96621c791da\") " Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.521049 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w282\" (UniqueName: \"kubernetes.io/projected/daaa755d-89e1-46ba-b568-e96621c791da-kube-api-access-8w282\") pod \"daaa755d-89e1-46ba-b568-e96621c791da\" (UID: \"daaa755d-89e1-46ba-b568-e96621c791da\") " Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.521720 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/daaa755d-89e1-46ba-b568-e96621c791da-utilities" (OuterVolumeSpecName: "utilities") pod "daaa755d-89e1-46ba-b568-e96621c791da" (UID: "daaa755d-89e1-46ba-b568-e96621c791da"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.531215 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/daaa755d-89e1-46ba-b568-e96621c791da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "daaa755d-89e1-46ba-b568-e96621c791da" (UID: "daaa755d-89e1-46ba-b568-e96621c791da"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.536344 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daaa755d-89e1-46ba-b568-e96621c791da-kube-api-access-8w282" (OuterVolumeSpecName: "kube-api-access-8w282") pod "daaa755d-89e1-46ba-b568-e96621c791da" (UID: "daaa755d-89e1-46ba-b568-e96621c791da"). InnerVolumeSpecName "kube-api-access-8w282". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.624696 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w282\" (UniqueName: \"kubernetes.io/projected/daaa755d-89e1-46ba-b568-e96621c791da-kube-api-access-8w282\") on node \"crc\" DevicePath \"\"" Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.624737 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daaa755d-89e1-46ba-b568-e96621c791da-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.624751 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daaa755d-89e1-46ba-b568-e96621c791da-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.812763 4991 generic.go:334] "Generic (PLEG): container finished" podID="daaa755d-89e1-46ba-b568-e96621c791da" containerID="22b4151158343e3185eb4eb0fc0d096e3319ab876796ae80591b62dab7e8391d" exitCode=0 Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.812818 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dm5lq" Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.812834 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dm5lq" event={"ID":"daaa755d-89e1-46ba-b568-e96621c791da","Type":"ContainerDied","Data":"22b4151158343e3185eb4eb0fc0d096e3319ab876796ae80591b62dab7e8391d"} Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.814746 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dm5lq" event={"ID":"daaa755d-89e1-46ba-b568-e96621c791da","Type":"ContainerDied","Data":"c7585733de55d7ed80b36061534f1f7ec815c69f808269cbcc3d8a43cefdc58d"} Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.814797 4991 scope.go:117] "RemoveContainer" containerID="22b4151158343e3185eb4eb0fc0d096e3319ab876796ae80591b62dab7e8391d" Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.856493 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dm5lq"] Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.858920 4991 scope.go:117] "RemoveContainer" containerID="b2ad1afd13325e8212d597c68a79355c168f81e3a3b51e383161e9590522ecc0" Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.867107 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dm5lq"] Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.893548 4991 scope.go:117] "RemoveContainer" containerID="fa7e199797d0cf03b281f9e40e6215500daed20d3457fe481d5f19678d9a105d" Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.951410 4991 scope.go:117] "RemoveContainer" containerID="22b4151158343e3185eb4eb0fc0d096e3319ab876796ae80591b62dab7e8391d" Sep 29 10:19:05 crc kubenswrapper[4991]: E0929 10:19:05.951842 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22b4151158343e3185eb4eb0fc0d096e3319ab876796ae80591b62dab7e8391d\": container with ID starting with 22b4151158343e3185eb4eb0fc0d096e3319ab876796ae80591b62dab7e8391d not found: ID does not exist" containerID="22b4151158343e3185eb4eb0fc0d096e3319ab876796ae80591b62dab7e8391d" Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.951919 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22b4151158343e3185eb4eb0fc0d096e3319ab876796ae80591b62dab7e8391d"} err="failed to get container status \"22b4151158343e3185eb4eb0fc0d096e3319ab876796ae80591b62dab7e8391d\": rpc error: code = NotFound desc = could not find container \"22b4151158343e3185eb4eb0fc0d096e3319ab876796ae80591b62dab7e8391d\": container with ID starting with 22b4151158343e3185eb4eb0fc0d096e3319ab876796ae80591b62dab7e8391d not found: ID does not exist" Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.951960 4991 scope.go:117] "RemoveContainer" containerID="b2ad1afd13325e8212d597c68a79355c168f81e3a3b51e383161e9590522ecc0" Sep 29 10:19:05 crc kubenswrapper[4991]: E0929 10:19:05.952312 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2ad1afd13325e8212d597c68a79355c168f81e3a3b51e383161e9590522ecc0\": container with ID starting with b2ad1afd13325e8212d597c68a79355c168f81e3a3b51e383161e9590522ecc0 not found: ID does not exist" containerID="b2ad1afd13325e8212d597c68a79355c168f81e3a3b51e383161e9590522ecc0" Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.952341 4991 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2ad1afd13325e8212d597c68a79355c168f81e3a3b51e383161e9590522ecc0"} err="failed to get container status \"b2ad1afd13325e8212d597c68a79355c168f81e3a3b51e383161e9590522ecc0\": rpc error: code = NotFound desc = could not find container \"b2ad1afd13325e8212d597c68a79355c168f81e3a3b51e383161e9590522ecc0\": container with ID starting with b2ad1afd13325e8212d597c68a79355c168f81e3a3b51e383161e9590522ecc0 not found: ID does not exist" Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.952356 4991 scope.go:117] "RemoveContainer" containerID="fa7e199797d0cf03b281f9e40e6215500daed20d3457fe481d5f19678d9a105d" Sep 29 10:19:05 crc kubenswrapper[4991]: E0929 10:19:05.952831 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa7e199797d0cf03b281f9e40e6215500daed20d3457fe481d5f19678d9a105d\": container with ID starting with fa7e199797d0cf03b281f9e40e6215500daed20d3457fe481d5f19678d9a105d not found: ID does not exist" containerID="fa7e199797d0cf03b281f9e40e6215500daed20d3457fe481d5f19678d9a105d" Sep 29 10:19:05 crc kubenswrapper[4991]: I0929 10:19:05.952855 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa7e199797d0cf03b281f9e40e6215500daed20d3457fe481d5f19678d9a105d"} err="failed to get container status \"fa7e199797d0cf03b281f9e40e6215500daed20d3457fe481d5f19678d9a105d\": rpc error: code = NotFound desc = could not find container \"fa7e199797d0cf03b281f9e40e6215500daed20d3457fe481d5f19678d9a105d\": container with ID starting with fa7e199797d0cf03b281f9e40e6215500daed20d3457fe481d5f19678d9a105d not found: ID does not exist" Sep 29 10:19:06 crc kubenswrapper[4991]: I0929 10:19:06.926389 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188" Sep 29 10:19:06 crc kubenswrapper[4991]: E0929 10:19:06.927154 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:19:06 crc kubenswrapper[4991]: I0929 10:19:06.940418 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daaa755d-89e1-46ba-b568-e96621c791da" path="/var/lib/kubelet/pods/daaa755d-89e1-46ba-b568-e96621c791da/volumes" Sep 29 10:19:19 crc kubenswrapper[4991]: I0929 10:19:19.926161 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188" Sep 29 10:19:19 crc kubenswrapper[4991]: E0929 10:19:19.926992 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:19:32 crc kubenswrapper[4991]: I0929 10:19:32.938664 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188" Sep 29 10:19:32 crc 
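Everything from the SyncLoop ADD at 10:18:51 to the orphaned-volume cleanup at 10:19:06 is one marketplace catalog pod's lifecycle. A minimal sketch for following a single pod's PLEG events through a journal like this one (pod name and regex are ours, matched to the line format above):

import re
import sys

# A minimal sketch: feed a kubelet journal on stdin (e.g. from
# `journalctl -u kubelet`) and print just the PLEG lifecycle events for one pod.
POD = "openshift-marketplace/redhat-marketplace-dm5lq"
EVENT = re.compile(r'"Type":"(Container\w+)","Data":"([0-9a-f]{12})')

for line in sys.stdin:
    if f'pod="{POD}"' in line and "SyncLoop (PLEG)" in line:
        m = EVENT.search(line)
        if m:
            # e.g. ContainerStarted c7585733de55
            print(m.group(1), m.group(2))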
Sep 29 10:19:19 crc kubenswrapper[4991]: I0929 10:19:19.926161 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188"
Sep 29 10:19:19 crc kubenswrapper[4991]: E0929 10:19:19.926992 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:19:32 crc kubenswrapper[4991]: I0929 10:19:32.938664 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188"
Sep 29 10:19:32 crc kubenswrapper[4991]: E0929 10:19:32.939480 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:19:44 crc kubenswrapper[4991]: I0929 10:19:44.946321 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188"
Sep 29 10:19:44 crc kubenswrapper[4991]: E0929 10:19:44.947406 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:19:59 crc kubenswrapper[4991]: I0929 10:19:59.927522 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188"
Sep 29 10:19:59 crc kubenswrapper[4991]: E0929 10:19:59.928360 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:20:13 crc kubenswrapper[4991]: I0929 10:20:13.926185 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188"
Sep 29 10:20:13 crc kubenswrapper[4991]: E0929 10:20:13.926985 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:20:25 crc kubenswrapper[4991]: I0929 10:20:25.926275 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188"
Sep 29 10:20:25 crc kubenswrapper[4991]: E0929 10:20:25.927210 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:20:40 crc kubenswrapper[4991]: I0929 10:20:40.927048 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188"
Sep 29 10:20:40 crc kubenswrapper[4991]: E0929 10:20:40.927916 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:20:51 crc kubenswrapper[4991]: I0929 10:20:51.927316 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188"
Sep 29 10:20:51 crc kubenswrapper[4991]: E0929 10:20:51.928192 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:21:06 crc kubenswrapper[4991]: I0929 10:21:06.927612 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188"
Sep 29 10:21:06 crc kubenswrapper[4991]: E0929 10:21:06.928533 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:21:17 crc kubenswrapper[4991]: I0929 10:21:17.058543 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dghf7"]
Sep 29 10:21:17 crc kubenswrapper[4991]: E0929 10:21:17.059800 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daaa755d-89e1-46ba-b568-e96621c791da" containerName="extract-utilities"
Sep 29 10:21:17 crc kubenswrapper[4991]: I0929 10:21:17.059826 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="daaa755d-89e1-46ba-b568-e96621c791da" containerName="extract-utilities"
Sep 29 10:21:17 crc kubenswrapper[4991]: E0929 10:21:17.059859 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daaa755d-89e1-46ba-b568-e96621c791da" containerName="registry-server"
Sep 29 10:21:17 crc kubenswrapper[4991]: I0929 10:21:17.059869 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="daaa755d-89e1-46ba-b568-e96621c791da" containerName="registry-server"
Sep 29 10:21:17 crc kubenswrapper[4991]: E0929 10:21:17.059923 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daaa755d-89e1-46ba-b568-e96621c791da" containerName="extract-content"
Sep 29 10:21:17 crc kubenswrapper[4991]: I0929 10:21:17.059938 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="daaa755d-89e1-46ba-b568-e96621c791da" containerName="extract-content"
Sep 29 10:21:17 crc kubenswrapper[4991]: I0929 10:21:17.060339 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="daaa755d-89e1-46ba-b568-e96621c791da" containerName="registry-server"
Sep 29 10:21:17 crc kubenswrapper[4991]: I0929 10:21:17.062707 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dghf7"
Sep 29 10:21:17 crc kubenswrapper[4991]: I0929 10:21:17.080110 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dghf7"]
Sep 29 10:21:17 crc kubenswrapper[4991]: I0929 10:21:17.166701 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb70b38-0870-44d5-97ff-67b3522e22bc-catalog-content\") pod \"redhat-operators-dghf7\" (UID: \"4bb70b38-0870-44d5-97ff-67b3522e22bc\") " pod="openshift-marketplace/redhat-operators-dghf7"
Sep 29 10:21:17 crc kubenswrapper[4991]: I0929 10:21:17.167186 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb70b38-0870-44d5-97ff-67b3522e22bc-utilities\") pod \"redhat-operators-dghf7\" (UID: \"4bb70b38-0870-44d5-97ff-67b3522e22bc\") " pod="openshift-marketplace/redhat-operators-dghf7"
Sep 29 10:21:17 crc kubenswrapper[4991]: I0929 10:21:17.167227 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9pck\" (UniqueName: \"kubernetes.io/projected/4bb70b38-0870-44d5-97ff-67b3522e22bc-kube-api-access-f9pck\") pod \"redhat-operators-dghf7\" (UID: \"4bb70b38-0870-44d5-97ff-67b3522e22bc\") " pod="openshift-marketplace/redhat-operators-dghf7"
Sep 29 10:21:17 crc kubenswrapper[4991]: I0929 10:21:17.269813 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9pck\" (UniqueName: \"kubernetes.io/projected/4bb70b38-0870-44d5-97ff-67b3522e22bc-kube-api-access-f9pck\") pod \"redhat-operators-dghf7\" (UID: \"4bb70b38-0870-44d5-97ff-67b3522e22bc\") " pod="openshift-marketplace/redhat-operators-dghf7"
Sep 29 10:21:17 crc kubenswrapper[4991]: I0929 10:21:17.270052 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb70b38-0870-44d5-97ff-67b3522e22bc-catalog-content\") pod \"redhat-operators-dghf7\" (UID: \"4bb70b38-0870-44d5-97ff-67b3522e22bc\") " pod="openshift-marketplace/redhat-operators-dghf7"
Sep 29 10:21:17 crc kubenswrapper[4991]: I0929 10:21:17.270222 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb70b38-0870-44d5-97ff-67b3522e22bc-utilities\") pod \"redhat-operators-dghf7\" (UID: \"4bb70b38-0870-44d5-97ff-67b3522e22bc\") " pod="openshift-marketplace/redhat-operators-dghf7"
Sep 29 10:21:17 crc kubenswrapper[4991]: I0929 10:21:17.270760 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb70b38-0870-44d5-97ff-67b3522e22bc-catalog-content\") pod \"redhat-operators-dghf7\" (UID: \"4bb70b38-0870-44d5-97ff-67b3522e22bc\") " pod="openshift-marketplace/redhat-operators-dghf7"
Sep 29 10:21:17 crc kubenswrapper[4991]: I0929 10:21:17.270833 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb70b38-0870-44d5-97ff-67b3522e22bc-utilities\") pod \"redhat-operators-dghf7\" (UID: \"4bb70b38-0870-44d5-97ff-67b3522e22bc\") " pod="openshift-marketplace/redhat-operators-dghf7"
Sep 29 10:21:17 crc kubenswrapper[4991]: I0929 10:21:17.291598 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9pck\" (UniqueName: \"kubernetes.io/projected/4bb70b38-0870-44d5-97ff-67b3522e22bc-kube-api-access-f9pck\") pod \"redhat-operators-dghf7\" (UID: \"4bb70b38-0870-44d5-97ff-67b3522e22bc\") " pod="openshift-marketplace/redhat-operators-dghf7"
Sep 29 10:21:17 crc kubenswrapper[4991]: I0929 10:21:17.393003 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dghf7"
Sep 29 10:21:17 crc kubenswrapper[4991]: I0929 10:21:17.875995 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dghf7"]
Sep 29 10:21:18 crc kubenswrapper[4991]: I0929 10:21:18.333144 4991 generic.go:334] "Generic (PLEG): container finished" podID="4bb70b38-0870-44d5-97ff-67b3522e22bc" containerID="7401f5915e5a350fde88f7fc410cb485a9c035bad825cb752452abf888e8e60d" exitCode=0
Sep 29 10:21:18 crc kubenswrapper[4991]: I0929 10:21:18.333195 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dghf7" event={"ID":"4bb70b38-0870-44d5-97ff-67b3522e22bc","Type":"ContainerDied","Data":"7401f5915e5a350fde88f7fc410cb485a9c035bad825cb752452abf888e8e60d"}
Sep 29 10:21:18 crc kubenswrapper[4991]: I0929 10:21:18.334044 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dghf7" event={"ID":"4bb70b38-0870-44d5-97ff-67b3522e22bc","Type":"ContainerStarted","Data":"feb6bd5bafa4db2c233c6a9128c45558430e4d2d953a7e24d0b2894232c39ba6"}
Sep 29 10:21:20 crc kubenswrapper[4991]: I0929 10:21:20.357855 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dghf7" event={"ID":"4bb70b38-0870-44d5-97ff-67b3522e22bc","Type":"ContainerStarted","Data":"6de2f76f942296a13489a1b21731b620ed2a108b6d9b1284b9720d98d4141f06"}
Sep 29 10:21:21 crc kubenswrapper[4991]: I0929 10:21:21.371622 4991 generic.go:334] "Generic (PLEG): container finished" podID="4bb70b38-0870-44d5-97ff-67b3522e22bc" containerID="6de2f76f942296a13489a1b21731b620ed2a108b6d9b1284b9720d98d4141f06" exitCode=0
Sep 29 10:21:21 crc kubenswrapper[4991]: I0929 10:21:21.371739 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dghf7" event={"ID":"4bb70b38-0870-44d5-97ff-67b3522e22bc","Type":"ContainerDied","Data":"6de2f76f942296a13489a1b21731b620ed2a108b6d9b1284b9720d98d4141f06"}
Sep 29 10:21:21 crc kubenswrapper[4991]: I0929 10:21:21.927806 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188"
Sep 29 10:21:21 crc kubenswrapper[4991]: E0929 10:21:21.928478 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:21:23 crc kubenswrapper[4991]: I0929 10:21:23.402275 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dghf7" event={"ID":"4bb70b38-0870-44d5-97ff-67b3522e22bc","Type":"ContainerStarted","Data":"e02721d2a90a04095afa76fb6c54abfcb0b48a83dfa32a3d99326ce46cd4560d"}
Sep 29 10:21:23 crc kubenswrapper[4991]: I0929 10:21:23.425112 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dghf7" podStartSLOduration=2.381435912 podStartE2EDuration="6.425089096s" podCreationTimestamp="2025-09-29 10:21:17 +0000 UTC" firstStartedPulling="2025-09-29 10:21:18.335666385 +0000 UTC m=+2614.191594423" lastFinishedPulling="2025-09-29 10:21:22.379319579 +0000 UTC m=+2618.235247607" observedRunningTime="2025-09-29 10:21:23.42218553 +0000 UTC m=+2619.278113568" watchObservedRunningTime="2025-09-29 10:21:23.425089096 +0000 UTC m=+2619.281017124"
Sep 29 10:21:27 crc kubenswrapper[4991]: I0929 10:21:27.394001 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dghf7"
Sep 29 10:21:27 crc kubenswrapper[4991]: I0929 10:21:27.394507 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dghf7"
Sep 29 10:21:27 crc kubenswrapper[4991]: I0929 10:21:27.450071 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dghf7"
Sep 29 10:21:27 crc kubenswrapper[4991]: I0929 10:21:27.503532 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dghf7"
Sep 29 10:21:27 crc kubenswrapper[4991]: I0929 10:21:27.717942 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dghf7"]
Sep 29 10:21:29 crc kubenswrapper[4991]: I0929 10:21:29.459754 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dghf7" podUID="4bb70b38-0870-44d5-97ff-67b3522e22bc" containerName="registry-server" containerID="cri-o://e02721d2a90a04095afa76fb6c54abfcb0b48a83dfa32a3d99326ce46cd4560d" gracePeriod=2
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.026072 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dghf7"
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.098516 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb70b38-0870-44d5-97ff-67b3522e22bc-catalog-content\") pod \"4bb70b38-0870-44d5-97ff-67b3522e22bc\" (UID: \"4bb70b38-0870-44d5-97ff-67b3522e22bc\") "
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.098638 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb70b38-0870-44d5-97ff-67b3522e22bc-utilities\") pod \"4bb70b38-0870-44d5-97ff-67b3522e22bc\" (UID: \"4bb70b38-0870-44d5-97ff-67b3522e22bc\") "
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.099987 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bb70b38-0870-44d5-97ff-67b3522e22bc-utilities" (OuterVolumeSpecName: "utilities") pod "4bb70b38-0870-44d5-97ff-67b3522e22bc" (UID: "4bb70b38-0870-44d5-97ff-67b3522e22bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.181039 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bb70b38-0870-44d5-97ff-67b3522e22bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bb70b38-0870-44d5-97ff-67b3522e22bc" (UID: "4bb70b38-0870-44d5-97ff-67b3522e22bc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.200394 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9pck\" (UniqueName: \"kubernetes.io/projected/4bb70b38-0870-44d5-97ff-67b3522e22bc-kube-api-access-f9pck\") pod \"4bb70b38-0870-44d5-97ff-67b3522e22bc\" (UID: \"4bb70b38-0870-44d5-97ff-67b3522e22bc\") "
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.200906 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb70b38-0870-44d5-97ff-67b3522e22bc-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.200938 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb70b38-0870-44d5-97ff-67b3522e22bc-utilities\") on node \"crc\" DevicePath \"\""
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.209195 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb70b38-0870-44d5-97ff-67b3522e22bc-kube-api-access-f9pck" (OuterVolumeSpecName: "kube-api-access-f9pck") pod "4bb70b38-0870-44d5-97ff-67b3522e22bc" (UID: "4bb70b38-0870-44d5-97ff-67b3522e22bc"). InnerVolumeSpecName "kube-api-access-f9pck". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.302191 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9pck\" (UniqueName: \"kubernetes.io/projected/4bb70b38-0870-44d5-97ff-67b3522e22bc-kube-api-access-f9pck\") on node \"crc\" DevicePath \"\""
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.473235 4991 generic.go:334] "Generic (PLEG): container finished" podID="4bb70b38-0870-44d5-97ff-67b3522e22bc" containerID="e02721d2a90a04095afa76fb6c54abfcb0b48a83dfa32a3d99326ce46cd4560d" exitCode=0
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.473287 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dghf7" event={"ID":"4bb70b38-0870-44d5-97ff-67b3522e22bc","Type":"ContainerDied","Data":"e02721d2a90a04095afa76fb6c54abfcb0b48a83dfa32a3d99326ce46cd4560d"}
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.473316 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dghf7" event={"ID":"4bb70b38-0870-44d5-97ff-67b3522e22bc","Type":"ContainerDied","Data":"feb6bd5bafa4db2c233c6a9128c45558430e4d2d953a7e24d0b2894232c39ba6"}
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.473331 4991 scope.go:117] "RemoveContainer" containerID="e02721d2a90a04095afa76fb6c54abfcb0b48a83dfa32a3d99326ce46cd4560d"
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.473369 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dghf7"
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.501180 4991 scope.go:117] "RemoveContainer" containerID="6de2f76f942296a13489a1b21731b620ed2a108b6d9b1284b9720d98d4141f06"
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.511845 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dghf7"]
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.527572 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dghf7"]
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.530246 4991 scope.go:117] "RemoveContainer" containerID="7401f5915e5a350fde88f7fc410cb485a9c035bad825cb752452abf888e8e60d"
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.610026 4991 scope.go:117] "RemoveContainer" containerID="e02721d2a90a04095afa76fb6c54abfcb0b48a83dfa32a3d99326ce46cd4560d"
Sep 29 10:21:30 crc kubenswrapper[4991]: E0929 10:21:30.610658 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e02721d2a90a04095afa76fb6c54abfcb0b48a83dfa32a3d99326ce46cd4560d\": container with ID starting with e02721d2a90a04095afa76fb6c54abfcb0b48a83dfa32a3d99326ce46cd4560d not found: ID does not exist" containerID="e02721d2a90a04095afa76fb6c54abfcb0b48a83dfa32a3d99326ce46cd4560d"
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.610731 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e02721d2a90a04095afa76fb6c54abfcb0b48a83dfa32a3d99326ce46cd4560d"} err="failed to get container status \"e02721d2a90a04095afa76fb6c54abfcb0b48a83dfa32a3d99326ce46cd4560d\": rpc error: code = NotFound desc = could not find container \"e02721d2a90a04095afa76fb6c54abfcb0b48a83dfa32a3d99326ce46cd4560d\": container with ID starting with e02721d2a90a04095afa76fb6c54abfcb0b48a83dfa32a3d99326ce46cd4560d not found: ID does not exist"
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.610779 4991 scope.go:117] "RemoveContainer" containerID="6de2f76f942296a13489a1b21731b620ed2a108b6d9b1284b9720d98d4141f06"
Sep 29 10:21:30 crc kubenswrapper[4991]: E0929 10:21:30.611191 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6de2f76f942296a13489a1b21731b620ed2a108b6d9b1284b9720d98d4141f06\": container with ID starting with 6de2f76f942296a13489a1b21731b620ed2a108b6d9b1284b9720d98d4141f06 not found: ID does not exist" containerID="6de2f76f942296a13489a1b21731b620ed2a108b6d9b1284b9720d98d4141f06"
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.611237 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6de2f76f942296a13489a1b21731b620ed2a108b6d9b1284b9720d98d4141f06"} err="failed to get container status \"6de2f76f942296a13489a1b21731b620ed2a108b6d9b1284b9720d98d4141f06\": rpc error: code = NotFound desc = could not find container \"6de2f76f942296a13489a1b21731b620ed2a108b6d9b1284b9720d98d4141f06\": container with ID starting with 6de2f76f942296a13489a1b21731b620ed2a108b6d9b1284b9720d98d4141f06 not found: ID does not exist"
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.611264 4991 scope.go:117] "RemoveContainer" containerID="7401f5915e5a350fde88f7fc410cb485a9c035bad825cb752452abf888e8e60d"
Sep 29 10:21:30 crc kubenswrapper[4991]: E0929 10:21:30.611545 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7401f5915e5a350fde88f7fc410cb485a9c035bad825cb752452abf888e8e60d\": container with ID starting with 7401f5915e5a350fde88f7fc410cb485a9c035bad825cb752452abf888e8e60d not found: ID does not exist" containerID="7401f5915e5a350fde88f7fc410cb485a9c035bad825cb752452abf888e8e60d"
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.611595 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7401f5915e5a350fde88f7fc410cb485a9c035bad825cb752452abf888e8e60d"} err="failed to get container status \"7401f5915e5a350fde88f7fc410cb485a9c035bad825cb752452abf888e8e60d\": rpc error: code = NotFound desc = could not find container \"7401f5915e5a350fde88f7fc410cb485a9c035bad825cb752452abf888e8e60d\": container with ID starting with 7401f5915e5a350fde88f7fc410cb485a9c035bad825cb752452abf888e8e60d not found: ID does not exist"
Sep 29 10:21:30 crc kubenswrapper[4991]: I0929 10:21:30.940173 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb70b38-0870-44d5-97ff-67b3522e22bc" path="/var/lib/kubelet/pods/4bb70b38-0870-44d5-97ff-67b3522e22bc/volumes"
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.280159 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-inventory\") pod \"e3d94097-7104-4678-87b7-28f90003a13f\" (UID: \"e3d94097-7104-4678-87b7-28f90003a13f\") " Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.280588 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-ssh-key\") pod \"e3d94097-7104-4678-87b7-28f90003a13f\" (UID: \"e3d94097-7104-4678-87b7-28f90003a13f\") " Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.280742 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-libvirt-secret-0\") pod \"e3d94097-7104-4678-87b7-28f90003a13f\" (UID: \"e3d94097-7104-4678-87b7-28f90003a13f\") " Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.280851 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-libvirt-combined-ca-bundle\") pod \"e3d94097-7104-4678-87b7-28f90003a13f\" (UID: \"e3d94097-7104-4678-87b7-28f90003a13f\") " Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.281023 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnb6l\" (UniqueName: \"kubernetes.io/projected/e3d94097-7104-4678-87b7-28f90003a13f-kube-api-access-dnb6l\") pod \"e3d94097-7104-4678-87b7-28f90003a13f\" (UID: \"e3d94097-7104-4678-87b7-28f90003a13f\") " Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.292129 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e3d94097-7104-4678-87b7-28f90003a13f" (UID: "e3d94097-7104-4678-87b7-28f90003a13f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.320326 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3d94097-7104-4678-87b7-28f90003a13f-kube-api-access-dnb6l" (OuterVolumeSpecName: "kube-api-access-dnb6l") pod "e3d94097-7104-4678-87b7-28f90003a13f" (UID: "e3d94097-7104-4678-87b7-28f90003a13f"). InnerVolumeSpecName "kube-api-access-dnb6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.383986 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnb6l\" (UniqueName: \"kubernetes.io/projected/e3d94097-7104-4678-87b7-28f90003a13f-kube-api-access-dnb6l\") on node \"crc\" DevicePath \"\"" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.384297 4991 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.425116 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e3d94097-7104-4678-87b7-28f90003a13f" (UID: "e3d94097-7104-4678-87b7-28f90003a13f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.440117 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "e3d94097-7104-4678-87b7-28f90003a13f" (UID: "e3d94097-7104-4678-87b7-28f90003a13f"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.469247 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-inventory" (OuterVolumeSpecName: "inventory") pod "e3d94097-7104-4678-87b7-28f90003a13f" (UID: "e3d94097-7104-4678-87b7-28f90003a13f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.486600 4991 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-inventory\") on node \"crc\" DevicePath \"\"" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.486634 4991 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.486643 4991 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e3d94097-7104-4678-87b7-28f90003a13f-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.745450 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" event={"ID":"e3d94097-7104-4678-87b7-28f90003a13f","Type":"ContainerDied","Data":"85efe0956da622cabf62e98a4758060b648515635b9b7eba7a9ac181183526fa"} Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.745496 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85efe0956da622cabf62e98a4758060b648515635b9b7eba7a9ac181183526fa" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.746010 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.855037 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv"] Sep 29 10:21:53 crc kubenswrapper[4991]: E0929 10:21:53.855720 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb70b38-0870-44d5-97ff-67b3522e22bc" containerName="extract-content" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.855743 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb70b38-0870-44d5-97ff-67b3522e22bc" containerName="extract-content" Sep 29 10:21:53 crc kubenswrapper[4991]: E0929 10:21:53.855789 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d94097-7104-4678-87b7-28f90003a13f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.855799 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d94097-7104-4678-87b7-28f90003a13f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Sep 29 10:21:53 crc kubenswrapper[4991]: E0929 10:21:53.855814 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb70b38-0870-44d5-97ff-67b3522e22bc" containerName="extract-utilities" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.855822 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb70b38-0870-44d5-97ff-67b3522e22bc" containerName="extract-utilities" Sep 29 10:21:53 crc kubenswrapper[4991]: E0929 10:21:53.855854 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb70b38-0870-44d5-97ff-67b3522e22bc" containerName="registry-server" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.855862 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb70b38-0870-44d5-97ff-67b3522e22bc" containerName="registry-server" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.856141 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3d94097-7104-4678-87b7-28f90003a13f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.856172 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bb70b38-0870-44d5-97ff-67b3522e22bc" containerName="registry-server" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.857519 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.860689 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.860825 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.861107 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vtkz9" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.861172 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.861224 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.861338 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.862052 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Sep 29 10:21:53 crc kubenswrapper[4991]: I0929 10:21:53.890028 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv"] Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.007663 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gflvt\" (UniqueName: \"kubernetes.io/projected/cc36ac87-f748-4f5e-8510-c91a26bafce9-kube-api-access-gflvt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.007828 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.007973 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.008039 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.008084 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.008184 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.008215 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.008245 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.008300 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.113659 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.113740 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.113789 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.113889 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.114113 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gflvt\" (UniqueName: \"kubernetes.io/projected/cc36ac87-f748-4f5e-8510-c91a26bafce9-kube-api-access-gflvt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.114462 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.114647 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.114755 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.114793 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.114881 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.117286 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.117718 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-ssh-key\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.118321 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.119712 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.120419 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.123115 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.123458 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.132762 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gflvt\" (UniqueName: \"kubernetes.io/projected/cc36ac87-f748-4f5e-8510-c91a26bafce9-kube-api-access-gflvt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bm9jv\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.180542 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.572539 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv"] Sep 29 10:21:54 crc kubenswrapper[4991]: I0929 10:21:54.756725 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" event={"ID":"cc36ac87-f748-4f5e-8510-c91a26bafce9","Type":"ContainerStarted","Data":"a3fbec004f6f9576d6ffc346370506172e24ffb944bc045013a739849827ec3a"} Sep 29 10:21:55 crc kubenswrapper[4991]: I0929 10:21:55.773679 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" event={"ID":"cc36ac87-f748-4f5e-8510-c91a26bafce9","Type":"ContainerStarted","Data":"64b266abaf99f084748ca827ec06defdd7e37a19618e1f095030873bff710126"} Sep 29 10:21:56 crc kubenswrapper[4991]: I0929 10:21:56.930560 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188" Sep 29 10:21:56 crc kubenswrapper[4991]: E0929 10:21:56.931939 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:22:09 crc kubenswrapper[4991]: I0929 10:22:09.926359 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188" Sep 29 10:22:09 crc kubenswrapper[4991]: E0929 10:22:09.927195 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:22:20 crc kubenswrapper[4991]: I0929 10:22:20.709025 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" podStartSLOduration=26.977104524 podStartE2EDuration="27.708981704s" podCreationTimestamp="2025-09-29 10:21:53 +0000 UTC" firstStartedPulling="2025-09-29 10:21:54.5834199 +0000 UTC m=+2650.439347928" lastFinishedPulling="2025-09-29 10:21:55.31529708 +0000 UTC m=+2651.171225108" observedRunningTime="2025-09-29 10:21:55.799675424 +0000 UTC m=+2651.655603472" watchObservedRunningTime="2025-09-29 10:22:20.708981704 +0000 UTC m=+2676.564909722" Sep 29 10:22:20 crc kubenswrapper[4991]: I0929 10:22:20.715765 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zfprg"] Sep 29 10:22:20 crc kubenswrapper[4991]: I0929 10:22:20.719685 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zfprg" Sep 29 10:22:20 crc kubenswrapper[4991]: I0929 10:22:20.735542 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zfprg"] Sep 29 10:22:20 crc kubenswrapper[4991]: I0929 10:22:20.864207 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99jfc\" (UniqueName: \"kubernetes.io/projected/8064e700-e901-4f81-8ad7-07406c9dc565-kube-api-access-99jfc\") pod \"community-operators-zfprg\" (UID: \"8064e700-e901-4f81-8ad7-07406c9dc565\") " pod="openshift-marketplace/community-operators-zfprg" Sep 29 10:22:20 crc kubenswrapper[4991]: I0929 10:22:20.864413 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8064e700-e901-4f81-8ad7-07406c9dc565-utilities\") pod \"community-operators-zfprg\" (UID: \"8064e700-e901-4f81-8ad7-07406c9dc565\") " pod="openshift-marketplace/community-operators-zfprg" Sep 29 10:22:20 crc kubenswrapper[4991]: I0929 10:22:20.864478 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8064e700-e901-4f81-8ad7-07406c9dc565-catalog-content\") pod \"community-operators-zfprg\" (UID: \"8064e700-e901-4f81-8ad7-07406c9dc565\") " pod="openshift-marketplace/community-operators-zfprg" Sep 29 10:22:20 crc kubenswrapper[4991]: I0929 10:22:20.967082 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99jfc\" (UniqueName: \"kubernetes.io/projected/8064e700-e901-4f81-8ad7-07406c9dc565-kube-api-access-99jfc\") pod \"community-operators-zfprg\" (UID: \"8064e700-e901-4f81-8ad7-07406c9dc565\") " pod="openshift-marketplace/community-operators-zfprg" Sep 29 10:22:20 crc kubenswrapper[4991]: I0929 10:22:20.967275 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8064e700-e901-4f81-8ad7-07406c9dc565-utilities\") pod \"community-operators-zfprg\" (UID: \"8064e700-e901-4f81-8ad7-07406c9dc565\") " pod="openshift-marketplace/community-operators-zfprg" Sep 29 10:22:20 crc kubenswrapper[4991]: I0929 10:22:20.967827 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8064e700-e901-4f81-8ad7-07406c9dc565-utilities\") pod \"community-operators-zfprg\" (UID: \"8064e700-e901-4f81-8ad7-07406c9dc565\") " pod="openshift-marketplace/community-operators-zfprg" Sep 29 10:22:20 crc kubenswrapper[4991]: I0929 10:22:20.967888 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8064e700-e901-4f81-8ad7-07406c9dc565-catalog-content\") pod \"community-operators-zfprg\" (UID: \"8064e700-e901-4f81-8ad7-07406c9dc565\") " pod="openshift-marketplace/community-operators-zfprg" Sep 29 10:22:20 crc kubenswrapper[4991]: I0929 10:22:20.968251 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8064e700-e901-4f81-8ad7-07406c9dc565-catalog-content\") pod \"community-operators-zfprg\" (UID: \"8064e700-e901-4f81-8ad7-07406c9dc565\") " pod="openshift-marketplace/community-operators-zfprg" Sep 29 10:22:20 crc kubenswrapper[4991]: I0929 10:22:20.988430 4991 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-99jfc\" (UniqueName: \"kubernetes.io/projected/8064e700-e901-4f81-8ad7-07406c9dc565-kube-api-access-99jfc\") pod \"community-operators-zfprg\" (UID: \"8064e700-e901-4f81-8ad7-07406c9dc565\") " pod="openshift-marketplace/community-operators-zfprg" Sep 29 10:22:21 crc kubenswrapper[4991]: I0929 10:22:21.050622 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zfprg" Sep 29 10:22:21 crc kubenswrapper[4991]: W0929 10:22:21.588395 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8064e700_e901_4f81_8ad7_07406c9dc565.slice/crio-578013b6bd88e37b950d1c6234e6c9d6f90da9521409a0c104725168cacdc731 WatchSource:0}: Error finding container 578013b6bd88e37b950d1c6234e6c9d6f90da9521409a0c104725168cacdc731: Status 404 returned error can't find the container with id 578013b6bd88e37b950d1c6234e6c9d6f90da9521409a0c104725168cacdc731 Sep 29 10:22:21 crc kubenswrapper[4991]: I0929 10:22:21.589962 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zfprg"] Sep 29 10:22:22 crc kubenswrapper[4991]: I0929 10:22:22.035530 4991 generic.go:334] "Generic (PLEG): container finished" podID="8064e700-e901-4f81-8ad7-07406c9dc565" containerID="77423092d76cb20316c6f3910c1bd81944ce8bf08a48b037c736468ea89276a2" exitCode=0 Sep 29 10:22:22 crc kubenswrapper[4991]: I0929 10:22:22.035631 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfprg" event={"ID":"8064e700-e901-4f81-8ad7-07406c9dc565","Type":"ContainerDied","Data":"77423092d76cb20316c6f3910c1bd81944ce8bf08a48b037c736468ea89276a2"} Sep 29 10:22:22 crc kubenswrapper[4991]: I0929 10:22:22.035930 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfprg" event={"ID":"8064e700-e901-4f81-8ad7-07406c9dc565","Type":"ContainerStarted","Data":"578013b6bd88e37b950d1c6234e6c9d6f90da9521409a0c104725168cacdc731"} Sep 29 10:22:22 crc kubenswrapper[4991]: I0929 10:22:22.926988 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188" Sep 29 10:22:22 crc kubenswrapper[4991]: E0929 10:22:22.927813 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:22:24 crc kubenswrapper[4991]: I0929 10:22:24.086528 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfprg" event={"ID":"8064e700-e901-4f81-8ad7-07406c9dc565","Type":"ContainerStarted","Data":"7c449858277d502896f605d4b7a331cdde4b7837fdbadf426dcf8332091a78b9"} Sep 29 10:22:25 crc kubenswrapper[4991]: I0929 10:22:25.100649 4991 generic.go:334] "Generic (PLEG): container finished" podID="8064e700-e901-4f81-8ad7-07406c9dc565" containerID="7c449858277d502896f605d4b7a331cdde4b7837fdbadf426dcf8332091a78b9" exitCode=0 Sep 29 10:22:25 crc kubenswrapper[4991]: I0929 10:22:25.100720 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfprg" 
event={"ID":"8064e700-e901-4f81-8ad7-07406c9dc565","Type":"ContainerDied","Data":"7c449858277d502896f605d4b7a331cdde4b7837fdbadf426dcf8332091a78b9"} Sep 29 10:22:26 crc kubenswrapper[4991]: I0929 10:22:26.111762 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfprg" event={"ID":"8064e700-e901-4f81-8ad7-07406c9dc565","Type":"ContainerStarted","Data":"56cbec697b34683f5652160ad59dce05b4b4c763fc77765bd5960d755a2d565d"} Sep 29 10:22:26 crc kubenswrapper[4991]: I0929 10:22:26.134439 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zfprg" podStartSLOduration=2.461411411 podStartE2EDuration="6.134421182s" podCreationTimestamp="2025-09-29 10:22:20 +0000 UTC" firstStartedPulling="2025-09-29 10:22:22.040002677 +0000 UTC m=+2677.895930695" lastFinishedPulling="2025-09-29 10:22:25.713012438 +0000 UTC m=+2681.568940466" observedRunningTime="2025-09-29 10:22:26.132246365 +0000 UTC m=+2681.988174413" watchObservedRunningTime="2025-09-29 10:22:26.134421182 +0000 UTC m=+2681.990349220" Sep 29 10:22:31 crc kubenswrapper[4991]: I0929 10:22:31.050821 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zfprg" Sep 29 10:22:31 crc kubenswrapper[4991]: I0929 10:22:31.051339 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zfprg" Sep 29 10:22:31 crc kubenswrapper[4991]: I0929 10:22:31.103401 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zfprg" Sep 29 10:22:31 crc kubenswrapper[4991]: I0929 10:22:31.281311 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zfprg" Sep 29 10:22:34 crc kubenswrapper[4991]: I0929 10:22:34.901882 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zfprg"] Sep 29 10:22:34 crc kubenswrapper[4991]: I0929 10:22:34.902608 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zfprg" podUID="8064e700-e901-4f81-8ad7-07406c9dc565" containerName="registry-server" containerID="cri-o://56cbec697b34683f5652160ad59dce05b4b4c763fc77765bd5960d755a2d565d" gracePeriod=2 Sep 29 10:22:35 crc kubenswrapper[4991]: I0929 10:22:35.298534 4991 generic.go:334] "Generic (PLEG): container finished" podID="8064e700-e901-4f81-8ad7-07406c9dc565" containerID="56cbec697b34683f5652160ad59dce05b4b4c763fc77765bd5960d755a2d565d" exitCode=0 Sep 29 10:22:35 crc kubenswrapper[4991]: I0929 10:22:35.298807 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfprg" event={"ID":"8064e700-e901-4f81-8ad7-07406c9dc565","Type":"ContainerDied","Data":"56cbec697b34683f5652160ad59dce05b4b4c763fc77765bd5960d755a2d565d"} Sep 29 10:22:35 crc kubenswrapper[4991]: I0929 10:22:35.520170 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zfprg" Sep 29 10:22:35 crc kubenswrapper[4991]: I0929 10:22:35.578643 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8064e700-e901-4f81-8ad7-07406c9dc565-catalog-content\") pod \"8064e700-e901-4f81-8ad7-07406c9dc565\" (UID: \"8064e700-e901-4f81-8ad7-07406c9dc565\") " Sep 29 10:22:35 crc kubenswrapper[4991]: I0929 10:22:35.578738 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8064e700-e901-4f81-8ad7-07406c9dc565-utilities\") pod \"8064e700-e901-4f81-8ad7-07406c9dc565\" (UID: \"8064e700-e901-4f81-8ad7-07406c9dc565\") " Sep 29 10:22:35 crc kubenswrapper[4991]: I0929 10:22:35.578991 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99jfc\" (UniqueName: \"kubernetes.io/projected/8064e700-e901-4f81-8ad7-07406c9dc565-kube-api-access-99jfc\") pod \"8064e700-e901-4f81-8ad7-07406c9dc565\" (UID: \"8064e700-e901-4f81-8ad7-07406c9dc565\") " Sep 29 10:22:35 crc kubenswrapper[4991]: I0929 10:22:35.579821 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8064e700-e901-4f81-8ad7-07406c9dc565-utilities" (OuterVolumeSpecName: "utilities") pod "8064e700-e901-4f81-8ad7-07406c9dc565" (UID: "8064e700-e901-4f81-8ad7-07406c9dc565"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:22:35 crc kubenswrapper[4991]: I0929 10:22:35.592421 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8064e700-e901-4f81-8ad7-07406c9dc565-kube-api-access-99jfc" (OuterVolumeSpecName: "kube-api-access-99jfc") pod "8064e700-e901-4f81-8ad7-07406c9dc565" (UID: "8064e700-e901-4f81-8ad7-07406c9dc565"). InnerVolumeSpecName "kube-api-access-99jfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:22:35 crc kubenswrapper[4991]: I0929 10:22:35.637639 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8064e700-e901-4f81-8ad7-07406c9dc565-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8064e700-e901-4f81-8ad7-07406c9dc565" (UID: "8064e700-e901-4f81-8ad7-07406c9dc565"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:22:35 crc kubenswrapper[4991]: I0929 10:22:35.682129 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99jfc\" (UniqueName: \"kubernetes.io/projected/8064e700-e901-4f81-8ad7-07406c9dc565-kube-api-access-99jfc\") on node \"crc\" DevicePath \"\"" Sep 29 10:22:35 crc kubenswrapper[4991]: I0929 10:22:35.682169 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8064e700-e901-4f81-8ad7-07406c9dc565-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 10:22:35 crc kubenswrapper[4991]: I0929 10:22:35.682182 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8064e700-e901-4f81-8ad7-07406c9dc565-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 10:22:36 crc kubenswrapper[4991]: I0929 10:22:36.312315 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfprg" event={"ID":"8064e700-e901-4f81-8ad7-07406c9dc565","Type":"ContainerDied","Data":"578013b6bd88e37b950d1c6234e6c9d6f90da9521409a0c104725168cacdc731"} Sep 29 10:22:36 crc kubenswrapper[4991]: I0929 10:22:36.312448 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zfprg" Sep 29 10:22:36 crc kubenswrapper[4991]: I0929 10:22:36.312610 4991 scope.go:117] "RemoveContainer" containerID="56cbec697b34683f5652160ad59dce05b4b4c763fc77765bd5960d755a2d565d" Sep 29 10:22:36 crc kubenswrapper[4991]: I0929 10:22:36.358833 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zfprg"] Sep 29 10:22:36 crc kubenswrapper[4991]: I0929 10:22:36.360526 4991 scope.go:117] "RemoveContainer" containerID="7c449858277d502896f605d4b7a331cdde4b7837fdbadf426dcf8332091a78b9" Sep 29 10:22:36 crc kubenswrapper[4991]: I0929 10:22:36.371929 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zfprg"] Sep 29 10:22:36 crc kubenswrapper[4991]: I0929 10:22:36.390271 4991 scope.go:117] "RemoveContainer" containerID="77423092d76cb20316c6f3910c1bd81944ce8bf08a48b037c736468ea89276a2" Sep 29 10:22:36 crc kubenswrapper[4991]: I0929 10:22:36.940784 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8064e700-e901-4f81-8ad7-07406c9dc565" path="/var/lib/kubelet/pods/8064e700-e901-4f81-8ad7-07406c9dc565/volumes" Sep 29 10:22:37 crc kubenswrapper[4991]: I0929 10:22:37.925936 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188" Sep 29 10:22:37 crc kubenswrapper[4991]: E0929 10:22:37.926797 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:22:51 crc kubenswrapper[4991]: I0929 10:22:51.926341 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188" Sep 29 10:22:52 crc kubenswrapper[4991]: I0929 10:22:52.500005 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"46878dc34c33633b7422d680f8ddb04d99d92908cecec7db72fe7b48739d749c"} Sep 29 10:25:07 crc kubenswrapper[4991]: I0929 10:25:07.947411 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:25:07 crc kubenswrapper[4991]: I0929 10:25:07.947935 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:25:08 crc kubenswrapper[4991]: I0929 10:25:08.000006 4991 generic.go:334] "Generic (PLEG): container finished" podID="cc36ac87-f748-4f5e-8510-c91a26bafce9" containerID="64b266abaf99f084748ca827ec06defdd7e37a19618e1f095030873bff710126" exitCode=0 Sep 29 10:25:08 crc kubenswrapper[4991]: I0929 10:25:08.000056 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" event={"ID":"cc36ac87-f748-4f5e-8510-c91a26bafce9","Type":"ContainerDied","Data":"64b266abaf99f084748ca827ec06defdd7e37a19618e1f095030873bff710126"} Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.529696 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.658602 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-migration-ssh-key-1\") pod \"cc36ac87-f748-4f5e-8510-c91a26bafce9\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.658670 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-ssh-key\") pod \"cc36ac87-f748-4f5e-8510-c91a26bafce9\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.658727 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-inventory\") pod \"cc36ac87-f748-4f5e-8510-c91a26bafce9\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.658767 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gflvt\" (UniqueName: \"kubernetes.io/projected/cc36ac87-f748-4f5e-8510-c91a26bafce9-kube-api-access-gflvt\") pod \"cc36ac87-f748-4f5e-8510-c91a26bafce9\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.658823 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-extra-config-0\") pod \"cc36ac87-f748-4f5e-8510-c91a26bafce9\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " Sep 29 
10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.658872 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-combined-ca-bundle\") pod \"cc36ac87-f748-4f5e-8510-c91a26bafce9\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.658937 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-cell1-compute-config-1\") pod \"cc36ac87-f748-4f5e-8510-c91a26bafce9\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.659033 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-migration-ssh-key-0\") pod \"cc36ac87-f748-4f5e-8510-c91a26bafce9\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.659107 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-cell1-compute-config-0\") pod \"cc36ac87-f748-4f5e-8510-c91a26bafce9\" (UID: \"cc36ac87-f748-4f5e-8510-c91a26bafce9\") " Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.667582 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc36ac87-f748-4f5e-8510-c91a26bafce9-kube-api-access-gflvt" (OuterVolumeSpecName: "kube-api-access-gflvt") pod "cc36ac87-f748-4f5e-8510-c91a26bafce9" (UID: "cc36ac87-f748-4f5e-8510-c91a26bafce9"). InnerVolumeSpecName "kube-api-access-gflvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.669749 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "cc36ac87-f748-4f5e-8510-c91a26bafce9" (UID: "cc36ac87-f748-4f5e-8510-c91a26bafce9"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.697215 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "cc36ac87-f748-4f5e-8510-c91a26bafce9" (UID: "cc36ac87-f748-4f5e-8510-c91a26bafce9"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.707812 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "cc36ac87-f748-4f5e-8510-c91a26bafce9" (UID: "cc36ac87-f748-4f5e-8510-c91a26bafce9"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.707850 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cc36ac87-f748-4f5e-8510-c91a26bafce9" (UID: "cc36ac87-f748-4f5e-8510-c91a26bafce9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.709597 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "cc36ac87-f748-4f5e-8510-c91a26bafce9" (UID: "cc36ac87-f748-4f5e-8510-c91a26bafce9"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.712063 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "cc36ac87-f748-4f5e-8510-c91a26bafce9" (UID: "cc36ac87-f748-4f5e-8510-c91a26bafce9"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.712967 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "cc36ac87-f748-4f5e-8510-c91a26bafce9" (UID: "cc36ac87-f748-4f5e-8510-c91a26bafce9"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.713437 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-inventory" (OuterVolumeSpecName: "inventory") pod "cc36ac87-f748-4f5e-8510-c91a26bafce9" (UID: "cc36ac87-f748-4f5e-8510-c91a26bafce9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.762095 4991 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.762557 4991 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.763791 4991 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-inventory\") on node \"crc\" DevicePath \"\"" Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.763927 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gflvt\" (UniqueName: \"kubernetes.io/projected/cc36ac87-f748-4f5e-8510-c91a26bafce9-kube-api-access-gflvt\") on node \"crc\" DevicePath \"\"" Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.764072 4991 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.764188 4991 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.764323 4991 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.764422 4991 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Sep 29 10:25:09 crc kubenswrapper[4991]: I0929 10:25:09.764552 4991 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cc36ac87-f748-4f5e-8510-c91a26bafce9-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.035931 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv" event={"ID":"cc36ac87-f748-4f5e-8510-c91a26bafce9","Type":"ContainerDied","Data":"a3fbec004f6f9576d6ffc346370506172e24ffb944bc045013a739849827ec3a"} Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.036009 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3fbec004f6f9576d6ffc346370506172e24ffb944bc045013a739849827ec3a" Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.036104 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bm9jv"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.131971 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"]
Sep 29 10:25:10 crc kubenswrapper[4991]: E0929 10:25:10.132684 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc36ac87-f748-4f5e-8510-c91a26bafce9" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.132698 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc36ac87-f748-4f5e-8510-c91a26bafce9" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Sep 29 10:25:10 crc kubenswrapper[4991]: E0929 10:25:10.132714 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8064e700-e901-4f81-8ad7-07406c9dc565" containerName="extract-content"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.132721 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="8064e700-e901-4f81-8ad7-07406c9dc565" containerName="extract-content"
Sep 29 10:25:10 crc kubenswrapper[4991]: E0929 10:25:10.132776 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8064e700-e901-4f81-8ad7-07406c9dc565" containerName="extract-utilities"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.132783 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="8064e700-e901-4f81-8ad7-07406c9dc565" containerName="extract-utilities"
Sep 29 10:25:10 crc kubenswrapper[4991]: E0929 10:25:10.132799 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8064e700-e901-4f81-8ad7-07406c9dc565" containerName="registry-server"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.132805 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="8064e700-e901-4f81-8ad7-07406c9dc565" containerName="registry-server"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.133148 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc36ac87-f748-4f5e-8510-c91a26bafce9" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.133168 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="8064e700-e901-4f81-8ad7-07406c9dc565" containerName="registry-server"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.134089 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.140789 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.140913 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.141069 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.141317 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vtkz9"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.141478 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.143937 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"]
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.176328 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.176393 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.176422 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27pxh\" (UniqueName: \"kubernetes.io/projected/ab589228-3979-4001-886d-8c94abef0c13-kube-api-access-27pxh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.176502 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.176602 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.176743 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.176830 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.279028 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.279123 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.279223 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.279290 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.279317 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.279337 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27pxh\" (UniqueName: \"kubernetes.io/projected/ab589228-3979-4001-886d-8c94abef0c13-kube-api-access-27pxh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.279355 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.283814 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.283890 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.284334 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.285049 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.285756 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.295867 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.300816 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27pxh\" (UniqueName: \"kubernetes.io/projected/ab589228-3979-4001-886d-8c94abef0c13-kube-api-access-27pxh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:10 crc kubenswrapper[4991]: I0929 10:25:10.471795 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:25:11 crc kubenswrapper[4991]: I0929 10:25:11.027186 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"]
Sep 29 10:25:11 crc kubenswrapper[4991]: I0929 10:25:11.038561 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Sep 29 10:25:11 crc kubenswrapper[4991]: I0929 10:25:11.054820 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c" event={"ID":"ab589228-3979-4001-886d-8c94abef0c13","Type":"ContainerStarted","Data":"d2ffd99eaae97fb93d6aad42faecfd6eeb3ae93aa6306337052a1c8aa9243d98"}
Sep 29 10:25:13 crc kubenswrapper[4991]: I0929 10:25:13.078709 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c" event={"ID":"ab589228-3979-4001-886d-8c94abef0c13","Type":"ContainerStarted","Data":"85b9c22796ff9e3f713dc74dba1a92c062befb8e0ea0593c321cf5eb1a859bff"}
Sep 29 10:25:13 crc kubenswrapper[4991]: I0929 10:25:13.112363 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c" podStartSLOduration=2.176507605 podStartE2EDuration="3.112340091s" podCreationTimestamp="2025-09-29 10:25:10 +0000 UTC" firstStartedPulling="2025-09-29 10:25:11.038314666 +0000 UTC m=+2846.894242694" lastFinishedPulling="2025-09-29 10:25:11.974147152 +0000 UTC m=+2847.830075180" observedRunningTime="2025-09-29 10:25:13.096196018 +0000 UTC m=+2848.952124076" watchObservedRunningTime="2025-09-29 10:25:13.112340091 +0000 UTC m=+2848.968268119"
Sep 29 10:25:37 crc kubenswrapper[4991]: I0929 10:25:37.947136 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 29 10:25:37 crc kubenswrapper[4991]: I0929 10:25:37.947675 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 29 10:26:07 crc kubenswrapper[4991]: I0929 10:26:07.947572 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 29 10:26:07 crc kubenswrapper[4991]: I0929 10:26:07.948244 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 29 10:26:07 crc kubenswrapper[4991]: I0929 10:26:07.948306 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk"
Sep 29 10:26:07 crc kubenswrapper[4991]: I0929 10:26:07.949570 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"46878dc34c33633b7422d680f8ddb04d99d92908cecec7db72fe7b48739d749c"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Sep 29 10:26:07 crc kubenswrapper[4991]: I0929 10:26:07.949659 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://46878dc34c33633b7422d680f8ddb04d99d92908cecec7db72fe7b48739d749c" gracePeriod=600
Sep 29 10:26:08 crc kubenswrapper[4991]: I0929 10:26:08.725456 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="46878dc34c33633b7422d680f8ddb04d99d92908cecec7db72fe7b48739d749c" exitCode=0
Sep 29 10:26:08 crc kubenswrapper[4991]: I0929 10:26:08.725551 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"46878dc34c33633b7422d680f8ddb04d99d92908cecec7db72fe7b48739d749c"}
Sep 29 10:26:08 crc kubenswrapper[4991]: I0929 10:26:08.726050 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda"}
Sep 29 10:26:08 crc kubenswrapper[4991]: I0929 10:26:08.726086 4991 scope.go:117] "RemoveContainer" containerID="c1d35ce2709e88c7efb0d0471a5178078d9a6baf9dc2c56558074ef597cb5188"
Sep 29 10:26:34 crc kubenswrapper[4991]: I0929 10:26:34.447524 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jpxvb"]
Sep 29 10:26:34 crc kubenswrapper[4991]: I0929 10:26:34.452341 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jpxvb"
Sep 29 10:26:34 crc kubenswrapper[4991]: I0929 10:26:34.466134 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jpxvb"]
Sep 29 10:26:34 crc kubenswrapper[4991]: I0929 10:26:34.617924 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqzz6\" (UniqueName: \"kubernetes.io/projected/649045ee-20af-4044-a1d4-80e8c7641121-kube-api-access-nqzz6\") pod \"certified-operators-jpxvb\" (UID: \"649045ee-20af-4044-a1d4-80e8c7641121\") " pod="openshift-marketplace/certified-operators-jpxvb"
Sep 29 10:26:34 crc kubenswrapper[4991]: I0929 10:26:34.618894 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/649045ee-20af-4044-a1d4-80e8c7641121-catalog-content\") pod \"certified-operators-jpxvb\" (UID: \"649045ee-20af-4044-a1d4-80e8c7641121\") " pod="openshift-marketplace/certified-operators-jpxvb"
Sep 29 10:26:34 crc kubenswrapper[4991]: I0929 10:26:34.619283 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/649045ee-20af-4044-a1d4-80e8c7641121-utilities\") pod \"certified-operators-jpxvb\" (UID: \"649045ee-20af-4044-a1d4-80e8c7641121\") " pod="openshift-marketplace/certified-operators-jpxvb"
Sep 29 10:26:34 crc kubenswrapper[4991]: I0929 10:26:34.720932 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqzz6\" (UniqueName: \"kubernetes.io/projected/649045ee-20af-4044-a1d4-80e8c7641121-kube-api-access-nqzz6\") pod \"certified-operators-jpxvb\" (UID: \"649045ee-20af-4044-a1d4-80e8c7641121\") " pod="openshift-marketplace/certified-operators-jpxvb"
Sep 29 10:26:34 crc kubenswrapper[4991]: I0929 10:26:34.721063 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/649045ee-20af-4044-a1d4-80e8c7641121-catalog-content\") pod \"certified-operators-jpxvb\" (UID: \"649045ee-20af-4044-a1d4-80e8c7641121\") " pod="openshift-marketplace/certified-operators-jpxvb"
Sep 29 10:26:34 crc kubenswrapper[4991]: I0929 10:26:34.721192 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/649045ee-20af-4044-a1d4-80e8c7641121-utilities\") pod \"certified-operators-jpxvb\" (UID: \"649045ee-20af-4044-a1d4-80e8c7641121\") " pod="openshift-marketplace/certified-operators-jpxvb"
Sep 29 10:26:34 crc kubenswrapper[4991]: I0929 10:26:34.721760 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/649045ee-20af-4044-a1d4-80e8c7641121-utilities\") pod \"certified-operators-jpxvb\" (UID: \"649045ee-20af-4044-a1d4-80e8c7641121\") " pod="openshift-marketplace/certified-operators-jpxvb"
Sep 29 10:26:34 crc kubenswrapper[4991]: I0929 10:26:34.722398 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/649045ee-20af-4044-a1d4-80e8c7641121-catalog-content\") pod \"certified-operators-jpxvb\" (UID: \"649045ee-20af-4044-a1d4-80e8c7641121\") " pod="openshift-marketplace/certified-operators-jpxvb"
Sep 29 10:26:34 crc kubenswrapper[4991]: I0929 10:26:34.746779 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqzz6\" (UniqueName: \"kubernetes.io/projected/649045ee-20af-4044-a1d4-80e8c7641121-kube-api-access-nqzz6\") pod \"certified-operators-jpxvb\" (UID: \"649045ee-20af-4044-a1d4-80e8c7641121\") " pod="openshift-marketplace/certified-operators-jpxvb"
Sep 29 10:26:34 crc kubenswrapper[4991]: I0929 10:26:34.778678 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jpxvb"
Sep 29 10:26:35 crc kubenswrapper[4991]: I0929 10:26:35.457777 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jpxvb"]
Sep 29 10:26:36 crc kubenswrapper[4991]: I0929 10:26:36.018896 4991 generic.go:334] "Generic (PLEG): container finished" podID="649045ee-20af-4044-a1d4-80e8c7641121" containerID="4df2ac4f40cee39b92d393cefe0d0ee5eb946ed4528982ea28d0074e4f2b69e2" exitCode=0
Sep 29 10:26:36 crc kubenswrapper[4991]: I0929 10:26:36.019101 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jpxvb" event={"ID":"649045ee-20af-4044-a1d4-80e8c7641121","Type":"ContainerDied","Data":"4df2ac4f40cee39b92d393cefe0d0ee5eb946ed4528982ea28d0074e4f2b69e2"}
Sep 29 10:26:36 crc kubenswrapper[4991]: I0929 10:26:36.019450 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jpxvb" event={"ID":"649045ee-20af-4044-a1d4-80e8c7641121","Type":"ContainerStarted","Data":"e7ba3b63e9b09b91623913def2f6da165f68cd9108c047e2f8fce5592e629c47"}
Sep 29 10:26:38 crc kubenswrapper[4991]: I0929 10:26:38.042318 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jpxvb" event={"ID":"649045ee-20af-4044-a1d4-80e8c7641121","Type":"ContainerStarted","Data":"bd495cf04871c1ae59cf7a0f4014b1e15262d2b313486ec69a8441144ec486d2"}
Sep 29 10:26:41 crc kubenswrapper[4991]: I0929 10:26:41.075222 4991 generic.go:334] "Generic (PLEG): container finished" podID="649045ee-20af-4044-a1d4-80e8c7641121" containerID="bd495cf04871c1ae59cf7a0f4014b1e15262d2b313486ec69a8441144ec486d2" exitCode=0
Sep 29 10:26:41 crc kubenswrapper[4991]: I0929 10:26:41.075395 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jpxvb" event={"ID":"649045ee-20af-4044-a1d4-80e8c7641121","Type":"ContainerDied","Data":"bd495cf04871c1ae59cf7a0f4014b1e15262d2b313486ec69a8441144ec486d2"}
Sep 29 10:26:42 crc kubenswrapper[4991]: I0929 10:26:42.088673 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jpxvb" event={"ID":"649045ee-20af-4044-a1d4-80e8c7641121","Type":"ContainerStarted","Data":"f0fae135efb105117ef25f6574f165aafb0f20fb03f0d6e2fac6e90e2fdc7a80"}
Sep 29 10:26:42 crc kubenswrapper[4991]: I0929 10:26:42.108820 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jpxvb" podStartSLOduration=2.377613614 podStartE2EDuration="8.1088052s" podCreationTimestamp="2025-09-29 10:26:34 +0000 UTC" firstStartedPulling="2025-09-29 10:26:36.021686387 +0000 UTC m=+2931.877614415" lastFinishedPulling="2025-09-29 10:26:41.752877973 +0000 UTC m=+2937.608806001" observedRunningTime="2025-09-29 10:26:42.104437435 +0000 UTC m=+2937.960365463" watchObservedRunningTime="2025-09-29 10:26:42.1088052 +0000 UTC m=+2937.964733228"
Sep 29 10:26:44 crc kubenswrapper[4991]: I0929 10:26:44.779689 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jpxvb"
Sep 29 10:26:44 crc kubenswrapper[4991]: I0929 10:26:44.781030 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jpxvb"
Sep 29 10:26:44 crc kubenswrapper[4991]: I0929 10:26:44.833701 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jpxvb"
Sep 29 10:26:54 crc kubenswrapper[4991]: I0929 10:26:54.828987 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jpxvb"
Sep 29 10:26:54 crc kubenswrapper[4991]: I0929 10:26:54.890808 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jpxvb"]
Sep 29 10:26:55 crc kubenswrapper[4991]: I0929 10:26:55.242102 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jpxvb" podUID="649045ee-20af-4044-a1d4-80e8c7641121" containerName="registry-server" containerID="cri-o://f0fae135efb105117ef25f6574f165aafb0f20fb03f0d6e2fac6e90e2fdc7a80" gracePeriod=2
Sep 29 10:26:56 crc kubenswrapper[4991]: I0929 10:26:56.266890 4991 generic.go:334] "Generic (PLEG): container finished" podID="649045ee-20af-4044-a1d4-80e8c7641121" containerID="f0fae135efb105117ef25f6574f165aafb0f20fb03f0d6e2fac6e90e2fdc7a80" exitCode=0
Sep 29 10:26:56 crc kubenswrapper[4991]: I0929 10:26:56.266973 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jpxvb" event={"ID":"649045ee-20af-4044-a1d4-80e8c7641121","Type":"ContainerDied","Data":"f0fae135efb105117ef25f6574f165aafb0f20fb03f0d6e2fac6e90e2fdc7a80"}
Sep 29 10:26:56 crc kubenswrapper[4991]: I0929 10:26:56.267541 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jpxvb" event={"ID":"649045ee-20af-4044-a1d4-80e8c7641121","Type":"ContainerDied","Data":"e7ba3b63e9b09b91623913def2f6da165f68cd9108c047e2f8fce5592e629c47"}
Sep 29 10:26:56 crc kubenswrapper[4991]: I0929 10:26:56.267564 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7ba3b63e9b09b91623913def2f6da165f68cd9108c047e2f8fce5592e629c47"
Sep 29 10:26:56 crc kubenswrapper[4991]: I0929 10:26:56.339408 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jpxvb"
Sep 29 10:26:56 crc kubenswrapper[4991]: I0929 10:26:56.492177 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqzz6\" (UniqueName: \"kubernetes.io/projected/649045ee-20af-4044-a1d4-80e8c7641121-kube-api-access-nqzz6\") pod \"649045ee-20af-4044-a1d4-80e8c7641121\" (UID: \"649045ee-20af-4044-a1d4-80e8c7641121\") "
Sep 29 10:26:56 crc kubenswrapper[4991]: I0929 10:26:56.492481 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/649045ee-20af-4044-a1d4-80e8c7641121-catalog-content\") pod \"649045ee-20af-4044-a1d4-80e8c7641121\" (UID: \"649045ee-20af-4044-a1d4-80e8c7641121\") "
Sep 29 10:26:56 crc kubenswrapper[4991]: I0929 10:26:56.492584 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/649045ee-20af-4044-a1d4-80e8c7641121-utilities\") pod \"649045ee-20af-4044-a1d4-80e8c7641121\" (UID: \"649045ee-20af-4044-a1d4-80e8c7641121\") "
Sep 29 10:26:56 crc kubenswrapper[4991]: I0929 10:26:56.493643 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/649045ee-20af-4044-a1d4-80e8c7641121-utilities" (OuterVolumeSpecName: "utilities") pod "649045ee-20af-4044-a1d4-80e8c7641121" (UID: "649045ee-20af-4044-a1d4-80e8c7641121"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 10:26:56 crc kubenswrapper[4991]: I0929 10:26:56.494716 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/649045ee-20af-4044-a1d4-80e8c7641121-utilities\") on node \"crc\" DevicePath \"\""
Sep 29 10:26:56 crc kubenswrapper[4991]: I0929 10:26:56.501247 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/649045ee-20af-4044-a1d4-80e8c7641121-kube-api-access-nqzz6" (OuterVolumeSpecName: "kube-api-access-nqzz6") pod "649045ee-20af-4044-a1d4-80e8c7641121" (UID: "649045ee-20af-4044-a1d4-80e8c7641121"). InnerVolumeSpecName "kube-api-access-nqzz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 10:26:56 crc kubenswrapper[4991]: I0929 10:26:56.550703 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/649045ee-20af-4044-a1d4-80e8c7641121-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "649045ee-20af-4044-a1d4-80e8c7641121" (UID: "649045ee-20af-4044-a1d4-80e8c7641121"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 10:26:56 crc kubenswrapper[4991]: I0929 10:26:56.595812 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqzz6\" (UniqueName: \"kubernetes.io/projected/649045ee-20af-4044-a1d4-80e8c7641121-kube-api-access-nqzz6\") on node \"crc\" DevicePath \"\""
Sep 29 10:26:56 crc kubenswrapper[4991]: I0929 10:26:56.595857 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/649045ee-20af-4044-a1d4-80e8c7641121-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 29 10:26:57 crc kubenswrapper[4991]: I0929 10:26:57.279250 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jpxvb"
Sep 29 10:26:57 crc kubenswrapper[4991]: I0929 10:26:57.345034 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jpxvb"]
Sep 29 10:26:57 crc kubenswrapper[4991]: I0929 10:26:57.359687 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jpxvb"]
Sep 29 10:26:58 crc kubenswrapper[4991]: I0929 10:26:58.943900 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="649045ee-20af-4044-a1d4-80e8c7641121" path="/var/lib/kubelet/pods/649045ee-20af-4044-a1d4-80e8c7641121/volumes"
Sep 29 10:27:41 crc kubenswrapper[4991]: I0929 10:27:41.832726 4991 generic.go:334] "Generic (PLEG): container finished" podID="ab589228-3979-4001-886d-8c94abef0c13" containerID="85b9c22796ff9e3f713dc74dba1a92c062befb8e0ea0593c321cf5eb1a859bff" exitCode=0
Sep 29 10:27:41 crc kubenswrapper[4991]: I0929 10:27:41.832811 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c" event={"ID":"ab589228-3979-4001-886d-8c94abef0c13","Type":"ContainerDied","Data":"85b9c22796ff9e3f713dc74dba1a92c062befb8e0ea0593c321cf5eb1a859bff"}
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.415787 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.582574 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ceilometer-compute-config-data-0\") pod \"ab589228-3979-4001-886d-8c94abef0c13\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") "
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.582706 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27pxh\" (UniqueName: \"kubernetes.io/projected/ab589228-3979-4001-886d-8c94abef0c13-kube-api-access-27pxh\") pod \"ab589228-3979-4001-886d-8c94abef0c13\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") "
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.582838 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ceilometer-compute-config-data-1\") pod \"ab589228-3979-4001-886d-8c94abef0c13\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") "
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.582881 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ceilometer-compute-config-data-2\") pod \"ab589228-3979-4001-886d-8c94abef0c13\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") "
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.582906 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-inventory\") pod \"ab589228-3979-4001-886d-8c94abef0c13\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") "
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.582935 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-telemetry-combined-ca-bundle\") pod \"ab589228-3979-4001-886d-8c94abef0c13\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") "
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.582974 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ssh-key\") pod \"ab589228-3979-4001-886d-8c94abef0c13\" (UID: \"ab589228-3979-4001-886d-8c94abef0c13\") "
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.589123 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ab589228-3979-4001-886d-8c94abef0c13" (UID: "ab589228-3979-4001-886d-8c94abef0c13"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.589178 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab589228-3979-4001-886d-8c94abef0c13-kube-api-access-27pxh" (OuterVolumeSpecName: "kube-api-access-27pxh") pod "ab589228-3979-4001-886d-8c94abef0c13" (UID: "ab589228-3979-4001-886d-8c94abef0c13"). InnerVolumeSpecName "kube-api-access-27pxh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.619138 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "ab589228-3979-4001-886d-8c94abef0c13" (UID: "ab589228-3979-4001-886d-8c94abef0c13"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.620053 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "ab589228-3979-4001-886d-8c94abef0c13" (UID: "ab589228-3979-4001-886d-8c94abef0c13"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.620383 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "ab589228-3979-4001-886d-8c94abef0c13" (UID: "ab589228-3979-4001-886d-8c94abef0c13"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.630725 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ab589228-3979-4001-886d-8c94abef0c13" (UID: "ab589228-3979-4001-886d-8c94abef0c13"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.631752 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-inventory" (OuterVolumeSpecName: "inventory") pod "ab589228-3979-4001-886d-8c94abef0c13" (UID: "ab589228-3979-4001-886d-8c94abef0c13"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.685255 4991 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.685490 4991 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.685502 4991 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-inventory\") on node \"crc\" DevicePath \"\""
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.685513 4991 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.685522 4991 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ssh-key\") on node \"crc\" DevicePath \"\""
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.685530 4991 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ab589228-3979-4001-886d-8c94abef0c13-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.685540 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27pxh\" (UniqueName: \"kubernetes.io/projected/ab589228-3979-4001-886d-8c94abef0c13-kube-api-access-27pxh\") on node \"crc\" DevicePath \"\""
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.860516 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c" event={"ID":"ab589228-3979-4001-886d-8c94abef0c13","Type":"ContainerDied","Data":"d2ffd99eaae97fb93d6aad42faecfd6eeb3ae93aa6306337052a1c8aa9243d98"}
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.860574 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2ffd99eaae97fb93d6aad42faecfd6eeb3ae93aa6306337052a1c8aa9243d98"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.860650 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.979922 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"]
Sep 29 10:27:43 crc kubenswrapper[4991]: E0929 10:27:43.980504 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="649045ee-20af-4044-a1d4-80e8c7641121" containerName="registry-server"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.980524 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="649045ee-20af-4044-a1d4-80e8c7641121" containerName="registry-server"
Sep 29 10:27:43 crc kubenswrapper[4991]: E0929 10:27:43.980555 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="649045ee-20af-4044-a1d4-80e8c7641121" containerName="extract-content"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.980561 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="649045ee-20af-4044-a1d4-80e8c7641121" containerName="extract-content"
Sep 29 10:27:43 crc kubenswrapper[4991]: E0929 10:27:43.980575 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="649045ee-20af-4044-a1d4-80e8c7641121" containerName="extract-utilities"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.980582 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="649045ee-20af-4044-a1d4-80e8c7641121" containerName="extract-utilities"
Sep 29 10:27:43 crc kubenswrapper[4991]: E0929 10:27:43.980601 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab589228-3979-4001-886d-8c94abef0c13" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.980610 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab589228-3979-4001-886d-8c94abef0c13" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.980913 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab589228-3979-4001-886d-8c94abef0c13" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.980926 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="649045ee-20af-4044-a1d4-80e8c7641121" containerName="registry-server"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.981832 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.984281 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.985235 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.985238 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.985528 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vtkz9"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.985653 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.992129 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmq4t\" (UniqueName: \"kubernetes.io/projected/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-kube-api-access-lmq4t\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.992195 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.992362 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.992408 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.992458 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.992538 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:43 crc kubenswrapper[4991]: I0929 10:27:43.992582 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:44 crc kubenswrapper[4991]: I0929 10:27:44.002431 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"]
Sep 29 10:27:44 crc kubenswrapper[4991]: I0929 10:27:44.093855 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:44 crc kubenswrapper[4991]: I0929 10:27:44.093928 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmq4t\" (UniqueName: \"kubernetes.io/projected/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-kube-api-access-lmq4t\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:44 crc kubenswrapper[4991]: I0929 10:27:44.094063 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:44 crc kubenswrapper[4991]: I0929 10:27:44.094175 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:44 crc kubenswrapper[4991]: I0929 10:27:44.094209 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:44 crc kubenswrapper[4991]: I0929 10:27:44.094249 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:44 crc kubenswrapper[4991]: I0929 10:27:44.094271 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:44 crc kubenswrapper[4991]: I0929 10:27:44.098756 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:44 crc kubenswrapper[4991]: I0929 10:27:44.098758 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:44 crc kubenswrapper[4991]: I0929 10:27:44.099521 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:44 crc kubenswrapper[4991]: I0929 10:27:44.099760 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:44 crc kubenswrapper[4991]: I0929 10:27:44.105996 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:44 crc kubenswrapper[4991]: I0929 10:27:44.112910 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmq4t\" (UniqueName: \"kubernetes.io/projected/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-kube-api-access-lmq4t\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:44 crc kubenswrapper[4991]: I0929 10:27:44.113117 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:44 crc kubenswrapper[4991]: E0929 10:27:44.195514 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab589228_3979_4001_886d_8c94abef0c13.slice/crio-d2ffd99eaae97fb93d6aad42faecfd6eeb3ae93aa6306337052a1c8aa9243d98\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab589228_3979_4001_886d_8c94abef0c13.slice\": RecentStats: unable to find data in memory cache]"
Sep 29 10:27:44 crc kubenswrapper[4991]: I0929 10:27:44.308564 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"
Sep 29 10:27:44 crc kubenswrapper[4991]: I0929 10:27:44.980731 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5"]
Sep 29 10:27:45 crc kubenswrapper[4991]: I0929 10:27:45.701868 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Sep 29 10:27:45 crc kubenswrapper[4991]: I0929 10:27:45.882559 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5" event={"ID":"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b","Type":"ContainerStarted","Data":"fee382b4585b3fe3b36fb8532934ae5d2c5b8ceb964e647073aea6052ba06e16"}
Sep 29 10:27:46 crc kubenswrapper[4991]: I0929 10:27:46.913934 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5" event={"ID":"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b","Type":"ContainerStarted","Data":"12a7f0282b98aa89c591edf74c16e5aa0a55dc5e325405b9356d4e41907d432f"}
Sep 29 10:27:46 crc kubenswrapper[4991]: I0929 10:27:46.944999 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5" podStartSLOduration=3.234130058 podStartE2EDuration="3.944977247s" podCreationTimestamp="2025-09-29 10:27:43 +0000 UTC" firstStartedPulling="2025-09-29 10:27:44.98773222 +0000 UTC m=+3000.843660248" lastFinishedPulling="2025-09-29 10:27:45.698579409 +0000 UTC m=+3001.554507437" observedRunningTime="2025-09-29 10:27:46.940509389 +0000 UTC m=+3002.796437417" watchObservedRunningTime="2025-09-29 10:27:46.944977247 +0000 UTC m=+3002.800905275"
Sep 29 10:28:37 crc kubenswrapper[4991]: I0929 10:28:37.947421 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 29 10:28:37 crc kubenswrapper[4991]: I0929 10:28:37.948163 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 29 10:29:03 crc kubenswrapper[4991]: I0929 10:29:03.975152 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h8j8m"]
Sep 29 10:29:03 crc kubenswrapper[4991]: I0929 10:29:03.978546 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h8j8m"
Sep 29 10:29:03 crc kubenswrapper[4991]: I0929 10:29:03.989220 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8j8m"]
Sep 29 10:29:04 crc kubenswrapper[4991]: I0929 10:29:04.112033 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0126a581-fa3f-408b-8c20-853821e1d786-catalog-content\") pod \"redhat-marketplace-h8j8m\" (UID: \"0126a581-fa3f-408b-8c20-853821e1d786\") " pod="openshift-marketplace/redhat-marketplace-h8j8m"
Sep 29 10:29:04 crc kubenswrapper[4991]: I0929 10:29:04.112485 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0126a581-fa3f-408b-8c20-853821e1d786-utilities\") pod \"redhat-marketplace-h8j8m\" (UID: \"0126a581-fa3f-408b-8c20-853821e1d786\") " pod="openshift-marketplace/redhat-marketplace-h8j8m"
Sep 29 10:29:04 crc kubenswrapper[4991]: I0929 10:29:04.112878 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9vd4\" (UniqueName: \"kubernetes.io/projected/0126a581-fa3f-408b-8c20-853821e1d786-kube-api-access-k9vd4\") pod \"redhat-marketplace-h8j8m\" (UID: \"0126a581-fa3f-408b-8c20-853821e1d786\") " pod="openshift-marketplace/redhat-marketplace-h8j8m"
Sep 29 10:29:04 crc kubenswrapper[4991]: I0929 10:29:04.215139 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0126a581-fa3f-408b-8c20-853821e1d786-utilities\") pod \"redhat-marketplace-h8j8m\" (UID: \"0126a581-fa3f-408b-8c20-853821e1d786\") " pod="openshift-marketplace/redhat-marketplace-h8j8m"
Sep 29 10:29:04 crc kubenswrapper[4991]: I0929 10:29:04.215224 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9vd4\" (UniqueName: \"kubernetes.io/projected/0126a581-fa3f-408b-8c20-853821e1d786-kube-api-access-k9vd4\") pod \"redhat-marketplace-h8j8m\" (UID: \"0126a581-fa3f-408b-8c20-853821e1d786\") " pod="openshift-marketplace/redhat-marketplace-h8j8m"
Sep 29 10:29:04 crc kubenswrapper[4991]: I0929 10:29:04.215338 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0126a581-fa3f-408b-8c20-853821e1d786-catalog-content\") pod \"redhat-marketplace-h8j8m\" (UID: \"0126a581-fa3f-408b-8c20-853821e1d786\") " pod="openshift-marketplace/redhat-marketplace-h8j8m"
Sep 29 10:29:04 crc kubenswrapper[4991]: I0929 10:29:04.215848 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0126a581-fa3f-408b-8c20-853821e1d786-catalog-content\") pod \"redhat-marketplace-h8j8m\" (UID: \"0126a581-fa3f-408b-8c20-853821e1d786\") " pod="openshift-marketplace/redhat-marketplace-h8j8m"
Sep 29 10:29:04 crc kubenswrapper[4991]: I0929 10:29:04.215844 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0126a581-fa3f-408b-8c20-853821e1d786-utilities\") pod \"redhat-marketplace-h8j8m\" (UID: \"0126a581-fa3f-408b-8c20-853821e1d786\") " pod="openshift-marketplace/redhat-marketplace-h8j8m"
Sep 29 10:29:04 crc kubenswrapper[4991]: I0929 10:29:04.267750 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9vd4\" (UniqueName: \"kubernetes.io/projected/0126a581-fa3f-408b-8c20-853821e1d786-kube-api-access-k9vd4\") pod \"redhat-marketplace-h8j8m\" (UID: \"0126a581-fa3f-408b-8c20-853821e1d786\") " pod="openshift-marketplace/redhat-marketplace-h8j8m"
Sep 29 10:29:04 crc kubenswrapper[4991]: I0929 10:29:04.329619 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h8j8m"
Sep 29 10:29:04 crc kubenswrapper[4991]: I0929 10:29:04.866820 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8j8m"]
Sep 29 10:29:04 crc kubenswrapper[4991]: W0929 10:29:04.873528 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0126a581_fa3f_408b_8c20_853821e1d786.slice/crio-50045a4eac59eb7e5fde80fba407921c898e79ce93394430abfa9cfe62cfdc01 WatchSource:0}: Error finding container 50045a4eac59eb7e5fde80fba407921c898e79ce93394430abfa9cfe62cfdc01: Status 404 returned error can't find the container with id 50045a4eac59eb7e5fde80fba407921c898e79ce93394430abfa9cfe62cfdc01
Sep 29 10:29:05 crc kubenswrapper[4991]: I0929 10:29:05.875792 4991 generic.go:334] "Generic (PLEG): container finished" podID="0126a581-fa3f-408b-8c20-853821e1d786" containerID="8eb8cd48e79565ce8bb8a7002b0ab0edcb3ddd6046434acf78992b123fd2d6ff" exitCode=0
Sep 29 10:29:05 crc kubenswrapper[4991]: I0929 10:29:05.875852 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8j8m" event={"ID":"0126a581-fa3f-408b-8c20-853821e1d786","Type":"ContainerDied","Data":"8eb8cd48e79565ce8bb8a7002b0ab0edcb3ddd6046434acf78992b123fd2d6ff"}
Sep 29 10:29:05 crc kubenswrapper[4991]: I0929 10:29:05.875910 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8j8m" event={"ID":"0126a581-fa3f-408b-8c20-853821e1d786","Type":"ContainerStarted","Data":"50045a4eac59eb7e5fde80fba407921c898e79ce93394430abfa9cfe62cfdc01"}
Sep 29 10:29:06 crc kubenswrapper[4991]: I0929 10:29:06.887721 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8j8m" event={"ID":"0126a581-fa3f-408b-8c20-853821e1d786","Type":"ContainerStarted","Data":"236bb53704ae5a475fd8dfa40a1400dd0bd0b0b5177077aaa2e6462d6b9a27b1"}
Sep 29 10:29:07 crc kubenswrapper[4991]: I0929 10:29:07.900554 4991 generic.go:334] "Generic (PLEG): container finished" podID="0126a581-fa3f-408b-8c20-853821e1d786" containerID="236bb53704ae5a475fd8dfa40a1400dd0bd0b0b5177077aaa2e6462d6b9a27b1" exitCode=0
Sep 29 10:29:07 crc kubenswrapper[4991]: I0929 10:29:07.900663 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8j8m" event={"ID":"0126a581-fa3f-408b-8c20-853821e1d786","Type":"ContainerDied","Data":"236bb53704ae5a475fd8dfa40a1400dd0bd0b0b5177077aaa2e6462d6b9a27b1"}
Sep 29 10:29:07 crc kubenswrapper[4991]: I0929 10:29:07.946974 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 29 10:29:07 crc kubenswrapper[4991]: I0929 10:29:07.947041 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 29 10:29:08 crc kubenswrapper[4991]: I0929 10:29:08.952717 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8j8m" event={"ID":"0126a581-fa3f-408b-8c20-853821e1d786","Type":"ContainerStarted","Data":"c3fd70dd51aee28c87e5d106ae1e4d20fc28fdab45aca24dc379c7d711c74fea"}
Sep 29 10:29:08 crc kubenswrapper[4991]: I0929 10:29:08.964682 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h8j8m" podStartSLOduration=3.307226558 podStartE2EDuration="5.964661735s" podCreationTimestamp="2025-09-29 10:29:03 +0000 UTC" firstStartedPulling="2025-09-29 10:29:05.878159313 +0000 UTC m=+3081.734087341" lastFinishedPulling="2025-09-29 10:29:08.53559449 +0000 UTC m=+3084.391522518" observedRunningTime="2025-09-29 10:29:08.959319245 +0000 UTC m=+3084.815247273" watchObservedRunningTime="2025-09-29 10:29:08.964661735 +0000 UTC m=+3084.820589763"
Sep 29 10:29:14 crc kubenswrapper[4991]: I0929 10:29:14.330684 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h8j8m"
Sep 29 10:29:14 crc kubenswrapper[4991]: I0929 10:29:14.331526 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h8j8m"
Sep 29 10:29:14 crc kubenswrapper[4991]: I0929 10:29:14.387998 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h8j8m"
Sep 29 10:29:15 crc kubenswrapper[4991]: I0929 10:29:15.050774 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h8j8m"
Sep 29 10:29:15 crc kubenswrapper[4991]: I0929 10:29:15.106856 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8j8m"]
Sep 29 10:29:17 crc kubenswrapper[4991]: I0929 10:29:17.013519 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h8j8m" podUID="0126a581-fa3f-408b-8c20-853821e1d786" containerName="registry-server" containerID="cri-o://c3fd70dd51aee28c87e5d106ae1e4d20fc28fdab45aca24dc379c7d711c74fea" gracePeriod=2
Sep 29 10:29:17 crc kubenswrapper[4991]: I0929 10:29:17.539534 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h8j8m"
Sep 29 10:29:17 crc kubenswrapper[4991]: I0929 10:29:17.653970 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9vd4\" (UniqueName: \"kubernetes.io/projected/0126a581-fa3f-408b-8c20-853821e1d786-kube-api-access-k9vd4\") pod \"0126a581-fa3f-408b-8c20-853821e1d786\" (UID: \"0126a581-fa3f-408b-8c20-853821e1d786\") "
Sep 29 10:29:17 crc kubenswrapper[4991]: I0929 10:29:17.654039 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0126a581-fa3f-408b-8c20-853821e1d786-utilities\") pod \"0126a581-fa3f-408b-8c20-853821e1d786\" (UID: \"0126a581-fa3f-408b-8c20-853821e1d786\") "
Sep 29 10:29:17 crc kubenswrapper[4991]: I0929 10:29:17.654085 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0126a581-fa3f-408b-8c20-853821e1d786-catalog-content\") pod \"0126a581-fa3f-408b-8c20-853821e1d786\" (UID: \"0126a581-fa3f-408b-8c20-853821e1d786\") "
Sep 29 10:29:17 crc kubenswrapper[4991]: I0929 10:29:17.655252 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0126a581-fa3f-408b-8c20-853821e1d786-utilities" (OuterVolumeSpecName: "utilities") pod "0126a581-fa3f-408b-8c20-853821e1d786" (UID: "0126a581-fa3f-408b-8c20-853821e1d786"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 10:29:17 crc kubenswrapper[4991]: I0929 10:29:17.666554 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0126a581-fa3f-408b-8c20-853821e1d786-kube-api-access-k9vd4" (OuterVolumeSpecName: "kube-api-access-k9vd4") pod "0126a581-fa3f-408b-8c20-853821e1d786" (UID: "0126a581-fa3f-408b-8c20-853821e1d786"). InnerVolumeSpecName "kube-api-access-k9vd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 10:29:17 crc kubenswrapper[4991]: I0929 10:29:17.670753 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0126a581-fa3f-408b-8c20-853821e1d786-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0126a581-fa3f-408b-8c20-853821e1d786" (UID: "0126a581-fa3f-408b-8c20-853821e1d786"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:29:17 crc kubenswrapper[4991]: I0929 10:29:17.756628 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9vd4\" (UniqueName: \"kubernetes.io/projected/0126a581-fa3f-408b-8c20-853821e1d786-kube-api-access-k9vd4\") on node \"crc\" DevicePath \"\"" Sep 29 10:29:17 crc kubenswrapper[4991]: I0929 10:29:17.757252 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0126a581-fa3f-408b-8c20-853821e1d786-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 10:29:17 crc kubenswrapper[4991]: I0929 10:29:17.757364 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0126a581-fa3f-408b-8c20-853821e1d786-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 10:29:18 crc kubenswrapper[4991]: I0929 10:29:18.028453 4991 generic.go:334] "Generic (PLEG): container finished" podID="0126a581-fa3f-408b-8c20-853821e1d786" containerID="c3fd70dd51aee28c87e5d106ae1e4d20fc28fdab45aca24dc379c7d711c74fea" exitCode=0 Sep 29 10:29:18 crc kubenswrapper[4991]: I0929 10:29:18.028505 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8j8m" event={"ID":"0126a581-fa3f-408b-8c20-853821e1d786","Type":"ContainerDied","Data":"c3fd70dd51aee28c87e5d106ae1e4d20fc28fdab45aca24dc379c7d711c74fea"} Sep 29 10:29:18 crc kubenswrapper[4991]: I0929 10:29:18.028533 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8j8m" event={"ID":"0126a581-fa3f-408b-8c20-853821e1d786","Type":"ContainerDied","Data":"50045a4eac59eb7e5fde80fba407921c898e79ce93394430abfa9cfe62cfdc01"} Sep 29 10:29:18 crc kubenswrapper[4991]: I0929 10:29:18.028538 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h8j8m" Sep 29 10:29:18 crc kubenswrapper[4991]: I0929 10:29:18.028549 4991 scope.go:117] "RemoveContainer" containerID="c3fd70dd51aee28c87e5d106ae1e4d20fc28fdab45aca24dc379c7d711c74fea" Sep 29 10:29:18 crc kubenswrapper[4991]: I0929 10:29:18.056010 4991 scope.go:117] "RemoveContainer" containerID="236bb53704ae5a475fd8dfa40a1400dd0bd0b0b5177077aaa2e6462d6b9a27b1" Sep 29 10:29:18 crc kubenswrapper[4991]: I0929 10:29:18.067759 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8j8m"] Sep 29 10:29:18 crc kubenswrapper[4991]: I0929 10:29:18.077886 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8j8m"] Sep 29 10:29:18 crc kubenswrapper[4991]: I0929 10:29:18.099296 4991 scope.go:117] "RemoveContainer" containerID="8eb8cd48e79565ce8bb8a7002b0ab0edcb3ddd6046434acf78992b123fd2d6ff" Sep 29 10:29:18 crc kubenswrapper[4991]: I0929 10:29:18.173061 4991 scope.go:117] "RemoveContainer" containerID="c3fd70dd51aee28c87e5d106ae1e4d20fc28fdab45aca24dc379c7d711c74fea" Sep 29 10:29:18 crc kubenswrapper[4991]: E0929 10:29:18.173593 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3fd70dd51aee28c87e5d106ae1e4d20fc28fdab45aca24dc379c7d711c74fea\": container with ID starting with c3fd70dd51aee28c87e5d106ae1e4d20fc28fdab45aca24dc379c7d711c74fea not found: ID does not exist" containerID="c3fd70dd51aee28c87e5d106ae1e4d20fc28fdab45aca24dc379c7d711c74fea" Sep 29 10:29:18 crc kubenswrapper[4991]: I0929 10:29:18.173641 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3fd70dd51aee28c87e5d106ae1e4d20fc28fdab45aca24dc379c7d711c74fea"} err="failed to get container status \"c3fd70dd51aee28c87e5d106ae1e4d20fc28fdab45aca24dc379c7d711c74fea\": rpc error: code = NotFound desc = could not find container \"c3fd70dd51aee28c87e5d106ae1e4d20fc28fdab45aca24dc379c7d711c74fea\": container with ID starting with c3fd70dd51aee28c87e5d106ae1e4d20fc28fdab45aca24dc379c7d711c74fea not found: ID does not exist" Sep 29 10:29:18 crc kubenswrapper[4991]: I0929 10:29:18.173672 4991 scope.go:117] "RemoveContainer" containerID="236bb53704ae5a475fd8dfa40a1400dd0bd0b0b5177077aaa2e6462d6b9a27b1" Sep 29 10:29:18 crc kubenswrapper[4991]: E0929 10:29:18.174713 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"236bb53704ae5a475fd8dfa40a1400dd0bd0b0b5177077aaa2e6462d6b9a27b1\": container with ID starting with 236bb53704ae5a475fd8dfa40a1400dd0bd0b0b5177077aaa2e6462d6b9a27b1 not found: ID does not exist" containerID="236bb53704ae5a475fd8dfa40a1400dd0bd0b0b5177077aaa2e6462d6b9a27b1" Sep 29 10:29:18 crc kubenswrapper[4991]: I0929 10:29:18.174739 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"236bb53704ae5a475fd8dfa40a1400dd0bd0b0b5177077aaa2e6462d6b9a27b1"} err="failed to get container status \"236bb53704ae5a475fd8dfa40a1400dd0bd0b0b5177077aaa2e6462d6b9a27b1\": rpc error: code = NotFound desc = could not find container \"236bb53704ae5a475fd8dfa40a1400dd0bd0b0b5177077aaa2e6462d6b9a27b1\": container with ID starting with 236bb53704ae5a475fd8dfa40a1400dd0bd0b0b5177077aaa2e6462d6b9a27b1 not found: ID does not exist" Sep 29 10:29:18 crc kubenswrapper[4991]: I0929 10:29:18.174756 4991 scope.go:117] "RemoveContainer" 
containerID="8eb8cd48e79565ce8bb8a7002b0ab0edcb3ddd6046434acf78992b123fd2d6ff" Sep 29 10:29:18 crc kubenswrapper[4991]: E0929 10:29:18.175312 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eb8cd48e79565ce8bb8a7002b0ab0edcb3ddd6046434acf78992b123fd2d6ff\": container with ID starting with 8eb8cd48e79565ce8bb8a7002b0ab0edcb3ddd6046434acf78992b123fd2d6ff not found: ID does not exist" containerID="8eb8cd48e79565ce8bb8a7002b0ab0edcb3ddd6046434acf78992b123fd2d6ff" Sep 29 10:29:18 crc kubenswrapper[4991]: I0929 10:29:18.175337 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eb8cd48e79565ce8bb8a7002b0ab0edcb3ddd6046434acf78992b123fd2d6ff"} err="failed to get container status \"8eb8cd48e79565ce8bb8a7002b0ab0edcb3ddd6046434acf78992b123fd2d6ff\": rpc error: code = NotFound desc = could not find container \"8eb8cd48e79565ce8bb8a7002b0ab0edcb3ddd6046434acf78992b123fd2d6ff\": container with ID starting with 8eb8cd48e79565ce8bb8a7002b0ab0edcb3ddd6046434acf78992b123fd2d6ff not found: ID does not exist" Sep 29 10:29:18 crc kubenswrapper[4991]: I0929 10:29:18.942168 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0126a581-fa3f-408b-8c20-853821e1d786" path="/var/lib/kubelet/pods/0126a581-fa3f-408b-8c20-853821e1d786/volumes" Sep 29 10:29:37 crc kubenswrapper[4991]: I0929 10:29:37.946469 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:29:37 crc kubenswrapper[4991]: I0929 10:29:37.947223 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:29:37 crc kubenswrapper[4991]: I0929 10:29:37.947293 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 10:29:37 crc kubenswrapper[4991]: I0929 10:29:37.948945 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 10:29:37 crc kubenswrapper[4991]: I0929 10:29:37.949096 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" gracePeriod=600 Sep 29 10:29:38 crc kubenswrapper[4991]: E0929 10:29:38.076361 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:29:38 crc kubenswrapper[4991]: I0929 10:29:38.269305 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" exitCode=0 Sep 29 10:29:38 crc kubenswrapper[4991]: I0929 10:29:38.269365 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda"} Sep 29 10:29:38 crc kubenswrapper[4991]: I0929 10:29:38.269698 4991 scope.go:117] "RemoveContainer" containerID="46878dc34c33633b7422d680f8ddb04d99d92908cecec7db72fe7b48739d749c" Sep 29 10:29:38 crc kubenswrapper[4991]: I0929 10:29:38.270496 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:29:38 crc kubenswrapper[4991]: E0929 10:29:38.270814 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:29:49 crc kubenswrapper[4991]: I0929 10:29:49.926262 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:29:49 crc kubenswrapper[4991]: E0929 10:29:49.927081 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:29:55 crc kubenswrapper[4991]: I0929 10:29:55.488338 4991 generic.go:334] "Generic (PLEG): container finished" podID="e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b" containerID="12a7f0282b98aa89c591edf74c16e5aa0a55dc5e325405b9356d4e41907d432f" exitCode=0 Sep 29 10:29:55 crc kubenswrapper[4991]: I0929 10:29:55.488379 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5" event={"ID":"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b","Type":"ContainerDied","Data":"12a7f0282b98aa89c591edf74c16e5aa0a55dc5e325405b9356d4e41907d432f"} Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.029144 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.133976 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmq4t\" (UniqueName: \"kubernetes.io/projected/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-kube-api-access-lmq4t\") pod \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.134212 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ceilometer-ipmi-config-data-0\") pod \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.134381 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-inventory\") pod \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.134433 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ssh-key\") pod \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.134489 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ceilometer-ipmi-config-data-2\") pod \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.134551 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-telemetry-power-monitoring-combined-ca-bundle\") pod \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.134590 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ceilometer-ipmi-config-data-1\") pod \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\" (UID: \"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b\") " Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.141795 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b" (UID: "e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.148210 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-kube-api-access-lmq4t" (OuterVolumeSpecName: "kube-api-access-lmq4t") pod "e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b" (UID: "e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b"). InnerVolumeSpecName "kube-api-access-lmq4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.172283 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b" (UID: "e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.172877 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-inventory" (OuterVolumeSpecName: "inventory") pod "e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b" (UID: "e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.177539 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b" (UID: "e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.180781 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b" (UID: "e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.196040 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b" (UID: "e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.237187 4991 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-inventory\") on node \"crc\" DevicePath \"\"" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.237234 4991 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.237247 4991 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.237260 4991 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.237275 4991 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.237290 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmq4t\" (UniqueName: \"kubernetes.io/projected/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-kube-api-access-lmq4t\") on node \"crc\" DevicePath \"\"" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.237304 4991 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.516158 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5" event={"ID":"e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b","Type":"ContainerDied","Data":"fee382b4585b3fe3b36fb8532934ae5d2c5b8ceb964e647073aea6052ba06e16"} Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.516879 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fee382b4585b3fe3b36fb8532934ae5d2c5b8ceb964e647073aea6052ba06e16" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.516267 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.638755 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6"] Sep 29 10:29:57 crc kubenswrapper[4991]: E0929 10:29:57.639608 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0126a581-fa3f-408b-8c20-853821e1d786" containerName="registry-server" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.639640 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="0126a581-fa3f-408b-8c20-853821e1d786" containerName="registry-server" Sep 29 10:29:57 crc kubenswrapper[4991]: E0929 10:29:57.639699 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0126a581-fa3f-408b-8c20-853821e1d786" containerName="extract-utilities" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.639712 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="0126a581-fa3f-408b-8c20-853821e1d786" containerName="extract-utilities" Sep 29 10:29:57 crc kubenswrapper[4991]: E0929 10:29:57.639768 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.639782 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Sep 29 10:29:57 crc kubenswrapper[4991]: E0929 10:29:57.639801 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0126a581-fa3f-408b-8c20-853821e1d786" containerName="extract-content" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.639812 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="0126a581-fa3f-408b-8c20-853821e1d786" containerName="extract-content" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.640220 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="0126a581-fa3f-408b-8c20-853821e1d786" containerName="registry-server" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.640249 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.641830 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.645720 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.645839 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.645923 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.646059 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vtkz9" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.646150 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.656190 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6"] Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.775017 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-l8gw6\" (UID: \"f4ad28ea-5728-404a-95fb-38793b064167\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.775367 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-l8gw6\" (UID: \"f4ad28ea-5728-404a-95fb-38793b064167\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.775436 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnqcm\" (UniqueName: \"kubernetes.io/projected/f4ad28ea-5728-404a-95fb-38793b064167-kube-api-access-cnqcm\") pod \"logging-edpm-deployment-openstack-edpm-ipam-l8gw6\" (UID: \"f4ad28ea-5728-404a-95fb-38793b064167\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.775600 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-l8gw6\" (UID: \"f4ad28ea-5728-404a-95fb-38793b064167\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.775897 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-l8gw6\" (UID: \"f4ad28ea-5728-404a-95fb-38793b064167\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.879004 4991 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-l8gw6\" (UID: \"f4ad28ea-5728-404a-95fb-38793b064167\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.879232 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-l8gw6\" (UID: \"f4ad28ea-5728-404a-95fb-38793b064167\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.879272 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnqcm\" (UniqueName: \"kubernetes.io/projected/f4ad28ea-5728-404a-95fb-38793b064167-kube-api-access-cnqcm\") pod \"logging-edpm-deployment-openstack-edpm-ipam-l8gw6\" (UID: \"f4ad28ea-5728-404a-95fb-38793b064167\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.879356 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-l8gw6\" (UID: \"f4ad28ea-5728-404a-95fb-38793b064167\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.879517 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-l8gw6\" (UID: \"f4ad28ea-5728-404a-95fb-38793b064167\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.885112 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-l8gw6\" (UID: \"f4ad28ea-5728-404a-95fb-38793b064167\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.885172 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-l8gw6\" (UID: \"f4ad28ea-5728-404a-95fb-38793b064167\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.885707 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-l8gw6\" (UID: \"f4ad28ea-5728-404a-95fb-38793b064167\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.890592 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-l8gw6\" (UID: \"f4ad28ea-5728-404a-95fb-38793b064167\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.915870 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnqcm\" (UniqueName: \"kubernetes.io/projected/f4ad28ea-5728-404a-95fb-38793b064167-kube-api-access-cnqcm\") pod \"logging-edpm-deployment-openstack-edpm-ipam-l8gw6\" (UID: \"f4ad28ea-5728-404a-95fb-38793b064167\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" Sep 29 10:29:57 crc kubenswrapper[4991]: I0929 10:29:57.982243 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" Sep 29 10:29:58 crc kubenswrapper[4991]: I0929 10:29:58.611295 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6"] Sep 29 10:29:59 crc kubenswrapper[4991]: I0929 10:29:59.541700 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" event={"ID":"f4ad28ea-5728-404a-95fb-38793b064167","Type":"ContainerStarted","Data":"36d564376a1430cff5c18494c9d3b7d064f636914349cd311edb57f1e00e634b"} Sep 29 10:30:00 crc kubenswrapper[4991]: I0929 10:30:00.174703 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319030-fnpwt"] Sep 29 10:30:00 crc kubenswrapper[4991]: I0929 10:30:00.177205 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319030-fnpwt" Sep 29 10:30:00 crc kubenswrapper[4991]: I0929 10:30:00.179910 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 29 10:30:00 crc kubenswrapper[4991]: I0929 10:30:00.181531 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 29 10:30:00 crc kubenswrapper[4991]: I0929 10:30:00.184431 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319030-fnpwt"] Sep 29 10:30:00 crc kubenswrapper[4991]: I0929 10:30:00.242675 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6ffz\" (UniqueName: \"kubernetes.io/projected/ef3c413b-e822-46d6-b4a9-768a911355a3-kube-api-access-t6ffz\") pod \"collect-profiles-29319030-fnpwt\" (UID: \"ef3c413b-e822-46d6-b4a9-768a911355a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319030-fnpwt" Sep 29 10:30:00 crc kubenswrapper[4991]: I0929 10:30:00.242730 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef3c413b-e822-46d6-b4a9-768a911355a3-secret-volume\") pod \"collect-profiles-29319030-fnpwt\" (UID: \"ef3c413b-e822-46d6-b4a9-768a911355a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319030-fnpwt" Sep 29 10:30:00 crc kubenswrapper[4991]: I0929 10:30:00.242801 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ef3c413b-e822-46d6-b4a9-768a911355a3-config-volume\") pod \"collect-profiles-29319030-fnpwt\" (UID: \"ef3c413b-e822-46d6-b4a9-768a911355a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319030-fnpwt" Sep 29 10:30:00 crc kubenswrapper[4991]: I0929 10:30:00.345402 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef3c413b-e822-46d6-b4a9-768a911355a3-config-volume\") pod \"collect-profiles-29319030-fnpwt\" (UID: \"ef3c413b-e822-46d6-b4a9-768a911355a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319030-fnpwt" Sep 29 10:30:00 crc kubenswrapper[4991]: I0929 10:30:00.345657 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6ffz\" (UniqueName: \"kubernetes.io/projected/ef3c413b-e822-46d6-b4a9-768a911355a3-kube-api-access-t6ffz\") pod \"collect-profiles-29319030-fnpwt\" (UID: \"ef3c413b-e822-46d6-b4a9-768a911355a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319030-fnpwt" Sep 29 10:30:00 crc kubenswrapper[4991]: I0929 10:30:00.345699 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef3c413b-e822-46d6-b4a9-768a911355a3-secret-volume\") pod \"collect-profiles-29319030-fnpwt\" (UID: \"ef3c413b-e822-46d6-b4a9-768a911355a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319030-fnpwt" Sep 29 10:30:00 crc kubenswrapper[4991]: I0929 10:30:00.346356 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef3c413b-e822-46d6-b4a9-768a911355a3-config-volume\") pod \"collect-profiles-29319030-fnpwt\" (UID: \"ef3c413b-e822-46d6-b4a9-768a911355a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319030-fnpwt" Sep 29 10:30:00 crc kubenswrapper[4991]: I0929 10:30:00.360715 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef3c413b-e822-46d6-b4a9-768a911355a3-secret-volume\") pod \"collect-profiles-29319030-fnpwt\" (UID: \"ef3c413b-e822-46d6-b4a9-768a911355a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319030-fnpwt" Sep 29 10:30:00 crc kubenswrapper[4991]: I0929 10:30:00.364381 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6ffz\" (UniqueName: \"kubernetes.io/projected/ef3c413b-e822-46d6-b4a9-768a911355a3-kube-api-access-t6ffz\") pod \"collect-profiles-29319030-fnpwt\" (UID: \"ef3c413b-e822-46d6-b4a9-768a911355a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319030-fnpwt" Sep 29 10:30:00 crc kubenswrapper[4991]: I0929 10:30:00.518446 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319030-fnpwt" Sep 29 10:30:00 crc kubenswrapper[4991]: I0929 10:30:00.926847 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:30:00 crc kubenswrapper[4991]: E0929 10:30:00.928188 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:30:00 crc kubenswrapper[4991]: W0929 10:30:00.992420 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef3c413b_e822_46d6_b4a9_768a911355a3.slice/crio-d02ba3f9ccac95b2052493b673d90547976f1e0428e4a25c4cb4200bb9816e84 WatchSource:0}: Error finding container d02ba3f9ccac95b2052493b673d90547976f1e0428e4a25c4cb4200bb9816e84: Status 404 returned error can't find the container with id d02ba3f9ccac95b2052493b673d90547976f1e0428e4a25c4cb4200bb9816e84 Sep 29 10:30:00 crc kubenswrapper[4991]: I0929 10:30:00.993285 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319030-fnpwt"] Sep 29 10:30:01 crc kubenswrapper[4991]: I0929 10:30:01.568699 4991 generic.go:334] "Generic (PLEG): container finished" podID="ef3c413b-e822-46d6-b4a9-768a911355a3" containerID="c01d29bf2241ffff842552fdd4188722957e4b3d6bcb244d2684fb0b3ff91bdd" exitCode=0 Sep 29 10:30:01 crc kubenswrapper[4991]: I0929 10:30:01.568738 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319030-fnpwt" event={"ID":"ef3c413b-e822-46d6-b4a9-768a911355a3","Type":"ContainerDied","Data":"c01d29bf2241ffff842552fdd4188722957e4b3d6bcb244d2684fb0b3ff91bdd"} Sep 29 10:30:01 crc kubenswrapper[4991]: I0929 10:30:01.568763 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319030-fnpwt" event={"ID":"ef3c413b-e822-46d6-b4a9-768a911355a3","Type":"ContainerStarted","Data":"d02ba3f9ccac95b2052493b673d90547976f1e0428e4a25c4cb4200bb9816e84"} Sep 29 10:30:02 crc kubenswrapper[4991]: I0929 10:30:02.581880 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" event={"ID":"f4ad28ea-5728-404a-95fb-38793b064167","Type":"ContainerStarted","Data":"0d676a6e2f7ad747e8e89f9ecb20755e47018c724d357098caaf2d887fd56007"} Sep 29 10:30:02 crc kubenswrapper[4991]: I0929 10:30:02.610077 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" podStartSLOduration=2.00996391 podStartE2EDuration="5.610043226s" podCreationTimestamp="2025-09-29 10:29:57 +0000 UTC" firstStartedPulling="2025-09-29 10:29:58.617432613 +0000 UTC m=+3134.473360641" lastFinishedPulling="2025-09-29 10:30:02.217511929 +0000 UTC m=+3138.073439957" observedRunningTime="2025-09-29 10:30:02.600874615 +0000 UTC m=+3138.456802663" watchObservedRunningTime="2025-09-29 10:30:02.610043226 +0000 UTC m=+3138.465971254" Sep 29 10:30:03 crc kubenswrapper[4991]: I0929 10:30:03.017497 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319030-fnpwt" Sep 29 10:30:03 crc kubenswrapper[4991]: I0929 10:30:03.126479 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6ffz\" (UniqueName: \"kubernetes.io/projected/ef3c413b-e822-46d6-b4a9-768a911355a3-kube-api-access-t6ffz\") pod \"ef3c413b-e822-46d6-b4a9-768a911355a3\" (UID: \"ef3c413b-e822-46d6-b4a9-768a911355a3\") " Sep 29 10:30:03 crc kubenswrapper[4991]: I0929 10:30:03.126645 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef3c413b-e822-46d6-b4a9-768a911355a3-secret-volume\") pod \"ef3c413b-e822-46d6-b4a9-768a911355a3\" (UID: \"ef3c413b-e822-46d6-b4a9-768a911355a3\") " Sep 29 10:30:03 crc kubenswrapper[4991]: I0929 10:30:03.126696 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef3c413b-e822-46d6-b4a9-768a911355a3-config-volume\") pod \"ef3c413b-e822-46d6-b4a9-768a911355a3\" (UID: \"ef3c413b-e822-46d6-b4a9-768a911355a3\") " Sep 29 10:30:03 crc kubenswrapper[4991]: I0929 10:30:03.127571 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef3c413b-e822-46d6-b4a9-768a911355a3-config-volume" (OuterVolumeSpecName: "config-volume") pod "ef3c413b-e822-46d6-b4a9-768a911355a3" (UID: "ef3c413b-e822-46d6-b4a9-768a911355a3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 10:30:03 crc kubenswrapper[4991]: I0929 10:30:03.133144 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef3c413b-e822-46d6-b4a9-768a911355a3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ef3c413b-e822-46d6-b4a9-768a911355a3" (UID: "ef3c413b-e822-46d6-b4a9-768a911355a3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:30:03 crc kubenswrapper[4991]: I0929 10:30:03.133992 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef3c413b-e822-46d6-b4a9-768a911355a3-kube-api-access-t6ffz" (OuterVolumeSpecName: "kube-api-access-t6ffz") pod "ef3c413b-e822-46d6-b4a9-768a911355a3" (UID: "ef3c413b-e822-46d6-b4a9-768a911355a3"). InnerVolumeSpecName "kube-api-access-t6ffz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:30:03 crc kubenswrapper[4991]: I0929 10:30:03.230534 4991 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef3c413b-e822-46d6-b4a9-768a911355a3-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 29 10:30:03 crc kubenswrapper[4991]: I0929 10:30:03.230566 4991 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef3c413b-e822-46d6-b4a9-768a911355a3-config-volume\") on node \"crc\" DevicePath \"\"" Sep 29 10:30:03 crc kubenswrapper[4991]: I0929 10:30:03.230576 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6ffz\" (UniqueName: \"kubernetes.io/projected/ef3c413b-e822-46d6-b4a9-768a911355a3-kube-api-access-t6ffz\") on node \"crc\" DevicePath \"\"" Sep 29 10:30:03 crc kubenswrapper[4991]: I0929 10:30:03.596296 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319030-fnpwt" Sep 29 10:30:03 crc kubenswrapper[4991]: I0929 10:30:03.596327 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319030-fnpwt" event={"ID":"ef3c413b-e822-46d6-b4a9-768a911355a3","Type":"ContainerDied","Data":"d02ba3f9ccac95b2052493b673d90547976f1e0428e4a25c4cb4200bb9816e84"} Sep 29 10:30:03 crc kubenswrapper[4991]: I0929 10:30:03.596474 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d02ba3f9ccac95b2052493b673d90547976f1e0428e4a25c4cb4200bb9816e84" Sep 29 10:30:04 crc kubenswrapper[4991]: I0929 10:30:04.111850 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29318985-gwtmc"] Sep 29 10:30:04 crc kubenswrapper[4991]: I0929 10:30:04.129877 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29318985-gwtmc"] Sep 29 10:30:04 crc kubenswrapper[4991]: I0929 10:30:04.953159 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf87699d-b4ef-43ee-a37c-bdf593a7fd24" path="/var/lib/kubelet/pods/bf87699d-b4ef-43ee-a37c-bdf593a7fd24/volumes" Sep 29 10:30:14 crc kubenswrapper[4991]: I0929 10:30:14.936010 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:30:14 crc kubenswrapper[4991]: E0929 10:30:14.936969 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:30:18 crc kubenswrapper[4991]: I0929 10:30:18.771721 4991 generic.go:334] "Generic (PLEG): container finished" podID="f4ad28ea-5728-404a-95fb-38793b064167" containerID="0d676a6e2f7ad747e8e89f9ecb20755e47018c724d357098caaf2d887fd56007" exitCode=0 Sep 29 10:30:18 crc kubenswrapper[4991]: I0929 10:30:18.771853 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" event={"ID":"f4ad28ea-5728-404a-95fb-38793b064167","Type":"ContainerDied","Data":"0d676a6e2f7ad747e8e89f9ecb20755e47018c724d357098caaf2d887fd56007"} Sep 29 10:30:20 crc kubenswrapper[4991]: I0929 10:30:20.430098 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" Sep 29 10:30:20 crc kubenswrapper[4991]: I0929 10:30:20.534786 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-logging-compute-config-data-1\") pod \"f4ad28ea-5728-404a-95fb-38793b064167\" (UID: \"f4ad28ea-5728-404a-95fb-38793b064167\") " Sep 29 10:30:20 crc kubenswrapper[4991]: I0929 10:30:20.535006 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-logging-compute-config-data-0\") pod \"f4ad28ea-5728-404a-95fb-38793b064167\" (UID: \"f4ad28ea-5728-404a-95fb-38793b064167\") " Sep 29 10:30:20 crc kubenswrapper[4991]: I0929 10:30:20.535056 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-inventory\") pod \"f4ad28ea-5728-404a-95fb-38793b064167\" (UID: \"f4ad28ea-5728-404a-95fb-38793b064167\") " Sep 29 10:30:20 crc kubenswrapper[4991]: I0929 10:30:20.535096 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-ssh-key\") pod \"f4ad28ea-5728-404a-95fb-38793b064167\" (UID: \"f4ad28ea-5728-404a-95fb-38793b064167\") " Sep 29 10:30:20 crc kubenswrapper[4991]: I0929 10:30:20.535198 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnqcm\" (UniqueName: \"kubernetes.io/projected/f4ad28ea-5728-404a-95fb-38793b064167-kube-api-access-cnqcm\") pod \"f4ad28ea-5728-404a-95fb-38793b064167\" (UID: \"f4ad28ea-5728-404a-95fb-38793b064167\") " Sep 29 10:30:20 crc kubenswrapper[4991]: I0929 10:30:20.540765 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4ad28ea-5728-404a-95fb-38793b064167-kube-api-access-cnqcm" (OuterVolumeSpecName: "kube-api-access-cnqcm") pod "f4ad28ea-5728-404a-95fb-38793b064167" (UID: "f4ad28ea-5728-404a-95fb-38793b064167"). InnerVolumeSpecName "kube-api-access-cnqcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:30:20 crc kubenswrapper[4991]: I0929 10:30:20.569532 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f4ad28ea-5728-404a-95fb-38793b064167" (UID: "f4ad28ea-5728-404a-95fb-38793b064167"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:30:20 crc kubenswrapper[4991]: I0929 10:30:20.571886 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "f4ad28ea-5728-404a-95fb-38793b064167" (UID: "f4ad28ea-5728-404a-95fb-38793b064167"). InnerVolumeSpecName "logging-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:30:20 crc kubenswrapper[4991]: I0929 10:30:20.572141 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "f4ad28ea-5728-404a-95fb-38793b064167" (UID: "f4ad28ea-5728-404a-95fb-38793b064167"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:30:20 crc kubenswrapper[4991]: I0929 10:30:20.579353 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-inventory" (OuterVolumeSpecName: "inventory") pod "f4ad28ea-5728-404a-95fb-38793b064167" (UID: "f4ad28ea-5728-404a-95fb-38793b064167"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 10:30:20 crc kubenswrapper[4991]: I0929 10:30:20.639002 4991 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Sep 29 10:30:20 crc kubenswrapper[4991]: I0929 10:30:20.639054 4991 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-inventory\") on node \"crc\" DevicePath \"\"" Sep 29 10:30:20 crc kubenswrapper[4991]: I0929 10:30:20.639064 4991 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 29 10:30:20 crc kubenswrapper[4991]: I0929 10:30:20.639073 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnqcm\" (UniqueName: \"kubernetes.io/projected/f4ad28ea-5728-404a-95fb-38793b064167-kube-api-access-cnqcm\") on node \"crc\" DevicePath \"\"" Sep 29 10:30:20 crc kubenswrapper[4991]: I0929 10:30:20.639085 4991 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f4ad28ea-5728-404a-95fb-38793b064167-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Sep 29 10:30:20 crc kubenswrapper[4991]: I0929 10:30:20.800686 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" event={"ID":"f4ad28ea-5728-404a-95fb-38793b064167","Type":"ContainerDied","Data":"36d564376a1430cff5c18494c9d3b7d064f636914349cd311edb57f1e00e634b"} Sep 29 10:30:20 crc kubenswrapper[4991]: I0929 10:30:20.800740 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-l8gw6" Sep 29 10:30:20 crc kubenswrapper[4991]: I0929 10:30:20.800742 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36d564376a1430cff5c18494c9d3b7d064f636914349cd311edb57f1e00e634b" Sep 29 10:30:23 crc kubenswrapper[4991]: I0929 10:30:23.346126 4991 scope.go:117] "RemoveContainer" containerID="87345c181673e5a718adfb85afa66e6c1373ee82fd5a9ef4d6927309fffab928" Sep 29 10:30:26 crc kubenswrapper[4991]: I0929 10:30:26.926240 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:30:26 crc kubenswrapper[4991]: E0929 10:30:26.926902 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:30:39 crc kubenswrapper[4991]: I0929 10:30:39.926452 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:30:39 crc kubenswrapper[4991]: E0929 10:30:39.927217 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:30:54 crc kubenswrapper[4991]: I0929 10:30:54.934098 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:30:54 crc kubenswrapper[4991]: E0929 10:30:54.935229 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:31:08 crc kubenswrapper[4991]: I0929 10:31:08.927232 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:31:08 crc kubenswrapper[4991]: E0929 10:31:08.928172 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:31:20 crc kubenswrapper[4991]: I0929 10:31:20.927102 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:31:20 crc kubenswrapper[4991]: E0929 10:31:20.927985 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:31:34 crc kubenswrapper[4991]: I0929 10:31:34.935758 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:31:34 crc kubenswrapper[4991]: E0929 10:31:34.936770 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:31:50 crc kubenswrapper[4991]: I0929 10:31:50.926432 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:31:50 crc kubenswrapper[4991]: E0929 10:31:50.928089 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:32:05 crc kubenswrapper[4991]: I0929 10:32:05.926573 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:32:05 crc kubenswrapper[4991]: E0929 10:32:05.927475 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:32:16 crc kubenswrapper[4991]: I0929 10:32:16.926925 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:32:16 crc kubenswrapper[4991]: E0929 10:32:16.928027 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:32:18 crc kubenswrapper[4991]: I0929 10:32:18.184213 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g8fjm"] Sep 29 10:32:18 crc kubenswrapper[4991]: E0929 10:32:18.187175 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ad28ea-5728-404a-95fb-38793b064167" containerName="logging-edpm-deployment-openstack-edpm-ipam" Sep 29 10:32:18 crc kubenswrapper[4991]: I0929 10:32:18.187206 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ad28ea-5728-404a-95fb-38793b064167" 
containerName="logging-edpm-deployment-openstack-edpm-ipam" Sep 29 10:32:18 crc kubenswrapper[4991]: E0929 10:32:18.187286 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef3c413b-e822-46d6-b4a9-768a911355a3" containerName="collect-profiles" Sep 29 10:32:18 crc kubenswrapper[4991]: I0929 10:32:18.187295 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef3c413b-e822-46d6-b4a9-768a911355a3" containerName="collect-profiles" Sep 29 10:32:18 crc kubenswrapper[4991]: I0929 10:32:18.187600 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef3c413b-e822-46d6-b4a9-768a911355a3" containerName="collect-profiles" Sep 29 10:32:18 crc kubenswrapper[4991]: I0929 10:32:18.187631 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4ad28ea-5728-404a-95fb-38793b064167" containerName="logging-edpm-deployment-openstack-edpm-ipam" Sep 29 10:32:18 crc kubenswrapper[4991]: I0929 10:32:18.190312 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g8fjm" Sep 29 10:32:18 crc kubenswrapper[4991]: I0929 10:32:18.223182 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g8fjm"] Sep 29 10:32:18 crc kubenswrapper[4991]: I0929 10:32:18.301339 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/877ed0ab-0275-4bff-8d9c-712789b6a849-catalog-content\") pod \"redhat-operators-g8fjm\" (UID: \"877ed0ab-0275-4bff-8d9c-712789b6a849\") " pod="openshift-marketplace/redhat-operators-g8fjm" Sep 29 10:32:18 crc kubenswrapper[4991]: I0929 10:32:18.301420 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmslb\" (UniqueName: \"kubernetes.io/projected/877ed0ab-0275-4bff-8d9c-712789b6a849-kube-api-access-rmslb\") pod \"redhat-operators-g8fjm\" (UID: \"877ed0ab-0275-4bff-8d9c-712789b6a849\") " pod="openshift-marketplace/redhat-operators-g8fjm" Sep 29 10:32:18 crc kubenswrapper[4991]: I0929 10:32:18.301583 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/877ed0ab-0275-4bff-8d9c-712789b6a849-utilities\") pod \"redhat-operators-g8fjm\" (UID: \"877ed0ab-0275-4bff-8d9c-712789b6a849\") " pod="openshift-marketplace/redhat-operators-g8fjm" Sep 29 10:32:18 crc kubenswrapper[4991]: I0929 10:32:18.404447 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/877ed0ab-0275-4bff-8d9c-712789b6a849-utilities\") pod \"redhat-operators-g8fjm\" (UID: \"877ed0ab-0275-4bff-8d9c-712789b6a849\") " pod="openshift-marketplace/redhat-operators-g8fjm" Sep 29 10:32:18 crc kubenswrapper[4991]: I0929 10:32:18.404664 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/877ed0ab-0275-4bff-8d9c-712789b6a849-catalog-content\") pod \"redhat-operators-g8fjm\" (UID: \"877ed0ab-0275-4bff-8d9c-712789b6a849\") " pod="openshift-marketplace/redhat-operators-g8fjm" Sep 29 10:32:18 crc kubenswrapper[4991]: I0929 10:32:18.404882 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmslb\" (UniqueName: \"kubernetes.io/projected/877ed0ab-0275-4bff-8d9c-712789b6a849-kube-api-access-rmslb\") pod 
\"redhat-operators-g8fjm\" (UID: \"877ed0ab-0275-4bff-8d9c-712789b6a849\") " pod="openshift-marketplace/redhat-operators-g8fjm" Sep 29 10:32:18 crc kubenswrapper[4991]: I0929 10:32:18.405154 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/877ed0ab-0275-4bff-8d9c-712789b6a849-utilities\") pod \"redhat-operators-g8fjm\" (UID: \"877ed0ab-0275-4bff-8d9c-712789b6a849\") " pod="openshift-marketplace/redhat-operators-g8fjm" Sep 29 10:32:18 crc kubenswrapper[4991]: I0929 10:32:18.405299 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/877ed0ab-0275-4bff-8d9c-712789b6a849-catalog-content\") pod \"redhat-operators-g8fjm\" (UID: \"877ed0ab-0275-4bff-8d9c-712789b6a849\") " pod="openshift-marketplace/redhat-operators-g8fjm" Sep 29 10:32:18 crc kubenswrapper[4991]: I0929 10:32:18.427194 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmslb\" (UniqueName: \"kubernetes.io/projected/877ed0ab-0275-4bff-8d9c-712789b6a849-kube-api-access-rmslb\") pod \"redhat-operators-g8fjm\" (UID: \"877ed0ab-0275-4bff-8d9c-712789b6a849\") " pod="openshift-marketplace/redhat-operators-g8fjm" Sep 29 10:32:18 crc kubenswrapper[4991]: I0929 10:32:18.511453 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g8fjm" Sep 29 10:32:19 crc kubenswrapper[4991]: I0929 10:32:19.083776 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g8fjm"] Sep 29 10:32:19 crc kubenswrapper[4991]: I0929 10:32:19.189391 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8fjm" event={"ID":"877ed0ab-0275-4bff-8d9c-712789b6a849","Type":"ContainerStarted","Data":"1641935370d8713c69e7396b4e5960600cc43ed3986080b8bef5079c90a5e6d3"} Sep 29 10:32:20 crc kubenswrapper[4991]: I0929 10:32:20.203398 4991 generic.go:334] "Generic (PLEG): container finished" podID="877ed0ab-0275-4bff-8d9c-712789b6a849" containerID="19b52662ee8f7543d85f1120c33238493e867c14c1e5bb37fc4ce4bc23e5d262" exitCode=0 Sep 29 10:32:20 crc kubenswrapper[4991]: I0929 10:32:20.203437 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8fjm" event={"ID":"877ed0ab-0275-4bff-8d9c-712789b6a849","Type":"ContainerDied","Data":"19b52662ee8f7543d85f1120c33238493e867c14c1e5bb37fc4ce4bc23e5d262"} Sep 29 10:32:20 crc kubenswrapper[4991]: I0929 10:32:20.205909 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 29 10:32:21 crc kubenswrapper[4991]: I0929 10:32:21.217085 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8fjm" event={"ID":"877ed0ab-0275-4bff-8d9c-712789b6a849","Type":"ContainerStarted","Data":"7d46f80cf31a1c943e01e5fe472029e4dc73a0916cce6452379d1a434afd9569"} Sep 29 10:32:26 crc kubenswrapper[4991]: I0929 10:32:26.264856 4991 generic.go:334] "Generic (PLEG): container finished" podID="877ed0ab-0275-4bff-8d9c-712789b6a849" containerID="7d46f80cf31a1c943e01e5fe472029e4dc73a0916cce6452379d1a434afd9569" exitCode=0 Sep 29 10:32:26 crc kubenswrapper[4991]: I0929 10:32:26.264935 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8fjm" 
event={"ID":"877ed0ab-0275-4bff-8d9c-712789b6a849","Type":"ContainerDied","Data":"7d46f80cf31a1c943e01e5fe472029e4dc73a0916cce6452379d1a434afd9569"} Sep 29 10:32:27 crc kubenswrapper[4991]: I0929 10:32:27.280875 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8fjm" event={"ID":"877ed0ab-0275-4bff-8d9c-712789b6a849","Type":"ContainerStarted","Data":"3ae7dee568c6177b312af8381cf490dde961ef7f8ed68e733f35cfabbaccf965"} Sep 29 10:32:27 crc kubenswrapper[4991]: I0929 10:32:27.308729 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g8fjm" podStartSLOduration=2.7651847529999998 podStartE2EDuration="9.308707189s" podCreationTimestamp="2025-09-29 10:32:18 +0000 UTC" firstStartedPulling="2025-09-29 10:32:20.205710876 +0000 UTC m=+3276.061638904" lastFinishedPulling="2025-09-29 10:32:26.749233312 +0000 UTC m=+3282.605161340" observedRunningTime="2025-09-29 10:32:27.300219686 +0000 UTC m=+3283.156147754" watchObservedRunningTime="2025-09-29 10:32:27.308707189 +0000 UTC m=+3283.164635217" Sep 29 10:32:28 crc kubenswrapper[4991]: I0929 10:32:28.511618 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g8fjm" Sep 29 10:32:28 crc kubenswrapper[4991]: I0929 10:32:28.511672 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g8fjm" Sep 29 10:32:29 crc kubenswrapper[4991]: I0929 10:32:29.563183 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g8fjm" podUID="877ed0ab-0275-4bff-8d9c-712789b6a849" containerName="registry-server" probeResult="failure" output=< Sep 29 10:32:29 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s Sep 29 10:32:29 crc kubenswrapper[4991]: > Sep 29 10:32:29 crc kubenswrapper[4991]: I0929 10:32:29.926495 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:32:29 crc kubenswrapper[4991]: E0929 10:32:29.926750 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:32:39 crc kubenswrapper[4991]: I0929 10:32:39.587633 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g8fjm" podUID="877ed0ab-0275-4bff-8d9c-712789b6a849" containerName="registry-server" probeResult="failure" output=< Sep 29 10:32:39 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s Sep 29 10:32:39 crc kubenswrapper[4991]: > Sep 29 10:32:40 crc kubenswrapper[4991]: I0929 10:32:40.928645 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:32:40 crc kubenswrapper[4991]: E0929 10:32:40.929245 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:32:49 crc kubenswrapper[4991]: I0929 10:32:49.572774 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g8fjm" podUID="877ed0ab-0275-4bff-8d9c-712789b6a849" containerName="registry-server" probeResult="failure" output=< Sep 29 10:32:49 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s Sep 29 10:32:49 crc kubenswrapper[4991]: > Sep 29 10:32:54 crc kubenswrapper[4991]: I0929 10:32:54.940592 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:32:54 crc kubenswrapper[4991]: E0929 10:32:54.941445 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:32:58 crc kubenswrapper[4991]: I0929 10:32:58.574001 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g8fjm" Sep 29 10:32:58 crc kubenswrapper[4991]: I0929 10:32:58.633964 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g8fjm" Sep 29 10:32:58 crc kubenswrapper[4991]: I0929 10:32:58.810552 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g8fjm"] Sep 29 10:32:59 crc kubenswrapper[4991]: I0929 10:32:59.700827 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g8fjm" podUID="877ed0ab-0275-4bff-8d9c-712789b6a849" containerName="registry-server" containerID="cri-o://3ae7dee568c6177b312af8381cf490dde961ef7f8ed68e733f35cfabbaccf965" gracePeriod=2 Sep 29 10:33:00 crc kubenswrapper[4991]: I0929 10:33:00.714018 4991 generic.go:334] "Generic (PLEG): container finished" podID="877ed0ab-0275-4bff-8d9c-712789b6a849" containerID="3ae7dee568c6177b312af8381cf490dde961ef7f8ed68e733f35cfabbaccf965" exitCode=0 Sep 29 10:33:00 crc kubenswrapper[4991]: I0929 10:33:00.714343 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8fjm" event={"ID":"877ed0ab-0275-4bff-8d9c-712789b6a849","Type":"ContainerDied","Data":"3ae7dee568c6177b312af8381cf490dde961ef7f8ed68e733f35cfabbaccf965"} Sep 29 10:33:00 crc kubenswrapper[4991]: I0929 10:33:00.714374 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8fjm" event={"ID":"877ed0ab-0275-4bff-8d9c-712789b6a849","Type":"ContainerDied","Data":"1641935370d8713c69e7396b4e5960600cc43ed3986080b8bef5079c90a5e6d3"} Sep 29 10:33:00 crc kubenswrapper[4991]: I0929 10:33:00.714384 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1641935370d8713c69e7396b4e5960600cc43ed3986080b8bef5079c90a5e6d3" Sep 29 10:33:00 crc kubenswrapper[4991]: I0929 10:33:00.840889 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g8fjm" Sep 29 10:33:01 crc kubenswrapper[4991]: I0929 10:33:01.024201 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/877ed0ab-0275-4bff-8d9c-712789b6a849-catalog-content\") pod \"877ed0ab-0275-4bff-8d9c-712789b6a849\" (UID: \"877ed0ab-0275-4bff-8d9c-712789b6a849\") " Sep 29 10:33:01 crc kubenswrapper[4991]: I0929 10:33:01.024309 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmslb\" (UniqueName: \"kubernetes.io/projected/877ed0ab-0275-4bff-8d9c-712789b6a849-kube-api-access-rmslb\") pod \"877ed0ab-0275-4bff-8d9c-712789b6a849\" (UID: \"877ed0ab-0275-4bff-8d9c-712789b6a849\") " Sep 29 10:33:01 crc kubenswrapper[4991]: I0929 10:33:01.024376 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/877ed0ab-0275-4bff-8d9c-712789b6a849-utilities\") pod \"877ed0ab-0275-4bff-8d9c-712789b6a849\" (UID: \"877ed0ab-0275-4bff-8d9c-712789b6a849\") " Sep 29 10:33:01 crc kubenswrapper[4991]: I0929 10:33:01.025251 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/877ed0ab-0275-4bff-8d9c-712789b6a849-utilities" (OuterVolumeSpecName: "utilities") pod "877ed0ab-0275-4bff-8d9c-712789b6a849" (UID: "877ed0ab-0275-4bff-8d9c-712789b6a849"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:33:01 crc kubenswrapper[4991]: I0929 10:33:01.026067 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/877ed0ab-0275-4bff-8d9c-712789b6a849-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 10:33:01 crc kubenswrapper[4991]: I0929 10:33:01.034446 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/877ed0ab-0275-4bff-8d9c-712789b6a849-kube-api-access-rmslb" (OuterVolumeSpecName: "kube-api-access-rmslb") pod "877ed0ab-0275-4bff-8d9c-712789b6a849" (UID: "877ed0ab-0275-4bff-8d9c-712789b6a849"). InnerVolumeSpecName "kube-api-access-rmslb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:33:01 crc kubenswrapper[4991]: I0929 10:33:01.129113 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmslb\" (UniqueName: \"kubernetes.io/projected/877ed0ab-0275-4bff-8d9c-712789b6a849-kube-api-access-rmslb\") on node \"crc\" DevicePath \"\"" Sep 29 10:33:01 crc kubenswrapper[4991]: I0929 10:33:01.134556 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/877ed0ab-0275-4bff-8d9c-712789b6a849-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "877ed0ab-0275-4bff-8d9c-712789b6a849" (UID: "877ed0ab-0275-4bff-8d9c-712789b6a849"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:33:01 crc kubenswrapper[4991]: I0929 10:33:01.230892 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/877ed0ab-0275-4bff-8d9c-712789b6a849-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 10:33:01 crc kubenswrapper[4991]: I0929 10:33:01.725858 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g8fjm" Sep 29 10:33:01 crc kubenswrapper[4991]: I0929 10:33:01.765239 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g8fjm"] Sep 29 10:33:01 crc kubenswrapper[4991]: I0929 10:33:01.777588 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g8fjm"] Sep 29 10:33:02 crc kubenswrapper[4991]: I0929 10:33:02.948865 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="877ed0ab-0275-4bff-8d9c-712789b6a849" path="/var/lib/kubelet/pods/877ed0ab-0275-4bff-8d9c-712789b6a849/volumes" Sep 29 10:33:08 crc kubenswrapper[4991]: I0929 10:33:08.926234 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:33:08 crc kubenswrapper[4991]: E0929 10:33:08.927154 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:33:21 crc kubenswrapper[4991]: I0929 10:33:21.926100 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:33:21 crc kubenswrapper[4991]: E0929 10:33:21.926940 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:33:23 crc kubenswrapper[4991]: I0929 10:33:23.475989 4991 scope.go:117] "RemoveContainer" containerID="bd495cf04871c1ae59cf7a0f4014b1e15262d2b313486ec69a8441144ec486d2" Sep 29 10:33:23 crc kubenswrapper[4991]: I0929 10:33:23.505710 4991 scope.go:117] "RemoveContainer" containerID="4df2ac4f40cee39b92d393cefe0d0ee5eb946ed4528982ea28d0074e4f2b69e2" Sep 29 10:33:23 crc kubenswrapper[4991]: I0929 10:33:23.562559 4991 scope.go:117] "RemoveContainer" containerID="f0fae135efb105117ef25f6574f165aafb0f20fb03f0d6e2fac6e90e2fdc7a80" Sep 29 10:33:32 crc kubenswrapper[4991]: I0929 10:33:32.926916 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:33:32 crc kubenswrapper[4991]: E0929 10:33:32.927908 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:33:33 crc kubenswrapper[4991]: I0929 10:33:33.392179 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hx6cp"] Sep 29 10:33:33 crc kubenswrapper[4991]: E0929 10:33:33.392939 4991 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="877ed0ab-0275-4bff-8d9c-712789b6a849" containerName="extract-content" Sep 29 10:33:33 crc kubenswrapper[4991]: I0929 10:33:33.392989 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="877ed0ab-0275-4bff-8d9c-712789b6a849" containerName="extract-content" Sep 29 10:33:33 crc kubenswrapper[4991]: E0929 10:33:33.393050 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="877ed0ab-0275-4bff-8d9c-712789b6a849" containerName="extract-utilities" Sep 29 10:33:33 crc kubenswrapper[4991]: I0929 10:33:33.393060 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="877ed0ab-0275-4bff-8d9c-712789b6a849" containerName="extract-utilities" Sep 29 10:33:33 crc kubenswrapper[4991]: E0929 10:33:33.393080 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="877ed0ab-0275-4bff-8d9c-712789b6a849" containerName="registry-server" Sep 29 10:33:33 crc kubenswrapper[4991]: I0929 10:33:33.393089 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="877ed0ab-0275-4bff-8d9c-712789b6a849" containerName="registry-server" Sep 29 10:33:33 crc kubenswrapper[4991]: I0929 10:33:33.393332 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="877ed0ab-0275-4bff-8d9c-712789b6a849" containerName="registry-server" Sep 29 10:33:33 crc kubenswrapper[4991]: I0929 10:33:33.395158 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hx6cp" Sep 29 10:33:33 crc kubenswrapper[4991]: I0929 10:33:33.423990 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hx6cp"] Sep 29 10:33:33 crc kubenswrapper[4991]: I0929 10:33:33.492769 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f667b168-28f5-4992-be74-8c32879fe9e5-utilities\") pod \"community-operators-hx6cp\" (UID: \"f667b168-28f5-4992-be74-8c32879fe9e5\") " pod="openshift-marketplace/community-operators-hx6cp" Sep 29 10:33:33 crc kubenswrapper[4991]: I0929 10:33:33.492877 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kbh9\" (UniqueName: \"kubernetes.io/projected/f667b168-28f5-4992-be74-8c32879fe9e5-kube-api-access-9kbh9\") pod \"community-operators-hx6cp\" (UID: \"f667b168-28f5-4992-be74-8c32879fe9e5\") " pod="openshift-marketplace/community-operators-hx6cp" Sep 29 10:33:33 crc kubenswrapper[4991]: I0929 10:33:33.492910 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f667b168-28f5-4992-be74-8c32879fe9e5-catalog-content\") pod \"community-operators-hx6cp\" (UID: \"f667b168-28f5-4992-be74-8c32879fe9e5\") " pod="openshift-marketplace/community-operators-hx6cp" Sep 29 10:33:33 crc kubenswrapper[4991]: I0929 10:33:33.595307 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f667b168-28f5-4992-be74-8c32879fe9e5-utilities\") pod \"community-operators-hx6cp\" (UID: \"f667b168-28f5-4992-be74-8c32879fe9e5\") " pod="openshift-marketplace/community-operators-hx6cp" Sep 29 10:33:33 crc kubenswrapper[4991]: I0929 10:33:33.595762 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f667b168-28f5-4992-be74-8c32879fe9e5-utilities\") pod \"community-operators-hx6cp\" (UID: 
\"f667b168-28f5-4992-be74-8c32879fe9e5\") " pod="openshift-marketplace/community-operators-hx6cp" Sep 29 10:33:33 crc kubenswrapper[4991]: I0929 10:33:33.595922 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kbh9\" (UniqueName: \"kubernetes.io/projected/f667b168-28f5-4992-be74-8c32879fe9e5-kube-api-access-9kbh9\") pod \"community-operators-hx6cp\" (UID: \"f667b168-28f5-4992-be74-8c32879fe9e5\") " pod="openshift-marketplace/community-operators-hx6cp" Sep 29 10:33:33 crc kubenswrapper[4991]: I0929 10:33:33.596319 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f667b168-28f5-4992-be74-8c32879fe9e5-catalog-content\") pod \"community-operators-hx6cp\" (UID: \"f667b168-28f5-4992-be74-8c32879fe9e5\") " pod="openshift-marketplace/community-operators-hx6cp" Sep 29 10:33:33 crc kubenswrapper[4991]: I0929 10:33:33.596762 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f667b168-28f5-4992-be74-8c32879fe9e5-catalog-content\") pod \"community-operators-hx6cp\" (UID: \"f667b168-28f5-4992-be74-8c32879fe9e5\") " pod="openshift-marketplace/community-operators-hx6cp" Sep 29 10:33:33 crc kubenswrapper[4991]: I0929 10:33:33.621969 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kbh9\" (UniqueName: \"kubernetes.io/projected/f667b168-28f5-4992-be74-8c32879fe9e5-kube-api-access-9kbh9\") pod \"community-operators-hx6cp\" (UID: \"f667b168-28f5-4992-be74-8c32879fe9e5\") " pod="openshift-marketplace/community-operators-hx6cp" Sep 29 10:33:33 crc kubenswrapper[4991]: I0929 10:33:33.722539 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hx6cp" Sep 29 10:33:34 crc kubenswrapper[4991]: I0929 10:33:34.408915 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hx6cp"] Sep 29 10:33:35 crc kubenswrapper[4991]: I0929 10:33:35.110228 4991 generic.go:334] "Generic (PLEG): container finished" podID="f667b168-28f5-4992-be74-8c32879fe9e5" containerID="2ee591182178427457562457d73e2327836e4bfd9c4361006b4fcc9101d324d5" exitCode=0 Sep 29 10:33:35 crc kubenswrapper[4991]: I0929 10:33:35.110331 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hx6cp" event={"ID":"f667b168-28f5-4992-be74-8c32879fe9e5","Type":"ContainerDied","Data":"2ee591182178427457562457d73e2327836e4bfd9c4361006b4fcc9101d324d5"} Sep 29 10:33:35 crc kubenswrapper[4991]: I0929 10:33:35.110589 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hx6cp" event={"ID":"f667b168-28f5-4992-be74-8c32879fe9e5","Type":"ContainerStarted","Data":"9b8cf7379e6917c78c4b35c66ad3f0a041c63ca8ab40c0dde58db30132812670"} Sep 29 10:33:37 crc kubenswrapper[4991]: I0929 10:33:37.133806 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hx6cp" event={"ID":"f667b168-28f5-4992-be74-8c32879fe9e5","Type":"ContainerStarted","Data":"91f73c296b928f0b8a40902d3e1c584584d9766be9c637f0dadb90334afaf022"} Sep 29 10:33:43 crc kubenswrapper[4991]: I0929 10:33:43.926622 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:33:43 crc kubenswrapper[4991]: E0929 10:33:43.927486 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:33:48 crc kubenswrapper[4991]: I0929 10:33:48.252262 4991 generic.go:334] "Generic (PLEG): container finished" podID="f667b168-28f5-4992-be74-8c32879fe9e5" containerID="91f73c296b928f0b8a40902d3e1c584584d9766be9c637f0dadb90334afaf022" exitCode=0 Sep 29 10:33:48 crc kubenswrapper[4991]: I0929 10:33:48.252348 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hx6cp" event={"ID":"f667b168-28f5-4992-be74-8c32879fe9e5","Type":"ContainerDied","Data":"91f73c296b928f0b8a40902d3e1c584584d9766be9c637f0dadb90334afaf022"} Sep 29 10:33:50 crc kubenswrapper[4991]: I0929 10:33:50.275983 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hx6cp" event={"ID":"f667b168-28f5-4992-be74-8c32879fe9e5","Type":"ContainerStarted","Data":"6fe3ea5a87292d551934059ee5cfe2d990c0a9070445f893a85e144877c0601d"} Sep 29 10:33:51 crc kubenswrapper[4991]: I0929 10:33:51.376146 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hx6cp" podStartSLOduration=3.647287316 podStartE2EDuration="18.37611771s" podCreationTimestamp="2025-09-29 10:33:33 +0000 UTC" firstStartedPulling="2025-09-29 10:33:35.112299205 +0000 UTC m=+3350.968227243" lastFinishedPulling="2025-09-29 10:33:49.841129609 +0000 UTC m=+3365.697057637" observedRunningTime="2025-09-29 
10:33:51.366154158 +0000 UTC m=+3367.222082186" watchObservedRunningTime="2025-09-29 10:33:51.37611771 +0000 UTC m=+3367.232045738" Sep 29 10:33:53 crc kubenswrapper[4991]: I0929 10:33:53.722855 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hx6cp" Sep 29 10:33:53 crc kubenswrapper[4991]: I0929 10:33:53.724676 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hx6cp" Sep 29 10:33:54 crc kubenswrapper[4991]: I0929 10:33:54.778672 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hx6cp" podUID="f667b168-28f5-4992-be74-8c32879fe9e5" containerName="registry-server" probeResult="failure" output=< Sep 29 10:33:54 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s Sep 29 10:33:54 crc kubenswrapper[4991]: > Sep 29 10:33:55 crc kubenswrapper[4991]: I0929 10:33:55.927288 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:33:55 crc kubenswrapper[4991]: E0929 10:33:55.928056 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:34:03 crc kubenswrapper[4991]: I0929 10:34:03.777473 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hx6cp" Sep 29 10:34:03 crc kubenswrapper[4991]: I0929 10:34:03.833381 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hx6cp" Sep 29 10:34:04 crc kubenswrapper[4991]: I0929 10:34:04.595166 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hx6cp"] Sep 29 10:34:05 crc kubenswrapper[4991]: I0929 10:34:05.501370 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hx6cp" podUID="f667b168-28f5-4992-be74-8c32879fe9e5" containerName="registry-server" containerID="cri-o://6fe3ea5a87292d551934059ee5cfe2d990c0a9070445f893a85e144877c0601d" gracePeriod=2 Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.086810 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hx6cp" Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.285080 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f667b168-28f5-4992-be74-8c32879fe9e5-catalog-content\") pod \"f667b168-28f5-4992-be74-8c32879fe9e5\" (UID: \"f667b168-28f5-4992-be74-8c32879fe9e5\") " Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.285189 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f667b168-28f5-4992-be74-8c32879fe9e5-utilities\") pod \"f667b168-28f5-4992-be74-8c32879fe9e5\" (UID: \"f667b168-28f5-4992-be74-8c32879fe9e5\") " Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.285690 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kbh9\" (UniqueName: \"kubernetes.io/projected/f667b168-28f5-4992-be74-8c32879fe9e5-kube-api-access-9kbh9\") pod \"f667b168-28f5-4992-be74-8c32879fe9e5\" (UID: \"f667b168-28f5-4992-be74-8c32879fe9e5\") " Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.286129 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f667b168-28f5-4992-be74-8c32879fe9e5-utilities" (OuterVolumeSpecName: "utilities") pod "f667b168-28f5-4992-be74-8c32879fe9e5" (UID: "f667b168-28f5-4992-be74-8c32879fe9e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.286737 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f667b168-28f5-4992-be74-8c32879fe9e5-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.326414 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f667b168-28f5-4992-be74-8c32879fe9e5-kube-api-access-9kbh9" (OuterVolumeSpecName: "kube-api-access-9kbh9") pod "f667b168-28f5-4992-be74-8c32879fe9e5" (UID: "f667b168-28f5-4992-be74-8c32879fe9e5"). InnerVolumeSpecName "kube-api-access-9kbh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.376121 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f667b168-28f5-4992-be74-8c32879fe9e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f667b168-28f5-4992-be74-8c32879fe9e5" (UID: "f667b168-28f5-4992-be74-8c32879fe9e5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.389203 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f667b168-28f5-4992-be74-8c32879fe9e5-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.389241 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kbh9\" (UniqueName: \"kubernetes.io/projected/f667b168-28f5-4992-be74-8c32879fe9e5-kube-api-access-9kbh9\") on node \"crc\" DevicePath \"\"" Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.515165 4991 generic.go:334] "Generic (PLEG): container finished" podID="f667b168-28f5-4992-be74-8c32879fe9e5" containerID="6fe3ea5a87292d551934059ee5cfe2d990c0a9070445f893a85e144877c0601d" exitCode=0 Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.515217 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hx6cp" event={"ID":"f667b168-28f5-4992-be74-8c32879fe9e5","Type":"ContainerDied","Data":"6fe3ea5a87292d551934059ee5cfe2d990c0a9070445f893a85e144877c0601d"} Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.515253 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hx6cp" event={"ID":"f667b168-28f5-4992-be74-8c32879fe9e5","Type":"ContainerDied","Data":"9b8cf7379e6917c78c4b35c66ad3f0a041c63ca8ab40c0dde58db30132812670"} Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.515278 4991 scope.go:117] "RemoveContainer" containerID="6fe3ea5a87292d551934059ee5cfe2d990c0a9070445f893a85e144877c0601d" Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.515285 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hx6cp" Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.555167 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hx6cp"] Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.558832 4991 scope.go:117] "RemoveContainer" containerID="91f73c296b928f0b8a40902d3e1c584584d9766be9c637f0dadb90334afaf022" Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.565651 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hx6cp"] Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.600965 4991 scope.go:117] "RemoveContainer" containerID="2ee591182178427457562457d73e2327836e4bfd9c4361006b4fcc9101d324d5" Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.642874 4991 scope.go:117] "RemoveContainer" containerID="6fe3ea5a87292d551934059ee5cfe2d990c0a9070445f893a85e144877c0601d" Sep 29 10:34:06 crc kubenswrapper[4991]: E0929 10:34:06.643396 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fe3ea5a87292d551934059ee5cfe2d990c0a9070445f893a85e144877c0601d\": container with ID starting with 6fe3ea5a87292d551934059ee5cfe2d990c0a9070445f893a85e144877c0601d not found: ID does not exist" containerID="6fe3ea5a87292d551934059ee5cfe2d990c0a9070445f893a85e144877c0601d" Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.643482 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fe3ea5a87292d551934059ee5cfe2d990c0a9070445f893a85e144877c0601d"} err="failed to get container status \"6fe3ea5a87292d551934059ee5cfe2d990c0a9070445f893a85e144877c0601d\": rpc error: code = NotFound desc = could not find container \"6fe3ea5a87292d551934059ee5cfe2d990c0a9070445f893a85e144877c0601d\": container with ID starting with 6fe3ea5a87292d551934059ee5cfe2d990c0a9070445f893a85e144877c0601d not found: ID does not exist" Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.643531 4991 scope.go:117] "RemoveContainer" containerID="91f73c296b928f0b8a40902d3e1c584584d9766be9c637f0dadb90334afaf022" Sep 29 10:34:06 crc kubenswrapper[4991]: E0929 10:34:06.643926 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91f73c296b928f0b8a40902d3e1c584584d9766be9c637f0dadb90334afaf022\": container with ID starting with 91f73c296b928f0b8a40902d3e1c584584d9766be9c637f0dadb90334afaf022 not found: ID does not exist" containerID="91f73c296b928f0b8a40902d3e1c584584d9766be9c637f0dadb90334afaf022" Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.644085 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91f73c296b928f0b8a40902d3e1c584584d9766be9c637f0dadb90334afaf022"} err="failed to get container status \"91f73c296b928f0b8a40902d3e1c584584d9766be9c637f0dadb90334afaf022\": rpc error: code = NotFound desc = could not find container \"91f73c296b928f0b8a40902d3e1c584584d9766be9c637f0dadb90334afaf022\": container with ID starting with 91f73c296b928f0b8a40902d3e1c584584d9766be9c637f0dadb90334afaf022 not found: ID does not exist" Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.644119 4991 scope.go:117] "RemoveContainer" containerID="2ee591182178427457562457d73e2327836e4bfd9c4361006b4fcc9101d324d5" Sep 29 10:34:06 crc kubenswrapper[4991]: E0929 10:34:06.644604 4991 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2ee591182178427457562457d73e2327836e4bfd9c4361006b4fcc9101d324d5\": container with ID starting with 2ee591182178427457562457d73e2327836e4bfd9c4361006b4fcc9101d324d5 not found: ID does not exist" containerID="2ee591182178427457562457d73e2327836e4bfd9c4361006b4fcc9101d324d5" Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.644646 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ee591182178427457562457d73e2327836e4bfd9c4361006b4fcc9101d324d5"} err="failed to get container status \"2ee591182178427457562457d73e2327836e4bfd9c4361006b4fcc9101d324d5\": rpc error: code = NotFound desc = could not find container \"2ee591182178427457562457d73e2327836e4bfd9c4361006b4fcc9101d324d5\": container with ID starting with 2ee591182178427457562457d73e2327836e4bfd9c4361006b4fcc9101d324d5 not found: ID does not exist" Sep 29 10:34:06 crc kubenswrapper[4991]: I0929 10:34:06.941086 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f667b168-28f5-4992-be74-8c32879fe9e5" path="/var/lib/kubelet/pods/f667b168-28f5-4992-be74-8c32879fe9e5/volumes" Sep 29 10:34:07 crc kubenswrapper[4991]: I0929 10:34:07.927755 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:34:07 crc kubenswrapper[4991]: E0929 10:34:07.928231 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:34:21 crc kubenswrapper[4991]: I0929 10:34:21.927488 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:34:21 crc kubenswrapper[4991]: E0929 10:34:21.929817 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:34:36 crc kubenswrapper[4991]: I0929 10:34:36.926437 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:34:36 crc kubenswrapper[4991]: E0929 10:34:36.927528 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:34:51 crc kubenswrapper[4991]: I0929 10:34:51.926396 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:34:53 crc kubenswrapper[4991]: I0929 10:34:53.030547 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"80b608a3e163609bc7a58a850b4396230e970a3460cd6feb0966d7fb57d93098"} Sep 29 10:37:07 crc kubenswrapper[4991]: I0929 10:37:07.947426 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:37:07 crc kubenswrapper[4991]: I0929 10:37:07.948008 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:37:37 crc kubenswrapper[4991]: I0929 10:37:37.947032 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:37:37 crc kubenswrapper[4991]: I0929 10:37:37.947512 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:38:07 crc kubenswrapper[4991]: I0929 10:38:07.947092 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:38:07 crc kubenswrapper[4991]: I0929 10:38:07.947622 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:38:07 crc kubenswrapper[4991]: I0929 10:38:07.947677 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 10:38:07 crc kubenswrapper[4991]: I0929 10:38:07.949798 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"80b608a3e163609bc7a58a850b4396230e970a3460cd6feb0966d7fb57d93098"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 10:38:07 crc kubenswrapper[4991]: I0929 10:38:07.949911 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://80b608a3e163609bc7a58a850b4396230e970a3460cd6feb0966d7fb57d93098" gracePeriod=600 Sep 29 10:38:08 crc kubenswrapper[4991]: I0929 
10:38:08.435936 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="80b608a3e163609bc7a58a850b4396230e970a3460cd6feb0966d7fb57d93098" exitCode=0 Sep 29 10:38:08 crc kubenswrapper[4991]: I0929 10:38:08.436257 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"80b608a3e163609bc7a58a850b4396230e970a3460cd6feb0966d7fb57d93098"} Sep 29 10:38:08 crc kubenswrapper[4991]: I0929 10:38:08.436290 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444"} Sep 29 10:38:08 crc kubenswrapper[4991]: I0929 10:38:08.436308 4991 scope.go:117] "RemoveContainer" containerID="4041d7d7e567ea95f66b5031873847fa665187d295ea821862d74510d1298dda" Sep 29 10:38:23 crc kubenswrapper[4991]: I0929 10:38:23.754675 4991 scope.go:117] "RemoveContainer" containerID="19b52662ee8f7543d85f1120c33238493e867c14c1e5bb37fc4ce4bc23e5d262" Sep 29 10:38:23 crc kubenswrapper[4991]: I0929 10:38:23.801024 4991 scope.go:117] "RemoveContainer" containerID="7d46f80cf31a1c943e01e5fe472029e4dc73a0916cce6452379d1a434afd9569" Sep 29 10:39:23 crc kubenswrapper[4991]: I0929 10:39:23.886355 4991 scope.go:117] "RemoveContainer" containerID="3ae7dee568c6177b312af8381cf490dde961ef7f8ed68e733f35cfabbaccf965" Sep 29 10:39:52 crc kubenswrapper[4991]: I0929 10:39:52.828402 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wwknk"] Sep 29 10:39:52 crc kubenswrapper[4991]: E0929 10:39:52.829413 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f667b168-28f5-4992-be74-8c32879fe9e5" containerName="extract-content" Sep 29 10:39:52 crc kubenswrapper[4991]: I0929 10:39:52.829426 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="f667b168-28f5-4992-be74-8c32879fe9e5" containerName="extract-content" Sep 29 10:39:52 crc kubenswrapper[4991]: E0929 10:39:52.829489 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f667b168-28f5-4992-be74-8c32879fe9e5" containerName="registry-server" Sep 29 10:39:52 crc kubenswrapper[4991]: I0929 10:39:52.829495 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="f667b168-28f5-4992-be74-8c32879fe9e5" containerName="registry-server" Sep 29 10:39:52 crc kubenswrapper[4991]: E0929 10:39:52.829507 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f667b168-28f5-4992-be74-8c32879fe9e5" containerName="extract-utilities" Sep 29 10:39:52 crc kubenswrapper[4991]: I0929 10:39:52.829515 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="f667b168-28f5-4992-be74-8c32879fe9e5" containerName="extract-utilities" Sep 29 10:39:52 crc kubenswrapper[4991]: I0929 10:39:52.829744 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="f667b168-28f5-4992-be74-8c32879fe9e5" containerName="registry-server" Sep 29 10:39:52 crc kubenswrapper[4991]: I0929 10:39:52.831629 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wwknk" Sep 29 10:39:52 crc kubenswrapper[4991]: I0929 10:39:52.841619 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwknk"] Sep 29 10:39:52 crc kubenswrapper[4991]: I0929 10:39:52.859802 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea440dc-ff5a-4821-a740-08a69b474017-utilities\") pod \"redhat-marketplace-wwknk\" (UID: \"aea440dc-ff5a-4821-a740-08a69b474017\") " pod="openshift-marketplace/redhat-marketplace-wwknk" Sep 29 10:39:52 crc kubenswrapper[4991]: I0929 10:39:52.860320 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea440dc-ff5a-4821-a740-08a69b474017-catalog-content\") pod \"redhat-marketplace-wwknk\" (UID: \"aea440dc-ff5a-4821-a740-08a69b474017\") " pod="openshift-marketplace/redhat-marketplace-wwknk" Sep 29 10:39:52 crc kubenswrapper[4991]: I0929 10:39:52.860470 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwtph\" (UniqueName: \"kubernetes.io/projected/aea440dc-ff5a-4821-a740-08a69b474017-kube-api-access-kwtph\") pod \"redhat-marketplace-wwknk\" (UID: \"aea440dc-ff5a-4821-a740-08a69b474017\") " pod="openshift-marketplace/redhat-marketplace-wwknk" Sep 29 10:39:52 crc kubenswrapper[4991]: I0929 10:39:52.962168 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea440dc-ff5a-4821-a740-08a69b474017-catalog-content\") pod \"redhat-marketplace-wwknk\" (UID: \"aea440dc-ff5a-4821-a740-08a69b474017\") " pod="openshift-marketplace/redhat-marketplace-wwknk" Sep 29 10:39:52 crc kubenswrapper[4991]: I0929 10:39:52.962261 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwtph\" (UniqueName: \"kubernetes.io/projected/aea440dc-ff5a-4821-a740-08a69b474017-kube-api-access-kwtph\") pod \"redhat-marketplace-wwknk\" (UID: \"aea440dc-ff5a-4821-a740-08a69b474017\") " pod="openshift-marketplace/redhat-marketplace-wwknk" Sep 29 10:39:52 crc kubenswrapper[4991]: I0929 10:39:52.962340 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea440dc-ff5a-4821-a740-08a69b474017-utilities\") pod \"redhat-marketplace-wwknk\" (UID: \"aea440dc-ff5a-4821-a740-08a69b474017\") " pod="openshift-marketplace/redhat-marketplace-wwknk" Sep 29 10:39:52 crc kubenswrapper[4991]: I0929 10:39:52.962840 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea440dc-ff5a-4821-a740-08a69b474017-catalog-content\") pod \"redhat-marketplace-wwknk\" (UID: \"aea440dc-ff5a-4821-a740-08a69b474017\") " pod="openshift-marketplace/redhat-marketplace-wwknk" Sep 29 10:39:52 crc kubenswrapper[4991]: I0929 10:39:52.963408 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea440dc-ff5a-4821-a740-08a69b474017-utilities\") pod \"redhat-marketplace-wwknk\" (UID: \"aea440dc-ff5a-4821-a740-08a69b474017\") " pod="openshift-marketplace/redhat-marketplace-wwknk" Sep 29 10:39:52 crc kubenswrapper[4991]: I0929 10:39:52.984060 4991 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-kwtph\" (UniqueName: \"kubernetes.io/projected/aea440dc-ff5a-4821-a740-08a69b474017-kube-api-access-kwtph\") pod \"redhat-marketplace-wwknk\" (UID: \"aea440dc-ff5a-4821-a740-08a69b474017\") " pod="openshift-marketplace/redhat-marketplace-wwknk" Sep 29 10:39:53 crc kubenswrapper[4991]: I0929 10:39:53.156326 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wwknk" Sep 29 10:39:53 crc kubenswrapper[4991]: I0929 10:39:53.696363 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwknk"] Sep 29 10:39:54 crc kubenswrapper[4991]: I0929 10:39:54.610442 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwknk" event={"ID":"aea440dc-ff5a-4821-a740-08a69b474017","Type":"ContainerDied","Data":"464ef8e69fd1ee621553a3d51dfd1c385298725efe4b07d40b5e1dfeb4c40359"} Sep 29 10:39:54 crc kubenswrapper[4991]: I0929 10:39:54.610277 4991 generic.go:334] "Generic (PLEG): container finished" podID="aea440dc-ff5a-4821-a740-08a69b474017" containerID="464ef8e69fd1ee621553a3d51dfd1c385298725efe4b07d40b5e1dfeb4c40359" exitCode=0 Sep 29 10:39:54 crc kubenswrapper[4991]: I0929 10:39:54.610742 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwknk" event={"ID":"aea440dc-ff5a-4821-a740-08a69b474017","Type":"ContainerStarted","Data":"591d41a0f5788ae51d04a9f1d7ea256b0d03abafda7737ae111d30a3e902212e"} Sep 29 10:39:54 crc kubenswrapper[4991]: I0929 10:39:54.612077 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 29 10:39:55 crc kubenswrapper[4991]: I0929 10:39:55.626014 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwknk" event={"ID":"aea440dc-ff5a-4821-a740-08a69b474017","Type":"ContainerStarted","Data":"69c87c51406e73cb69e91888bdc29b40286f0447ce601c6cdefcdd2e467c49f7"} Sep 29 10:39:56 crc kubenswrapper[4991]: I0929 10:39:56.676684 4991 generic.go:334] "Generic (PLEG): container finished" podID="aea440dc-ff5a-4821-a740-08a69b474017" containerID="69c87c51406e73cb69e91888bdc29b40286f0447ce601c6cdefcdd2e467c49f7" exitCode=0 Sep 29 10:39:56 crc kubenswrapper[4991]: I0929 10:39:56.676737 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwknk" event={"ID":"aea440dc-ff5a-4821-a740-08a69b474017","Type":"ContainerDied","Data":"69c87c51406e73cb69e91888bdc29b40286f0447ce601c6cdefcdd2e467c49f7"} Sep 29 10:39:57 crc kubenswrapper[4991]: I0929 10:39:57.690232 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwknk" event={"ID":"aea440dc-ff5a-4821-a740-08a69b474017","Type":"ContainerStarted","Data":"88305f39dafeac00cac08e5f867446a5e61c20ba250cf96da08e1481e1b427ff"} Sep 29 10:39:57 crc kubenswrapper[4991]: I0929 10:39:57.717978 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wwknk" podStartSLOduration=3.254183159 podStartE2EDuration="5.717940074s" podCreationTimestamp="2025-09-29 10:39:52 +0000 UTC" firstStartedPulling="2025-09-29 10:39:54.611823856 +0000 UTC m=+3730.467751874" lastFinishedPulling="2025-09-29 10:39:57.075580761 +0000 UTC m=+3732.931508789" observedRunningTime="2025-09-29 10:39:57.710231202 +0000 UTC m=+3733.566159240" watchObservedRunningTime="2025-09-29 10:39:57.717940074 +0000 UTC 
m=+3733.573868102" Sep 29 10:40:03 crc kubenswrapper[4991]: I0929 10:40:03.157486 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wwknk" Sep 29 10:40:03 crc kubenswrapper[4991]: I0929 10:40:03.158075 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wwknk" Sep 29 10:40:03 crc kubenswrapper[4991]: I0929 10:40:03.229218 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wwknk" Sep 29 10:40:03 crc kubenswrapper[4991]: I0929 10:40:03.807244 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wwknk" Sep 29 10:40:03 crc kubenswrapper[4991]: I0929 10:40:03.861738 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwknk"] Sep 29 10:40:05 crc kubenswrapper[4991]: I0929 10:40:05.783544 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wwknk" podUID="aea440dc-ff5a-4821-a740-08a69b474017" containerName="registry-server" containerID="cri-o://88305f39dafeac00cac08e5f867446a5e61c20ba250cf96da08e1481e1b427ff" gracePeriod=2 Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.320389 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wwknk" Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.411376 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea440dc-ff5a-4821-a740-08a69b474017-catalog-content\") pod \"aea440dc-ff5a-4821-a740-08a69b474017\" (UID: \"aea440dc-ff5a-4821-a740-08a69b474017\") " Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.411656 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea440dc-ff5a-4821-a740-08a69b474017-utilities\") pod \"aea440dc-ff5a-4821-a740-08a69b474017\" (UID: \"aea440dc-ff5a-4821-a740-08a69b474017\") " Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.411830 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwtph\" (UniqueName: \"kubernetes.io/projected/aea440dc-ff5a-4821-a740-08a69b474017-kube-api-access-kwtph\") pod \"aea440dc-ff5a-4821-a740-08a69b474017\" (UID: \"aea440dc-ff5a-4821-a740-08a69b474017\") " Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.412611 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aea440dc-ff5a-4821-a740-08a69b474017-utilities" (OuterVolumeSpecName: "utilities") pod "aea440dc-ff5a-4821-a740-08a69b474017" (UID: "aea440dc-ff5a-4821-a740-08a69b474017"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.417851 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aea440dc-ff5a-4821-a740-08a69b474017-kube-api-access-kwtph" (OuterVolumeSpecName: "kube-api-access-kwtph") pod "aea440dc-ff5a-4821-a740-08a69b474017" (UID: "aea440dc-ff5a-4821-a740-08a69b474017"). InnerVolumeSpecName "kube-api-access-kwtph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.430149 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aea440dc-ff5a-4821-a740-08a69b474017-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aea440dc-ff5a-4821-a740-08a69b474017" (UID: "aea440dc-ff5a-4821-a740-08a69b474017"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.515590 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwtph\" (UniqueName: \"kubernetes.io/projected/aea440dc-ff5a-4821-a740-08a69b474017-kube-api-access-kwtph\") on node \"crc\" DevicePath \"\"" Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.515656 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aea440dc-ff5a-4821-a740-08a69b474017-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.515670 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aea440dc-ff5a-4821-a740-08a69b474017-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.809420 4991 generic.go:334] "Generic (PLEG): container finished" podID="aea440dc-ff5a-4821-a740-08a69b474017" containerID="88305f39dafeac00cac08e5f867446a5e61c20ba250cf96da08e1481e1b427ff" exitCode=0 Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.809506 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wwknk" Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.809506 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwknk" event={"ID":"aea440dc-ff5a-4821-a740-08a69b474017","Type":"ContainerDied","Data":"88305f39dafeac00cac08e5f867446a5e61c20ba250cf96da08e1481e1b427ff"} Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.810136 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwknk" event={"ID":"aea440dc-ff5a-4821-a740-08a69b474017","Type":"ContainerDied","Data":"591d41a0f5788ae51d04a9f1d7ea256b0d03abafda7737ae111d30a3e902212e"} Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.810157 4991 scope.go:117] "RemoveContainer" containerID="88305f39dafeac00cac08e5f867446a5e61c20ba250cf96da08e1481e1b427ff" Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.854144 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwknk"] Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.856214 4991 scope.go:117] "RemoveContainer" containerID="69c87c51406e73cb69e91888bdc29b40286f0447ce601c6cdefcdd2e467c49f7" Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.864915 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwknk"] Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.894205 4991 scope.go:117] "RemoveContainer" containerID="464ef8e69fd1ee621553a3d51dfd1c385298725efe4b07d40b5e1dfeb4c40359" Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.942305 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aea440dc-ff5a-4821-a740-08a69b474017" path="/var/lib/kubelet/pods/aea440dc-ff5a-4821-a740-08a69b474017/volumes" Sep 29 10:40:06 
crc kubenswrapper[4991]: I0929 10:40:06.950515 4991 scope.go:117] "RemoveContainer" containerID="88305f39dafeac00cac08e5f867446a5e61c20ba250cf96da08e1481e1b427ff" Sep 29 10:40:06 crc kubenswrapper[4991]: E0929 10:40:06.951068 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88305f39dafeac00cac08e5f867446a5e61c20ba250cf96da08e1481e1b427ff\": container with ID starting with 88305f39dafeac00cac08e5f867446a5e61c20ba250cf96da08e1481e1b427ff not found: ID does not exist" containerID="88305f39dafeac00cac08e5f867446a5e61c20ba250cf96da08e1481e1b427ff" Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.951103 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88305f39dafeac00cac08e5f867446a5e61c20ba250cf96da08e1481e1b427ff"} err="failed to get container status \"88305f39dafeac00cac08e5f867446a5e61c20ba250cf96da08e1481e1b427ff\": rpc error: code = NotFound desc = could not find container \"88305f39dafeac00cac08e5f867446a5e61c20ba250cf96da08e1481e1b427ff\": container with ID starting with 88305f39dafeac00cac08e5f867446a5e61c20ba250cf96da08e1481e1b427ff not found: ID does not exist" Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.951126 4991 scope.go:117] "RemoveContainer" containerID="69c87c51406e73cb69e91888bdc29b40286f0447ce601c6cdefcdd2e467c49f7" Sep 29 10:40:06 crc kubenswrapper[4991]: E0929 10:40:06.951461 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69c87c51406e73cb69e91888bdc29b40286f0447ce601c6cdefcdd2e467c49f7\": container with ID starting with 69c87c51406e73cb69e91888bdc29b40286f0447ce601c6cdefcdd2e467c49f7 not found: ID does not exist" containerID="69c87c51406e73cb69e91888bdc29b40286f0447ce601c6cdefcdd2e467c49f7" Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.951505 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69c87c51406e73cb69e91888bdc29b40286f0447ce601c6cdefcdd2e467c49f7"} err="failed to get container status \"69c87c51406e73cb69e91888bdc29b40286f0447ce601c6cdefcdd2e467c49f7\": rpc error: code = NotFound desc = could not find container \"69c87c51406e73cb69e91888bdc29b40286f0447ce601c6cdefcdd2e467c49f7\": container with ID starting with 69c87c51406e73cb69e91888bdc29b40286f0447ce601c6cdefcdd2e467c49f7 not found: ID does not exist" Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.951532 4991 scope.go:117] "RemoveContainer" containerID="464ef8e69fd1ee621553a3d51dfd1c385298725efe4b07d40b5e1dfeb4c40359" Sep 29 10:40:06 crc kubenswrapper[4991]: E0929 10:40:06.951861 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"464ef8e69fd1ee621553a3d51dfd1c385298725efe4b07d40b5e1dfeb4c40359\": container with ID starting with 464ef8e69fd1ee621553a3d51dfd1c385298725efe4b07d40b5e1dfeb4c40359 not found: ID does not exist" containerID="464ef8e69fd1ee621553a3d51dfd1c385298725efe4b07d40b5e1dfeb4c40359" Sep 29 10:40:06 crc kubenswrapper[4991]: I0929 10:40:06.951891 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"464ef8e69fd1ee621553a3d51dfd1c385298725efe4b07d40b5e1dfeb4c40359"} err="failed to get container status \"464ef8e69fd1ee621553a3d51dfd1c385298725efe4b07d40b5e1dfeb4c40359\": rpc error: code = NotFound desc = could not find container 
\"464ef8e69fd1ee621553a3d51dfd1c385298725efe4b07d40b5e1dfeb4c40359\": container with ID starting with 464ef8e69fd1ee621553a3d51dfd1c385298725efe4b07d40b5e1dfeb4c40359 not found: ID does not exist" Sep 29 10:40:37 crc kubenswrapper[4991]: I0929 10:40:37.947113 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:40:37 crc kubenswrapper[4991]: I0929 10:40:37.947679 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:41:07 crc kubenswrapper[4991]: I0929 10:41:07.946534 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:41:07 crc kubenswrapper[4991]: I0929 10:41:07.947254 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:41:37 crc kubenswrapper[4991]: I0929 10:41:37.947264 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:41:37 crc kubenswrapper[4991]: I0929 10:41:37.947867 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:41:37 crc kubenswrapper[4991]: I0929 10:41:37.947914 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 10:41:37 crc kubenswrapper[4991]: I0929 10:41:37.949028 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 10:41:37 crc kubenswrapper[4991]: I0929 10:41:37.949100 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444" gracePeriod=600 Sep 29 10:41:38 crc kubenswrapper[4991]: E0929 10:41:38.090212 4991 
Sep 29 10:41:38 crc kubenswrapper[4991]: E0929 10:41:38.090212 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:41:38 crc kubenswrapper[4991]: I0929 10:41:38.882981 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444" exitCode=0 Sep 29 10:41:38 crc kubenswrapper[4991]: I0929 10:41:38.883054 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444"} Sep 29 10:41:38 crc kubenswrapper[4991]: I0929 10:41:38.883391 4991 scope.go:117] "RemoveContainer" containerID="80b608a3e163609bc7a58a850b4396230e970a3460cd6feb0966d7fb57d93098" Sep 29 10:41:38 crc kubenswrapper[4991]: I0929 10:41:38.884791 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444" Sep 29 10:41:38 crc kubenswrapper[4991]: E0929 10:41:38.885402 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:41:52 crc kubenswrapper[4991]: I0929 10:41:52.927117 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444" Sep 29 10:41:52 crc kubenswrapper[4991]: E0929 10:41:52.927805 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:42:02 crc kubenswrapper[4991]: I0929 10:42:02.244029 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d5s59"] Sep 29 10:42:02 crc kubenswrapper[4991]: E0929 10:42:02.245232 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea440dc-ff5a-4821-a740-08a69b474017" containerName="registry-server" Sep 29 10:42:02 crc kubenswrapper[4991]: I0929 10:42:02.245250 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea440dc-ff5a-4821-a740-08a69b474017" containerName="registry-server" Sep 29 10:42:02 crc kubenswrapper[4991]: E0929 10:42:02.245310 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea440dc-ff5a-4821-a740-08a69b474017" containerName="extract-utilities" Sep 29 10:42:02 crc kubenswrapper[4991]: I0929 10:42:02.245319 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea440dc-ff5a-4821-a740-08a69b474017" containerName="extract-utilities" Sep 29 10:42:02 crc kubenswrapper[4991]: E0929
10:42:02.245350 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea440dc-ff5a-4821-a740-08a69b474017" containerName="extract-content" Sep 29 10:42:02 crc kubenswrapper[4991]: I0929 10:42:02.245358 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea440dc-ff5a-4821-a740-08a69b474017" containerName="extract-content" Sep 29 10:42:02 crc kubenswrapper[4991]: I0929 10:42:02.245647 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="aea440dc-ff5a-4821-a740-08a69b474017" containerName="registry-server" Sep 29 10:42:02 crc kubenswrapper[4991]: I0929 10:42:02.247904 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d5s59" Sep 29 10:42:02 crc kubenswrapper[4991]: I0929 10:42:02.313015 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d5s59"] Sep 29 10:42:02 crc kubenswrapper[4991]: I0929 10:42:02.358723 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rhdh\" (UniqueName: \"kubernetes.io/projected/8cda04b4-35b1-4267-babd-81e6d2761bf0-kube-api-access-5rhdh\") pod \"certified-operators-d5s59\" (UID: \"8cda04b4-35b1-4267-babd-81e6d2761bf0\") " pod="openshift-marketplace/certified-operators-d5s59" Sep 29 10:42:02 crc kubenswrapper[4991]: I0929 10:42:02.358877 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cda04b4-35b1-4267-babd-81e6d2761bf0-utilities\") pod \"certified-operators-d5s59\" (UID: \"8cda04b4-35b1-4267-babd-81e6d2761bf0\") " pod="openshift-marketplace/certified-operators-d5s59" Sep 29 10:42:02 crc kubenswrapper[4991]: I0929 10:42:02.358978 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cda04b4-35b1-4267-babd-81e6d2761bf0-catalog-content\") pod \"certified-operators-d5s59\" (UID: \"8cda04b4-35b1-4267-babd-81e6d2761bf0\") " pod="openshift-marketplace/certified-operators-d5s59" Sep 29 10:42:02 crc kubenswrapper[4991]: I0929 10:42:02.463032 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cda04b4-35b1-4267-babd-81e6d2761bf0-utilities\") pod \"certified-operators-d5s59\" (UID: \"8cda04b4-35b1-4267-babd-81e6d2761bf0\") " pod="openshift-marketplace/certified-operators-d5s59" Sep 29 10:42:02 crc kubenswrapper[4991]: I0929 10:42:02.463136 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cda04b4-35b1-4267-babd-81e6d2761bf0-catalog-content\") pod \"certified-operators-d5s59\" (UID: \"8cda04b4-35b1-4267-babd-81e6d2761bf0\") " pod="openshift-marketplace/certified-operators-d5s59" Sep 29 10:42:02 crc kubenswrapper[4991]: I0929 10:42:02.463403 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rhdh\" (UniqueName: \"kubernetes.io/projected/8cda04b4-35b1-4267-babd-81e6d2761bf0-kube-api-access-5rhdh\") pod \"certified-operators-d5s59\" (UID: \"8cda04b4-35b1-4267-babd-81e6d2761bf0\") " pod="openshift-marketplace/certified-operators-d5s59" Sep 29 10:42:02 crc kubenswrapper[4991]: I0929 10:42:02.464228 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8cda04b4-35b1-4267-babd-81e6d2761bf0-utilities\") pod \"certified-operators-d5s59\" (UID: \"8cda04b4-35b1-4267-babd-81e6d2761bf0\") " pod="openshift-marketplace/certified-operators-d5s59" Sep 29 10:42:02 crc kubenswrapper[4991]: I0929 10:42:02.464991 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cda04b4-35b1-4267-babd-81e6d2761bf0-catalog-content\") pod \"certified-operators-d5s59\" (UID: \"8cda04b4-35b1-4267-babd-81e6d2761bf0\") " pod="openshift-marketplace/certified-operators-d5s59" Sep 29 10:42:02 crc kubenswrapper[4991]: I0929 10:42:02.517163 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rhdh\" (UniqueName: \"kubernetes.io/projected/8cda04b4-35b1-4267-babd-81e6d2761bf0-kube-api-access-5rhdh\") pod \"certified-operators-d5s59\" (UID: \"8cda04b4-35b1-4267-babd-81e6d2761bf0\") " pod="openshift-marketplace/certified-operators-d5s59" Sep 29 10:42:02 crc kubenswrapper[4991]: I0929 10:42:02.629337 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d5s59" Sep 29 10:42:03 crc kubenswrapper[4991]: I0929 10:42:03.214758 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d5s59"] Sep 29 10:42:04 crc kubenswrapper[4991]: I0929 10:42:04.167917 4991 generic.go:334] "Generic (PLEG): container finished" podID="8cda04b4-35b1-4267-babd-81e6d2761bf0" containerID="73ccc0bb980a5fcdb2de663d01d28100aafe79891086f8db97bea5f52b14f87f" exitCode=0 Sep 29 10:42:04 crc kubenswrapper[4991]: I0929 10:42:04.167981 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5s59" event={"ID":"8cda04b4-35b1-4267-babd-81e6d2761bf0","Type":"ContainerDied","Data":"73ccc0bb980a5fcdb2de663d01d28100aafe79891086f8db97bea5f52b14f87f"} Sep 29 10:42:04 crc kubenswrapper[4991]: I0929 10:42:04.168240 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5s59" event={"ID":"8cda04b4-35b1-4267-babd-81e6d2761bf0","Type":"ContainerStarted","Data":"9bce651c2eaf9c705ff78b5648751ed4270364dee07742e51cf3811f2b681f31"} Sep 29 10:42:06 crc kubenswrapper[4991]: I0929 10:42:06.194562 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5s59" event={"ID":"8cda04b4-35b1-4267-babd-81e6d2761bf0","Type":"ContainerStarted","Data":"75d3f37140eae5935381b86d2803a73c261c656dc73183432051e149f95ad7ee"} Sep 29 10:42:06 crc kubenswrapper[4991]: I0929 10:42:06.927008 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444" Sep 29 10:42:06 crc kubenswrapper[4991]: E0929 10:42:06.927358 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:42:09 crc kubenswrapper[4991]: I0929 10:42:09.235533 4991 generic.go:334] "Generic (PLEG): container finished" podID="8cda04b4-35b1-4267-babd-81e6d2761bf0" containerID="75d3f37140eae5935381b86d2803a73c261c656dc73183432051e149f95ad7ee" exitCode=0 Sep 29 10:42:09 crc 
kubenswrapper[4991]: I0929 10:42:09.235591 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5s59" event={"ID":"8cda04b4-35b1-4267-babd-81e6d2761bf0","Type":"ContainerDied","Data":"75d3f37140eae5935381b86d2803a73c261c656dc73183432051e149f95ad7ee"} Sep 29 10:42:10 crc kubenswrapper[4991]: I0929 10:42:10.250972 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5s59" event={"ID":"8cda04b4-35b1-4267-babd-81e6d2761bf0","Type":"ContainerStarted","Data":"e74790306d20ceb13bc0722d5ec57f0afc0339748923a5f4db72e1f650f1c571"} Sep 29 10:42:10 crc kubenswrapper[4991]: I0929 10:42:10.294928 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d5s59" podStartSLOduration=2.7893442349999997 podStartE2EDuration="8.294897603s" podCreationTimestamp="2025-09-29 10:42:02 +0000 UTC" firstStartedPulling="2025-09-29 10:42:04.171081853 +0000 UTC m=+3860.027009881" lastFinishedPulling="2025-09-29 10:42:09.676635171 +0000 UTC m=+3865.532563249" observedRunningTime="2025-09-29 10:42:10.278569046 +0000 UTC m=+3866.134497094" watchObservedRunningTime="2025-09-29 10:42:10.294897603 +0000 UTC m=+3866.150825641" Sep 29 10:42:12 crc kubenswrapper[4991]: I0929 10:42:12.630125 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d5s59" Sep 29 10:42:12 crc kubenswrapper[4991]: I0929 10:42:12.630733 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d5s59" Sep 29 10:42:12 crc kubenswrapper[4991]: I0929 10:42:12.700355 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d5s59" Sep 29 10:42:19 crc kubenswrapper[4991]: I0929 10:42:19.926896 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444" Sep 29 10:42:19 crc kubenswrapper[4991]: E0929 10:42:19.927753 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:42:22 crc kubenswrapper[4991]: I0929 10:42:22.690910 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d5s59" Sep 29 10:42:22 crc kubenswrapper[4991]: I0929 10:42:22.749046 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d5s59"] Sep 29 10:42:23 crc kubenswrapper[4991]: I0929 10:42:23.414059 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d5s59" podUID="8cda04b4-35b1-4267-babd-81e6d2761bf0" containerName="registry-server" containerID="cri-o://e74790306d20ceb13bc0722d5ec57f0afc0339748923a5f4db72e1f650f1c571" gracePeriod=2 Sep 29 10:42:23 crc kubenswrapper[4991]: I0929 10:42:23.942374 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d5s59" Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.056701 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cda04b4-35b1-4267-babd-81e6d2761bf0-catalog-content\") pod \"8cda04b4-35b1-4267-babd-81e6d2761bf0\" (UID: \"8cda04b4-35b1-4267-babd-81e6d2761bf0\") " Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.056758 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rhdh\" (UniqueName: \"kubernetes.io/projected/8cda04b4-35b1-4267-babd-81e6d2761bf0-kube-api-access-5rhdh\") pod \"8cda04b4-35b1-4267-babd-81e6d2761bf0\" (UID: \"8cda04b4-35b1-4267-babd-81e6d2761bf0\") " Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.056856 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cda04b4-35b1-4267-babd-81e6d2761bf0-utilities\") pod \"8cda04b4-35b1-4267-babd-81e6d2761bf0\" (UID: \"8cda04b4-35b1-4267-babd-81e6d2761bf0\") " Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.057837 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cda04b4-35b1-4267-babd-81e6d2761bf0-utilities" (OuterVolumeSpecName: "utilities") pod "8cda04b4-35b1-4267-babd-81e6d2761bf0" (UID: "8cda04b4-35b1-4267-babd-81e6d2761bf0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.071371 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cda04b4-35b1-4267-babd-81e6d2761bf0-kube-api-access-5rhdh" (OuterVolumeSpecName: "kube-api-access-5rhdh") pod "8cda04b4-35b1-4267-babd-81e6d2761bf0" (UID: "8cda04b4-35b1-4267-babd-81e6d2761bf0"). InnerVolumeSpecName "kube-api-access-5rhdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.101502 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cda04b4-35b1-4267-babd-81e6d2761bf0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8cda04b4-35b1-4267-babd-81e6d2761bf0" (UID: "8cda04b4-35b1-4267-babd-81e6d2761bf0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.160054 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cda04b4-35b1-4267-babd-81e6d2761bf0-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.160114 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rhdh\" (UniqueName: \"kubernetes.io/projected/8cda04b4-35b1-4267-babd-81e6d2761bf0-kube-api-access-5rhdh\") on node \"crc\" DevicePath \"\"" Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.160133 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cda04b4-35b1-4267-babd-81e6d2761bf0-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.426423 4991 generic.go:334] "Generic (PLEG): container finished" podID="8cda04b4-35b1-4267-babd-81e6d2761bf0" containerID="e74790306d20ceb13bc0722d5ec57f0afc0339748923a5f4db72e1f650f1c571" exitCode=0 Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.426533 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d5s59" Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.426517 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5s59" event={"ID":"8cda04b4-35b1-4267-babd-81e6d2761bf0","Type":"ContainerDied","Data":"e74790306d20ceb13bc0722d5ec57f0afc0339748923a5f4db72e1f650f1c571"} Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.427853 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5s59" event={"ID":"8cda04b4-35b1-4267-babd-81e6d2761bf0","Type":"ContainerDied","Data":"9bce651c2eaf9c705ff78b5648751ed4270364dee07742e51cf3811f2b681f31"} Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.427891 4991 scope.go:117] "RemoveContainer" containerID="e74790306d20ceb13bc0722d5ec57f0afc0339748923a5f4db72e1f650f1c571" Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.471702 4991 scope.go:117] "RemoveContainer" containerID="75d3f37140eae5935381b86d2803a73c261c656dc73183432051e149f95ad7ee" Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.472852 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d5s59"] Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.490384 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d5s59"] Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.496294 4991 scope.go:117] "RemoveContainer" containerID="73ccc0bb980a5fcdb2de663d01d28100aafe79891086f8db97bea5f52b14f87f" Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.546820 4991 scope.go:117] "RemoveContainer" containerID="e74790306d20ceb13bc0722d5ec57f0afc0339748923a5f4db72e1f650f1c571" Sep 29 10:42:24 crc kubenswrapper[4991]: E0929 10:42:24.547304 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e74790306d20ceb13bc0722d5ec57f0afc0339748923a5f4db72e1f650f1c571\": container with ID starting with e74790306d20ceb13bc0722d5ec57f0afc0339748923a5f4db72e1f650f1c571 not found: ID does not exist" containerID="e74790306d20ceb13bc0722d5ec57f0afc0339748923a5f4db72e1f650f1c571" Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.547429 
4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e74790306d20ceb13bc0722d5ec57f0afc0339748923a5f4db72e1f650f1c571"} err="failed to get container status \"e74790306d20ceb13bc0722d5ec57f0afc0339748923a5f4db72e1f650f1c571\": rpc error: code = NotFound desc = could not find container \"e74790306d20ceb13bc0722d5ec57f0afc0339748923a5f4db72e1f650f1c571\": container with ID starting with e74790306d20ceb13bc0722d5ec57f0afc0339748923a5f4db72e1f650f1c571 not found: ID does not exist" Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.547757 4991 scope.go:117] "RemoveContainer" containerID="75d3f37140eae5935381b86d2803a73c261c656dc73183432051e149f95ad7ee" Sep 29 10:42:24 crc kubenswrapper[4991]: E0929 10:42:24.548115 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75d3f37140eae5935381b86d2803a73c261c656dc73183432051e149f95ad7ee\": container with ID starting with 75d3f37140eae5935381b86d2803a73c261c656dc73183432051e149f95ad7ee not found: ID does not exist" containerID="75d3f37140eae5935381b86d2803a73c261c656dc73183432051e149f95ad7ee" Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.548143 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75d3f37140eae5935381b86d2803a73c261c656dc73183432051e149f95ad7ee"} err="failed to get container status \"75d3f37140eae5935381b86d2803a73c261c656dc73183432051e149f95ad7ee\": rpc error: code = NotFound desc = could not find container \"75d3f37140eae5935381b86d2803a73c261c656dc73183432051e149f95ad7ee\": container with ID starting with 75d3f37140eae5935381b86d2803a73c261c656dc73183432051e149f95ad7ee not found: ID does not exist" Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.548163 4991 scope.go:117] "RemoveContainer" containerID="73ccc0bb980a5fcdb2de663d01d28100aafe79891086f8db97bea5f52b14f87f" Sep 29 10:42:24 crc kubenswrapper[4991]: E0929 10:42:24.548419 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73ccc0bb980a5fcdb2de663d01d28100aafe79891086f8db97bea5f52b14f87f\": container with ID starting with 73ccc0bb980a5fcdb2de663d01d28100aafe79891086f8db97bea5f52b14f87f not found: ID does not exist" containerID="73ccc0bb980a5fcdb2de663d01d28100aafe79891086f8db97bea5f52b14f87f" Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.548456 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73ccc0bb980a5fcdb2de663d01d28100aafe79891086f8db97bea5f52b14f87f"} err="failed to get container status \"73ccc0bb980a5fcdb2de663d01d28100aafe79891086f8db97bea5f52b14f87f\": rpc error: code = NotFound desc = could not find container \"73ccc0bb980a5fcdb2de663d01d28100aafe79891086f8db97bea5f52b14f87f\": container with ID starting with 73ccc0bb980a5fcdb2de663d01d28100aafe79891086f8db97bea5f52b14f87f not found: ID does not exist" Sep 29 10:42:24 crc kubenswrapper[4991]: I0929 10:42:24.974175 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cda04b4-35b1-4267-babd-81e6d2761bf0" path="/var/lib/kubelet/pods/8cda04b4-35b1-4267-babd-81e6d2761bf0/volumes" Sep 29 10:42:31 crc kubenswrapper[4991]: I0929 10:42:31.927858 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444" Sep 29 10:42:31 crc kubenswrapper[4991]: E0929 10:42:31.928751 4991 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:42:42 crc kubenswrapper[4991]: I0929 10:42:42.928024 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444" Sep 29 10:42:42 crc kubenswrapper[4991]: E0929 10:42:42.929600 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:42:53 crc kubenswrapper[4991]: I0929 10:42:53.926689 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444" Sep 29 10:42:53 crc kubenswrapper[4991]: E0929 10:42:53.927508 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:43:06 crc kubenswrapper[4991]: I0929 10:43:06.929105 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444" Sep 29 10:43:06 crc kubenswrapper[4991]: E0929 10:43:06.932357 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:43:08 crc kubenswrapper[4991]: I0929 10:43:08.819409 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bdj2x"] Sep 29 10:43:08 crc kubenswrapper[4991]: E0929 10:43:08.821276 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cda04b4-35b1-4267-babd-81e6d2761bf0" containerName="extract-utilities" Sep 29 10:43:08 crc kubenswrapper[4991]: I0929 10:43:08.821295 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cda04b4-35b1-4267-babd-81e6d2761bf0" containerName="extract-utilities" Sep 29 10:43:08 crc kubenswrapper[4991]: E0929 10:43:08.821330 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cda04b4-35b1-4267-babd-81e6d2761bf0" containerName="registry-server" Sep 29 10:43:08 crc kubenswrapper[4991]: I0929 10:43:08.821338 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cda04b4-35b1-4267-babd-81e6d2761bf0" containerName="registry-server" Sep 29 10:43:08 crc kubenswrapper[4991]: E0929 10:43:08.821357 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cda04b4-35b1-4267-babd-81e6d2761bf0" 
containerName="extract-content" Sep 29 10:43:08 crc kubenswrapper[4991]: I0929 10:43:08.821364 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cda04b4-35b1-4267-babd-81e6d2761bf0" containerName="extract-content" Sep 29 10:43:08 crc kubenswrapper[4991]: I0929 10:43:08.821647 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cda04b4-35b1-4267-babd-81e6d2761bf0" containerName="registry-server" Sep 29 10:43:08 crc kubenswrapper[4991]: I0929 10:43:08.824878 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bdj2x" Sep 29 10:43:08 crc kubenswrapper[4991]: I0929 10:43:08.834217 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bdj2x"] Sep 29 10:43:08 crc kubenswrapper[4991]: I0929 10:43:08.943263 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e6a6d2f-85b7-4b5c-9a18-78072cb21145-catalog-content\") pod \"redhat-operators-bdj2x\" (UID: \"3e6a6d2f-85b7-4b5c-9a18-78072cb21145\") " pod="openshift-marketplace/redhat-operators-bdj2x" Sep 29 10:43:08 crc kubenswrapper[4991]: I0929 10:43:08.943747 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdrfh\" (UniqueName: \"kubernetes.io/projected/3e6a6d2f-85b7-4b5c-9a18-78072cb21145-kube-api-access-wdrfh\") pod \"redhat-operators-bdj2x\" (UID: \"3e6a6d2f-85b7-4b5c-9a18-78072cb21145\") " pod="openshift-marketplace/redhat-operators-bdj2x" Sep 29 10:43:08 crc kubenswrapper[4991]: I0929 10:43:08.944151 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e6a6d2f-85b7-4b5c-9a18-78072cb21145-utilities\") pod \"redhat-operators-bdj2x\" (UID: \"3e6a6d2f-85b7-4b5c-9a18-78072cb21145\") " pod="openshift-marketplace/redhat-operators-bdj2x" Sep 29 10:43:09 crc kubenswrapper[4991]: I0929 10:43:09.046772 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdrfh\" (UniqueName: \"kubernetes.io/projected/3e6a6d2f-85b7-4b5c-9a18-78072cb21145-kube-api-access-wdrfh\") pod \"redhat-operators-bdj2x\" (UID: \"3e6a6d2f-85b7-4b5c-9a18-78072cb21145\") " pod="openshift-marketplace/redhat-operators-bdj2x" Sep 29 10:43:09 crc kubenswrapper[4991]: I0929 10:43:09.047522 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e6a6d2f-85b7-4b5c-9a18-78072cb21145-utilities\") pod \"redhat-operators-bdj2x\" (UID: \"3e6a6d2f-85b7-4b5c-9a18-78072cb21145\") " pod="openshift-marketplace/redhat-operators-bdj2x" Sep 29 10:43:09 crc kubenswrapper[4991]: I0929 10:43:09.047624 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e6a6d2f-85b7-4b5c-9a18-78072cb21145-catalog-content\") pod \"redhat-operators-bdj2x\" (UID: \"3e6a6d2f-85b7-4b5c-9a18-78072cb21145\") " pod="openshift-marketplace/redhat-operators-bdj2x" Sep 29 10:43:09 crc kubenswrapper[4991]: I0929 10:43:09.048489 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e6a6d2f-85b7-4b5c-9a18-78072cb21145-utilities\") pod \"redhat-operators-bdj2x\" (UID: \"3e6a6d2f-85b7-4b5c-9a18-78072cb21145\") " 
pod="openshift-marketplace/redhat-operators-bdj2x" Sep 29 10:43:09 crc kubenswrapper[4991]: I0929 10:43:09.048930 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e6a6d2f-85b7-4b5c-9a18-78072cb21145-catalog-content\") pod \"redhat-operators-bdj2x\" (UID: \"3e6a6d2f-85b7-4b5c-9a18-78072cb21145\") " pod="openshift-marketplace/redhat-operators-bdj2x" Sep 29 10:43:09 crc kubenswrapper[4991]: I0929 10:43:09.086862 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdrfh\" (UniqueName: \"kubernetes.io/projected/3e6a6d2f-85b7-4b5c-9a18-78072cb21145-kube-api-access-wdrfh\") pod \"redhat-operators-bdj2x\" (UID: \"3e6a6d2f-85b7-4b5c-9a18-78072cb21145\") " pod="openshift-marketplace/redhat-operators-bdj2x" Sep 29 10:43:09 crc kubenswrapper[4991]: I0929 10:43:09.167319 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bdj2x" Sep 29 10:43:09 crc kubenswrapper[4991]: I0929 10:43:09.699213 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bdj2x"] Sep 29 10:43:10 crc kubenswrapper[4991]: I0929 10:43:10.011526 4991 generic.go:334] "Generic (PLEG): container finished" podID="3e6a6d2f-85b7-4b5c-9a18-78072cb21145" containerID="0405a301665129e791f517fcc24130571cb2ec28085e9c8c8580c5269430f40f" exitCode=0 Sep 29 10:43:10 crc kubenswrapper[4991]: I0929 10:43:10.011593 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdj2x" event={"ID":"3e6a6d2f-85b7-4b5c-9a18-78072cb21145","Type":"ContainerDied","Data":"0405a301665129e791f517fcc24130571cb2ec28085e9c8c8580c5269430f40f"} Sep 29 10:43:10 crc kubenswrapper[4991]: I0929 10:43:10.011834 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdj2x" event={"ID":"3e6a6d2f-85b7-4b5c-9a18-78072cb21145","Type":"ContainerStarted","Data":"57706566e5b96dd4d3addef3878b0d148e4ceda194a968de64a15b18a1c5631a"} Sep 29 10:43:11 crc kubenswrapper[4991]: I0929 10:43:11.025391 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdj2x" event={"ID":"3e6a6d2f-85b7-4b5c-9a18-78072cb21145","Type":"ContainerStarted","Data":"cf9ee296ee0b4ff77d0babbb57c67a24002312c598e68ff5653e493bebdfc0fa"} Sep 29 10:43:16 crc kubenswrapper[4991]: I0929 10:43:16.078379 4991 generic.go:334] "Generic (PLEG): container finished" podID="3e6a6d2f-85b7-4b5c-9a18-78072cb21145" containerID="cf9ee296ee0b4ff77d0babbb57c67a24002312c598e68ff5653e493bebdfc0fa" exitCode=0 Sep 29 10:43:16 crc kubenswrapper[4991]: I0929 10:43:16.078513 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdj2x" event={"ID":"3e6a6d2f-85b7-4b5c-9a18-78072cb21145","Type":"ContainerDied","Data":"cf9ee296ee0b4ff77d0babbb57c67a24002312c598e68ff5653e493bebdfc0fa"} Sep 29 10:43:17 crc kubenswrapper[4991]: I0929 10:43:17.092228 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdj2x" event={"ID":"3e6a6d2f-85b7-4b5c-9a18-78072cb21145","Type":"ContainerStarted","Data":"cabab99190b0dc628b41cc4f3f41fb68b86b2c86d35de5fcda2017a07be81ca6"} Sep 29 10:43:17 crc kubenswrapper[4991]: I0929 10:43:17.119358 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bdj2x" podStartSLOduration=2.654316721 podStartE2EDuration="9.119332693s" 
podCreationTimestamp="2025-09-29 10:43:08 +0000 UTC" firstStartedPulling="2025-09-29 10:43:10.013308354 +0000 UTC m=+3925.869236382" lastFinishedPulling="2025-09-29 10:43:16.478324326 +0000 UTC m=+3932.334252354" observedRunningTime="2025-09-29 10:43:17.109659899 +0000 UTC m=+3932.965587937" watchObservedRunningTime="2025-09-29 10:43:17.119332693 +0000 UTC m=+3932.975260721" Sep 29 10:43:18 crc kubenswrapper[4991]: I0929 10:43:18.926458 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444" Sep 29 10:43:18 crc kubenswrapper[4991]: E0929 10:43:18.927317 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:43:19 crc kubenswrapper[4991]: I0929 10:43:19.168188 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bdj2x" Sep 29 10:43:19 crc kubenswrapper[4991]: I0929 10:43:19.168308 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bdj2x" Sep 29 10:43:20 crc kubenswrapper[4991]: I0929 10:43:20.248406 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bdj2x" podUID="3e6a6d2f-85b7-4b5c-9a18-78072cb21145" containerName="registry-server" probeResult="failure" output=< Sep 29 10:43:20 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s Sep 29 10:43:20 crc kubenswrapper[4991]: > Sep 29 10:43:30 crc kubenswrapper[4991]: I0929 10:43:30.236657 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bdj2x" podUID="3e6a6d2f-85b7-4b5c-9a18-78072cb21145" containerName="registry-server" probeResult="failure" output=< Sep 29 10:43:30 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s Sep 29 10:43:30 crc kubenswrapper[4991]: > Sep 29 10:43:33 crc kubenswrapper[4991]: I0929 10:43:33.926238 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444" Sep 29 10:43:33 crc kubenswrapper[4991]: E0929 10:43:33.927427 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:43:39 crc kubenswrapper[4991]: I0929 10:43:39.225840 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bdj2x" Sep 29 10:43:39 crc kubenswrapper[4991]: I0929 10:43:39.285536 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bdj2x" Sep 29 10:43:40 crc kubenswrapper[4991]: I0929 10:43:40.021233 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bdj2x"] Sep 29 10:43:40 crc kubenswrapper[4991]: I0929 10:43:40.334369 4991 
Sep 29 10:43:33 crc kubenswrapper[4991]: I0929 10:43:33.926238 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444"
Sep 29 10:43:33 crc kubenswrapper[4991]: E0929 10:43:33.927427 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:43:39 crc kubenswrapper[4991]: I0929 10:43:39.225840 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bdj2x"
Sep 29 10:43:39 crc kubenswrapper[4991]: I0929 10:43:39.285536 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bdj2x"
Sep 29 10:43:40 crc kubenswrapper[4991]: I0929 10:43:40.021233 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bdj2x"]
Sep 29 10:43:40 crc kubenswrapper[4991]: I0929 10:43:40.334369 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bdj2x" podUID="3e6a6d2f-85b7-4b5c-9a18-78072cb21145" containerName="registry-server" containerID="cri-o://cabab99190b0dc628b41cc4f3f41fb68b86b2c86d35de5fcda2017a07be81ca6" gracePeriod=2
Sep 29 10:43:40 crc kubenswrapper[4991]: I0929 10:43:40.891568 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bdj2x"
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.053382 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdrfh\" (UniqueName: \"kubernetes.io/projected/3e6a6d2f-85b7-4b5c-9a18-78072cb21145-kube-api-access-wdrfh\") pod \"3e6a6d2f-85b7-4b5c-9a18-78072cb21145\" (UID: \"3e6a6d2f-85b7-4b5c-9a18-78072cb21145\") "
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.053424 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e6a6d2f-85b7-4b5c-9a18-78072cb21145-utilities\") pod \"3e6a6d2f-85b7-4b5c-9a18-78072cb21145\" (UID: \"3e6a6d2f-85b7-4b5c-9a18-78072cb21145\") "
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.053668 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e6a6d2f-85b7-4b5c-9a18-78072cb21145-catalog-content\") pod \"3e6a6d2f-85b7-4b5c-9a18-78072cb21145\" (UID: \"3e6a6d2f-85b7-4b5c-9a18-78072cb21145\") "
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.055339 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e6a6d2f-85b7-4b5c-9a18-78072cb21145-utilities" (OuterVolumeSpecName: "utilities") pod "3e6a6d2f-85b7-4b5c-9a18-78072cb21145" (UID: "3e6a6d2f-85b7-4b5c-9a18-78072cb21145"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.064760 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e6a6d2f-85b7-4b5c-9a18-78072cb21145-kube-api-access-wdrfh" (OuterVolumeSpecName: "kube-api-access-wdrfh") pod "3e6a6d2f-85b7-4b5c-9a18-78072cb21145" (UID: "3e6a6d2f-85b7-4b5c-9a18-78072cb21145"). InnerVolumeSpecName "kube-api-access-wdrfh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.152049 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e6a6d2f-85b7-4b5c-9a18-78072cb21145-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e6a6d2f-85b7-4b5c-9a18-78072cb21145" (UID: "3e6a6d2f-85b7-4b5c-9a18-78072cb21145"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.156572 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e6a6d2f-85b7-4b5c-9a18-78072cb21145-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.156607 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdrfh\" (UniqueName: \"kubernetes.io/projected/3e6a6d2f-85b7-4b5c-9a18-78072cb21145-kube-api-access-wdrfh\") on node \"crc\" DevicePath \"\""
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.156621 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e6a6d2f-85b7-4b5c-9a18-78072cb21145-utilities\") on node \"crc\" DevicePath \"\""
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.351990 4991 generic.go:334] "Generic (PLEG): container finished" podID="3e6a6d2f-85b7-4b5c-9a18-78072cb21145" containerID="cabab99190b0dc628b41cc4f3f41fb68b86b2c86d35de5fcda2017a07be81ca6" exitCode=0
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.352050 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdj2x" event={"ID":"3e6a6d2f-85b7-4b5c-9a18-78072cb21145","Type":"ContainerDied","Data":"cabab99190b0dc628b41cc4f3f41fb68b86b2c86d35de5fcda2017a07be81ca6"}
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.352095 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdj2x" event={"ID":"3e6a6d2f-85b7-4b5c-9a18-78072cb21145","Type":"ContainerDied","Data":"57706566e5b96dd4d3addef3878b0d148e4ceda194a968de64a15b18a1c5631a"}
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.352123 4991 scope.go:117] "RemoveContainer" containerID="cabab99190b0dc628b41cc4f3f41fb68b86b2c86d35de5fcda2017a07be81ca6"
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.352307 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bdj2x"
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.397910 4991 scope.go:117] "RemoveContainer" containerID="cf9ee296ee0b4ff77d0babbb57c67a24002312c598e68ff5653e493bebdfc0fa"
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.400070 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bdj2x"]
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.412297 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bdj2x"]
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.421331 4991 scope.go:117] "RemoveContainer" containerID="0405a301665129e791f517fcc24130571cb2ec28085e9c8c8580c5269430f40f"
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.470785 4991 scope.go:117] "RemoveContainer" containerID="cabab99190b0dc628b41cc4f3f41fb68b86b2c86d35de5fcda2017a07be81ca6"
Sep 29 10:43:41 crc kubenswrapper[4991]: E0929 10:43:41.471211 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cabab99190b0dc628b41cc4f3f41fb68b86b2c86d35de5fcda2017a07be81ca6\": container with ID starting with cabab99190b0dc628b41cc4f3f41fb68b86b2c86d35de5fcda2017a07be81ca6 not found: ID does not exist" containerID="cabab99190b0dc628b41cc4f3f41fb68b86b2c86d35de5fcda2017a07be81ca6"
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.471244 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cabab99190b0dc628b41cc4f3f41fb68b86b2c86d35de5fcda2017a07be81ca6"} err="failed to get container status \"cabab99190b0dc628b41cc4f3f41fb68b86b2c86d35de5fcda2017a07be81ca6\": rpc error: code = NotFound desc = could not find container \"cabab99190b0dc628b41cc4f3f41fb68b86b2c86d35de5fcda2017a07be81ca6\": container with ID starting with cabab99190b0dc628b41cc4f3f41fb68b86b2c86d35de5fcda2017a07be81ca6 not found: ID does not exist"
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.471274 4991 scope.go:117] "RemoveContainer" containerID="cf9ee296ee0b4ff77d0babbb57c67a24002312c598e68ff5653e493bebdfc0fa"
Sep 29 10:43:41 crc kubenswrapper[4991]: E0929 10:43:41.471603 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf9ee296ee0b4ff77d0babbb57c67a24002312c598e68ff5653e493bebdfc0fa\": container with ID starting with cf9ee296ee0b4ff77d0babbb57c67a24002312c598e68ff5653e493bebdfc0fa not found: ID does not exist" containerID="cf9ee296ee0b4ff77d0babbb57c67a24002312c598e68ff5653e493bebdfc0fa"
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.471646 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf9ee296ee0b4ff77d0babbb57c67a24002312c598e68ff5653e493bebdfc0fa"} err="failed to get container status \"cf9ee296ee0b4ff77d0babbb57c67a24002312c598e68ff5653e493bebdfc0fa\": rpc error: code = NotFound desc = could not find container \"cf9ee296ee0b4ff77d0babbb57c67a24002312c598e68ff5653e493bebdfc0fa\": container with ID starting with cf9ee296ee0b4ff77d0babbb57c67a24002312c598e68ff5653e493bebdfc0fa not found: ID does not exist"
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.471673 4991 scope.go:117] "RemoveContainer" containerID="0405a301665129e791f517fcc24130571cb2ec28085e9c8c8580c5269430f40f"
Sep 29 10:43:41 crc kubenswrapper[4991]: E0929 10:43:41.472102 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0405a301665129e791f517fcc24130571cb2ec28085e9c8c8580c5269430f40f\": container with ID starting with 0405a301665129e791f517fcc24130571cb2ec28085e9c8c8580c5269430f40f not found: ID does not exist" containerID="0405a301665129e791f517fcc24130571cb2ec28085e9c8c8580c5269430f40f"
Sep 29 10:43:41 crc kubenswrapper[4991]: I0929 10:43:41.472134 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0405a301665129e791f517fcc24130571cb2ec28085e9c8c8580c5269430f40f"} err="failed to get container status \"0405a301665129e791f517fcc24130571cb2ec28085e9c8c8580c5269430f40f\": rpc error: code = NotFound desc = could not find container \"0405a301665129e791f517fcc24130571cb2ec28085e9c8c8580c5269430f40f\": container with ID starting with 0405a301665129e791f517fcc24130571cb2ec28085e9c8c8580c5269430f40f not found: ID does not exist"
Sep 29 10:43:42 crc kubenswrapper[4991]: I0929 10:43:42.940213 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e6a6d2f-85b7-4b5c-9a18-78072cb21145" path="/var/lib/kubelet/pods/3e6a6d2f-85b7-4b5c-9a18-78072cb21145/volumes"
Sep 29 10:43:44 crc kubenswrapper[4991]: I0929 10:43:44.934377 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444"
Sep 29 10:43:44 crc kubenswrapper[4991]: E0929 10:43:44.936237 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:43:55 crc kubenswrapper[4991]: I0929 10:43:55.925916 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444"
Sep 29 10:43:55 crc kubenswrapper[4991]: E0929 10:43:55.926882 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:44:09 crc kubenswrapper[4991]: I0929 10:44:09.662936 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h7h9s"]
Sep 29 10:44:09 crc kubenswrapper[4991]: E0929 10:44:09.663984 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6a6d2f-85b7-4b5c-9a18-78072cb21145" containerName="registry-server"
Sep 29 10:44:09 crc kubenswrapper[4991]: I0929 10:44:09.663996 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6a6d2f-85b7-4b5c-9a18-78072cb21145" containerName="registry-server"
Sep 29 10:44:09 crc kubenswrapper[4991]: E0929 10:44:09.664017 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6a6d2f-85b7-4b5c-9a18-78072cb21145" containerName="extract-utilities"
Sep 29 10:44:09 crc kubenswrapper[4991]: I0929 10:44:09.664023 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6a6d2f-85b7-4b5c-9a18-78072cb21145" containerName="extract-utilities"
Sep 29 10:44:09 crc kubenswrapper[4991]: E0929 10:44:09.664032 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6a6d2f-85b7-4b5c-9a18-78072cb21145" containerName="extract-content"
Sep 29 10:44:09 crc kubenswrapper[4991]: I0929 10:44:09.664040 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6a6d2f-85b7-4b5c-9a18-78072cb21145" containerName="extract-content"
Sep 29 10:44:09 crc kubenswrapper[4991]: I0929 10:44:09.664629 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6a6d2f-85b7-4b5c-9a18-78072cb21145" containerName="registry-server"
Sep 29 10:44:09 crc kubenswrapper[4991]: I0929 10:44:09.666368 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7h9s"
Sep 29 10:44:09 crc kubenswrapper[4991]: I0929 10:44:09.677073 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h7h9s"]
Sep 29 10:44:09 crc kubenswrapper[4991]: I0929 10:44:09.844021 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lljl9\" (UniqueName: \"kubernetes.io/projected/4f2c1528-dc9f-493f-82dd-b47e6d030d88-kube-api-access-lljl9\") pod \"community-operators-h7h9s\" (UID: \"4f2c1528-dc9f-493f-82dd-b47e6d030d88\") " pod="openshift-marketplace/community-operators-h7h9s"
Sep 29 10:44:09 crc kubenswrapper[4991]: I0929 10:44:09.845196 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f2c1528-dc9f-493f-82dd-b47e6d030d88-utilities\") pod \"community-operators-h7h9s\" (UID: \"4f2c1528-dc9f-493f-82dd-b47e6d030d88\") " pod="openshift-marketplace/community-operators-h7h9s"
Sep 29 10:44:09 crc kubenswrapper[4991]: I0929 10:44:09.845532 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f2c1528-dc9f-493f-82dd-b47e6d030d88-catalog-content\") pod \"community-operators-h7h9s\" (UID: \"4f2c1528-dc9f-493f-82dd-b47e6d030d88\") " pod="openshift-marketplace/community-operators-h7h9s"
Sep 29 10:44:09 crc kubenswrapper[4991]: I0929 10:44:09.926283 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444"
Sep 29 10:44:09 crc kubenswrapper[4991]: E0929 10:44:09.926799 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:44:09 crc kubenswrapper[4991]: I0929 10:44:09.947664 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lljl9\" (UniqueName: \"kubernetes.io/projected/4f2c1528-dc9f-493f-82dd-b47e6d030d88-kube-api-access-lljl9\") pod \"community-operators-h7h9s\" (UID: \"4f2c1528-dc9f-493f-82dd-b47e6d030d88\") " pod="openshift-marketplace/community-operators-h7h9s"
Sep 29 10:44:09 crc kubenswrapper[4991]: I0929 10:44:09.947799 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f2c1528-dc9f-493f-82dd-b47e6d030d88-utilities\") pod \"community-operators-h7h9s\" (UID: \"4f2c1528-dc9f-493f-82dd-b47e6d030d88\") " pod="openshift-marketplace/community-operators-h7h9s"
Sep 29 10:44:09 crc kubenswrapper[4991]: I0929 10:44:09.947922 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f2c1528-dc9f-493f-82dd-b47e6d030d88-catalog-content\") pod \"community-operators-h7h9s\" (UID: \"4f2c1528-dc9f-493f-82dd-b47e6d030d88\") " pod="openshift-marketplace/community-operators-h7h9s"
Sep 29 10:44:09 crc kubenswrapper[4991]: I0929 10:44:09.948518 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f2c1528-dc9f-493f-82dd-b47e6d030d88-utilities\") pod \"community-operators-h7h9s\" (UID: \"4f2c1528-dc9f-493f-82dd-b47e6d030d88\") " pod="openshift-marketplace/community-operators-h7h9s"
Sep 29 10:44:09 crc kubenswrapper[4991]: I0929 10:44:09.948541 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f2c1528-dc9f-493f-82dd-b47e6d030d88-catalog-content\") pod \"community-operators-h7h9s\" (UID: \"4f2c1528-dc9f-493f-82dd-b47e6d030d88\") " pod="openshift-marketplace/community-operators-h7h9s"
Sep 29 10:44:09 crc kubenswrapper[4991]: I0929 10:44:09.973888 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lljl9\" (UniqueName: \"kubernetes.io/projected/4f2c1528-dc9f-493f-82dd-b47e6d030d88-kube-api-access-lljl9\") pod \"community-operators-h7h9s\" (UID: \"4f2c1528-dc9f-493f-82dd-b47e6d030d88\") " pod="openshift-marketplace/community-operators-h7h9s"
Sep 29 10:44:09 crc kubenswrapper[4991]: I0929 10:44:09.996273 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7h9s"
Sep 29 10:44:10 crc kubenswrapper[4991]: I0929 10:44:10.565775 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h7h9s"]
Sep 29 10:44:10 crc kubenswrapper[4991]: I0929 10:44:10.707560 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7h9s" event={"ID":"4f2c1528-dc9f-493f-82dd-b47e6d030d88","Type":"ContainerStarted","Data":"47f3560c162f068405f5a2ced4b3f7a3ead428369d321e3e03430c22a092532d"}
Sep 29 10:44:11 crc kubenswrapper[4991]: I0929 10:44:11.717782 4991 generic.go:334] "Generic (PLEG): container finished" podID="4f2c1528-dc9f-493f-82dd-b47e6d030d88" containerID="dc4e0e0ea96f7eab186394422c1b09926f2682423f02495c9096527a02648859" exitCode=0
Sep 29 10:44:11 crc kubenswrapper[4991]: I0929 10:44:11.717843 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7h9s" event={"ID":"4f2c1528-dc9f-493f-82dd-b47e6d030d88","Type":"ContainerDied","Data":"dc4e0e0ea96f7eab186394422c1b09926f2682423f02495c9096527a02648859"}
Sep 29 10:44:12 crc kubenswrapper[4991]: I0929 10:44:12.732535 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7h9s" event={"ID":"4f2c1528-dc9f-493f-82dd-b47e6d030d88","Type":"ContainerStarted","Data":"7651160e7687101792aa1499a3f405c5147561a41398d488c66a78c0fff971a1"}
Sep 29 10:44:13 crc kubenswrapper[4991]: I0929 10:44:13.755291 4991 generic.go:334] "Generic (PLEG): container finished" podID="4f2c1528-dc9f-493f-82dd-b47e6d030d88" containerID="7651160e7687101792aa1499a3f405c5147561a41398d488c66a78c0fff971a1" exitCode=0
Sep 29 10:44:13 crc kubenswrapper[4991]: I0929 10:44:13.755385 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7h9s" event={"ID":"4f2c1528-dc9f-493f-82dd-b47e6d030d88","Type":"ContainerDied","Data":"7651160e7687101792aa1499a3f405c5147561a41398d488c66a78c0fff971a1"}
Sep 29 10:44:14 crc kubenswrapper[4991]: I0929 10:44:14.768144 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7h9s" event={"ID":"4f2c1528-dc9f-493f-82dd-b47e6d030d88","Type":"ContainerStarted","Data":"e47a73d3ccfd0f127ab106a3f8d4b36fc8d3b3baf1a7f0efd3767ed844c7077c"}
Sep 29 10:44:14 crc kubenswrapper[4991]: I0929 10:44:14.785493 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h7h9s" podStartSLOduration=3.268999919 podStartE2EDuration="5.785467108s" podCreationTimestamp="2025-09-29 10:44:09 +0000 UTC" firstStartedPulling="2025-09-29 10:44:11.719621083 +0000 UTC m=+3987.575549111" lastFinishedPulling="2025-09-29 10:44:14.236088262 +0000 UTC m=+3990.092016300" observedRunningTime="2025-09-29 10:44:14.781784992 +0000 UTC m=+3990.637713010" watchObservedRunningTime="2025-09-29 10:44:14.785467108 +0000 UTC m=+3990.641395136"
Sep 29 10:44:19 crc kubenswrapper[4991]: I0929 10:44:19.996904 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h7h9s"
Sep 29 10:44:19 crc kubenswrapper[4991]: I0929 10:44:19.997363 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h7h9s"
Sep 29 10:44:20 crc kubenswrapper[4991]: I0929 10:44:20.053553 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h7h9s"
Sep 29 10:44:20 crc kubenswrapper[4991]: I0929 10:44:20.886031 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h7h9s"
Sep 29 10:44:22 crc kubenswrapper[4991]: I0929 10:44:22.934286 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444"
Sep 29 10:44:22 crc kubenswrapper[4991]: E0929 10:44:22.936323 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:44:23 crc kubenswrapper[4991]: I0929 10:44:23.042869 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h7h9s"]
Sep 29 10:44:23 crc kubenswrapper[4991]: I0929 10:44:23.043130 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h7h9s" podUID="4f2c1528-dc9f-493f-82dd-b47e6d030d88" containerName="registry-server" containerID="cri-o://e47a73d3ccfd0f127ab106a3f8d4b36fc8d3b3baf1a7f0efd3767ed844c7077c" gracePeriod=2
Sep 29 10:44:23 crc kubenswrapper[4991]: I0929 10:44:23.875237 4991 generic.go:334] "Generic (PLEG): container finished" podID="4f2c1528-dc9f-493f-82dd-b47e6d030d88" containerID="e47a73d3ccfd0f127ab106a3f8d4b36fc8d3b3baf1a7f0efd3767ed844c7077c" exitCode=0
Sep 29 10:44:23 crc kubenswrapper[4991]: I0929 10:44:23.875539 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7h9s" event={"ID":"4f2c1528-dc9f-493f-82dd-b47e6d030d88","Type":"ContainerDied","Data":"e47a73d3ccfd0f127ab106a3f8d4b36fc8d3b3baf1a7f0efd3767ed844c7077c"}
Sep 29 10:44:24 crc kubenswrapper[4991]: I0929 10:44:24.077351 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7h9s"
Sep 29 10:44:24 crc kubenswrapper[4991]: I0929 10:44:24.217012 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lljl9\" (UniqueName: \"kubernetes.io/projected/4f2c1528-dc9f-493f-82dd-b47e6d030d88-kube-api-access-lljl9\") pod \"4f2c1528-dc9f-493f-82dd-b47e6d030d88\" (UID: \"4f2c1528-dc9f-493f-82dd-b47e6d030d88\") "
Sep 29 10:44:24 crc kubenswrapper[4991]: I0929 10:44:24.217557 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f2c1528-dc9f-493f-82dd-b47e6d030d88-utilities\") pod \"4f2c1528-dc9f-493f-82dd-b47e6d030d88\" (UID: \"4f2c1528-dc9f-493f-82dd-b47e6d030d88\") "
Sep 29 10:44:24 crc kubenswrapper[4991]: I0929 10:44:24.217719 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f2c1528-dc9f-493f-82dd-b47e6d030d88-catalog-content\") pod \"4f2c1528-dc9f-493f-82dd-b47e6d030d88\" (UID: \"4f2c1528-dc9f-493f-82dd-b47e6d030d88\") "
Sep 29 10:44:24 crc kubenswrapper[4991]: I0929 10:44:24.218612 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f2c1528-dc9f-493f-82dd-b47e6d030d88-utilities" (OuterVolumeSpecName: "utilities") pod "4f2c1528-dc9f-493f-82dd-b47e6d030d88" (UID: "4f2c1528-dc9f-493f-82dd-b47e6d030d88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 10:44:24 crc kubenswrapper[4991]: I0929 10:44:24.227345 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f2c1528-dc9f-493f-82dd-b47e6d030d88-kube-api-access-lljl9" (OuterVolumeSpecName: "kube-api-access-lljl9") pod "4f2c1528-dc9f-493f-82dd-b47e6d030d88" (UID: "4f2c1528-dc9f-493f-82dd-b47e6d030d88"). InnerVolumeSpecName "kube-api-access-lljl9". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:44:24 crc kubenswrapper[4991]: I0929 10:44:24.323307 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lljl9\" (UniqueName: \"kubernetes.io/projected/4f2c1528-dc9f-493f-82dd-b47e6d030d88-kube-api-access-lljl9\") on node \"crc\" DevicePath \"\"" Sep 29 10:44:24 crc kubenswrapper[4991]: I0929 10:44:24.323419 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f2c1528-dc9f-493f-82dd-b47e6d030d88-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 10:44:24 crc kubenswrapper[4991]: I0929 10:44:24.323431 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f2c1528-dc9f-493f-82dd-b47e6d030d88-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 10:44:24 crc kubenswrapper[4991]: I0929 10:44:24.886423 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7h9s" event={"ID":"4f2c1528-dc9f-493f-82dd-b47e6d030d88","Type":"ContainerDied","Data":"47f3560c162f068405f5a2ced4b3f7a3ead428369d321e3e03430c22a092532d"} Sep 29 10:44:24 crc kubenswrapper[4991]: I0929 10:44:24.886745 4991 scope.go:117] "RemoveContainer" containerID="e47a73d3ccfd0f127ab106a3f8d4b36fc8d3b3baf1a7f0efd3767ed844c7077c" Sep 29 10:44:24 crc kubenswrapper[4991]: I0929 10:44:24.886876 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7h9s" Sep 29 10:44:24 crc kubenswrapper[4991]: I0929 10:44:24.923033 4991 scope.go:117] "RemoveContainer" containerID="7651160e7687101792aa1499a3f405c5147561a41398d488c66a78c0fff971a1" Sep 29 10:44:24 crc kubenswrapper[4991]: I0929 10:44:24.943714 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h7h9s"] Sep 29 10:44:24 crc kubenswrapper[4991]: I0929 10:44:24.951802 4991 scope.go:117] "RemoveContainer" containerID="dc4e0e0ea96f7eab186394422c1b09926f2682423f02495c9096527a02648859" Sep 29 10:44:24 crc kubenswrapper[4991]: I0929 10:44:24.952270 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h7h9s"] Sep 29 10:44:26 crc kubenswrapper[4991]: I0929 10:44:26.944267 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f2c1528-dc9f-493f-82dd-b47e6d030d88" path="/var/lib/kubelet/pods/4f2c1528-dc9f-493f-82dd-b47e6d030d88/volumes" Sep 29 10:44:37 crc kubenswrapper[4991]: I0929 10:44:37.926422 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444" Sep 29 10:44:37 crc kubenswrapper[4991]: E0929 10:44:37.927637 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:44:48 crc kubenswrapper[4991]: I0929 10:44:48.926517 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444" Sep 29 10:44:48 crc kubenswrapper[4991]: E0929 10:44:48.927374 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:44:59 crc kubenswrapper[4991]: I0929 10:44:59.926693 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444" Sep 29 10:44:59 crc kubenswrapper[4991]: E0929 10:44:59.927474 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:45:00 crc kubenswrapper[4991]: I0929 10:45:00.189896 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc"] Sep 29 10:45:00 crc kubenswrapper[4991]: E0929 10:45:00.190542 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f2c1528-dc9f-493f-82dd-b47e6d030d88" containerName="extract-content" Sep 29 10:45:00 crc kubenswrapper[4991]: I0929 10:45:00.190559 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f2c1528-dc9f-493f-82dd-b47e6d030d88" containerName="extract-content" Sep 29 10:45:00 crc kubenswrapper[4991]: E0929 10:45:00.190588 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f2c1528-dc9f-493f-82dd-b47e6d030d88" containerName="extract-utilities" Sep 29 10:45:00 crc kubenswrapper[4991]: I0929 10:45:00.190597 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f2c1528-dc9f-493f-82dd-b47e6d030d88" containerName="extract-utilities" Sep 29 10:45:00 crc kubenswrapper[4991]: E0929 10:45:00.190622 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f2c1528-dc9f-493f-82dd-b47e6d030d88" containerName="registry-server" Sep 29 10:45:00 crc kubenswrapper[4991]: I0929 10:45:00.190629 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f2c1528-dc9f-493f-82dd-b47e6d030d88" containerName="registry-server" Sep 29 10:45:00 crc kubenswrapper[4991]: I0929 10:45:00.191022 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f2c1528-dc9f-493f-82dd-b47e6d030d88" containerName="registry-server" Sep 29 10:45:00 crc kubenswrapper[4991]: I0929 10:45:00.192124 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc" Sep 29 10:45:00 crc kubenswrapper[4991]: I0929 10:45:00.195006 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 29 10:45:00 crc kubenswrapper[4991]: I0929 10:45:00.195597 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 29 10:45:00 crc kubenswrapper[4991]: I0929 10:45:00.219658 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc"] Sep 29 10:45:00 crc kubenswrapper[4991]: I0929 10:45:00.357699 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccf30f60-fb14-43ee-b53f-29f9a37f45a3-config-volume\") pod \"collect-profiles-29319045-wh8bc\" (UID: \"ccf30f60-fb14-43ee-b53f-29f9a37f45a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc" Sep 29 10:45:00 crc kubenswrapper[4991]: I0929 10:45:00.357823 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccf30f60-fb14-43ee-b53f-29f9a37f45a3-secret-volume\") pod \"collect-profiles-29319045-wh8bc\" (UID: \"ccf30f60-fb14-43ee-b53f-29f9a37f45a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc" Sep 29 10:45:00 crc kubenswrapper[4991]: I0929 10:45:00.357888 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7d9d\" (UniqueName: \"kubernetes.io/projected/ccf30f60-fb14-43ee-b53f-29f9a37f45a3-kube-api-access-m7d9d\") pod \"collect-profiles-29319045-wh8bc\" (UID: \"ccf30f60-fb14-43ee-b53f-29f9a37f45a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc" Sep 29 10:45:00 crc kubenswrapper[4991]: I0929 10:45:00.460541 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccf30f60-fb14-43ee-b53f-29f9a37f45a3-config-volume\") pod \"collect-profiles-29319045-wh8bc\" (UID: \"ccf30f60-fb14-43ee-b53f-29f9a37f45a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc" Sep 29 10:45:00 crc kubenswrapper[4991]: I0929 10:45:00.460665 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccf30f60-fb14-43ee-b53f-29f9a37f45a3-secret-volume\") pod \"collect-profiles-29319045-wh8bc\" (UID: \"ccf30f60-fb14-43ee-b53f-29f9a37f45a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc" Sep 29 10:45:00 crc kubenswrapper[4991]: I0929 10:45:00.460735 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7d9d\" (UniqueName: \"kubernetes.io/projected/ccf30f60-fb14-43ee-b53f-29f9a37f45a3-kube-api-access-m7d9d\") pod \"collect-profiles-29319045-wh8bc\" (UID: \"ccf30f60-fb14-43ee-b53f-29f9a37f45a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc" Sep 29 10:45:00 crc kubenswrapper[4991]: I0929 10:45:00.461656 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccf30f60-fb14-43ee-b53f-29f9a37f45a3-config-volume\") pod 
\"collect-profiles-29319045-wh8bc\" (UID: \"ccf30f60-fb14-43ee-b53f-29f9a37f45a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc" Sep 29 10:45:00 crc kubenswrapper[4991]: I0929 10:45:00.468381 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccf30f60-fb14-43ee-b53f-29f9a37f45a3-secret-volume\") pod \"collect-profiles-29319045-wh8bc\" (UID: \"ccf30f60-fb14-43ee-b53f-29f9a37f45a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc" Sep 29 10:45:00 crc kubenswrapper[4991]: I0929 10:45:00.477223 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7d9d\" (UniqueName: \"kubernetes.io/projected/ccf30f60-fb14-43ee-b53f-29f9a37f45a3-kube-api-access-m7d9d\") pod \"collect-profiles-29319045-wh8bc\" (UID: \"ccf30f60-fb14-43ee-b53f-29f9a37f45a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc" Sep 29 10:45:00 crc kubenswrapper[4991]: I0929 10:45:00.520986 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc" Sep 29 10:45:01 crc kubenswrapper[4991]: I0929 10:45:01.030527 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc"] Sep 29 10:45:01 crc kubenswrapper[4991]: W0929 10:45:01.043264 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccf30f60_fb14_43ee_b53f_29f9a37f45a3.slice/crio-52e7ea650b430e6338290c3dd8a52bf8764688435a5e603e86a3a3a48b69df79 WatchSource:0}: Error finding container 52e7ea650b430e6338290c3dd8a52bf8764688435a5e603e86a3a3a48b69df79: Status 404 returned error can't find the container with id 52e7ea650b430e6338290c3dd8a52bf8764688435a5e603e86a3a3a48b69df79 Sep 29 10:45:01 crc kubenswrapper[4991]: I0929 10:45:01.320064 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc" event={"ID":"ccf30f60-fb14-43ee-b53f-29f9a37f45a3","Type":"ContainerStarted","Data":"41b4b3bf31b5ce3ce5e742dffb5ea228a07c7d063e93dc46886ac7e565d78ea5"} Sep 29 10:45:01 crc kubenswrapper[4991]: I0929 10:45:01.320325 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc" event={"ID":"ccf30f60-fb14-43ee-b53f-29f9a37f45a3","Type":"ContainerStarted","Data":"52e7ea650b430e6338290c3dd8a52bf8764688435a5e603e86a3a3a48b69df79"} Sep 29 10:45:01 crc kubenswrapper[4991]: I0929 10:45:01.339679 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc" podStartSLOduration=1.339659991 podStartE2EDuration="1.339659991s" podCreationTimestamp="2025-09-29 10:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 10:45:01.337107154 +0000 UTC m=+4037.193035222" watchObservedRunningTime="2025-09-29 10:45:01.339659991 +0000 UTC m=+4037.195588019" Sep 29 10:45:02 crc kubenswrapper[4991]: I0929 10:45:02.332585 4991 generic.go:334] "Generic (PLEG): container finished" podID="ccf30f60-fb14-43ee-b53f-29f9a37f45a3" containerID="41b4b3bf31b5ce3ce5e742dffb5ea228a07c7d063e93dc46886ac7e565d78ea5" exitCode=0 Sep 29 10:45:02 crc kubenswrapper[4991]: I0929 10:45:02.332667 
Sep 29 10:45:02 crc kubenswrapper[4991]: I0929 10:45:02.332667 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc" event={"ID":"ccf30f60-fb14-43ee-b53f-29f9a37f45a3","Type":"ContainerDied","Data":"41b4b3bf31b5ce3ce5e742dffb5ea228a07c7d063e93dc46886ac7e565d78ea5"}
Sep 29 10:45:04 crc kubenswrapper[4991]: I0929 10:45:04.354889 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc" event={"ID":"ccf30f60-fb14-43ee-b53f-29f9a37f45a3","Type":"ContainerDied","Data":"52e7ea650b430e6338290c3dd8a52bf8764688435a5e603e86a3a3a48b69df79"}
Sep 29 10:45:04 crc kubenswrapper[4991]: I0929 10:45:04.355451 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52e7ea650b430e6338290c3dd8a52bf8764688435a5e603e86a3a3a48b69df79"
Sep 29 10:45:04 crc kubenswrapper[4991]: I0929 10:45:04.392660 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc"
Sep 29 10:45:04 crc kubenswrapper[4991]: I0929 10:45:04.551342 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccf30f60-fb14-43ee-b53f-29f9a37f45a3-secret-volume\") pod \"ccf30f60-fb14-43ee-b53f-29f9a37f45a3\" (UID: \"ccf30f60-fb14-43ee-b53f-29f9a37f45a3\") "
Sep 29 10:45:04 crc kubenswrapper[4991]: I0929 10:45:04.551731 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7d9d\" (UniqueName: \"kubernetes.io/projected/ccf30f60-fb14-43ee-b53f-29f9a37f45a3-kube-api-access-m7d9d\") pod \"ccf30f60-fb14-43ee-b53f-29f9a37f45a3\" (UID: \"ccf30f60-fb14-43ee-b53f-29f9a37f45a3\") "
Sep 29 10:45:04 crc kubenswrapper[4991]: I0929 10:45:04.552123 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccf30f60-fb14-43ee-b53f-29f9a37f45a3-config-volume\") pod \"ccf30f60-fb14-43ee-b53f-29f9a37f45a3\" (UID: \"ccf30f60-fb14-43ee-b53f-29f9a37f45a3\") "
Sep 29 10:45:04 crc kubenswrapper[4991]: I0929 10:45:04.552889 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccf30f60-fb14-43ee-b53f-29f9a37f45a3-config-volume" (OuterVolumeSpecName: "config-volume") pod "ccf30f60-fb14-43ee-b53f-29f9a37f45a3" (UID: "ccf30f60-fb14-43ee-b53f-29f9a37f45a3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 10:45:04 crc kubenswrapper[4991]: I0929 10:45:04.557574 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccf30f60-fb14-43ee-b53f-29f9a37f45a3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ccf30f60-fb14-43ee-b53f-29f9a37f45a3" (UID: "ccf30f60-fb14-43ee-b53f-29f9a37f45a3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 10:45:04 crc kubenswrapper[4991]: I0929 10:45:04.559296 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccf30f60-fb14-43ee-b53f-29f9a37f45a3-kube-api-access-m7d9d" (OuterVolumeSpecName: "kube-api-access-m7d9d") pod "ccf30f60-fb14-43ee-b53f-29f9a37f45a3" (UID: "ccf30f60-fb14-43ee-b53f-29f9a37f45a3"). InnerVolumeSpecName "kube-api-access-m7d9d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 10:45:04 crc kubenswrapper[4991]: I0929 10:45:04.655140 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7d9d\" (UniqueName: \"kubernetes.io/projected/ccf30f60-fb14-43ee-b53f-29f9a37f45a3-kube-api-access-m7d9d\") on node \"crc\" DevicePath \"\""
Sep 29 10:45:04 crc kubenswrapper[4991]: I0929 10:45:04.655183 4991 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccf30f60-fb14-43ee-b53f-29f9a37f45a3-config-volume\") on node \"crc\" DevicePath \"\""
Sep 29 10:45:04 crc kubenswrapper[4991]: I0929 10:45:04.655196 4991 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccf30f60-fb14-43ee-b53f-29f9a37f45a3-secret-volume\") on node \"crc\" DevicePath \"\""
Sep 29 10:45:05 crc kubenswrapper[4991]: I0929 10:45:05.367236 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc"
Sep 29 10:45:05 crc kubenswrapper[4991]: I0929 10:45:05.481358 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8"]
Sep 29 10:45:05 crc kubenswrapper[4991]: I0929 10:45:05.495704 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319000-pdns8"]
Sep 29 10:45:06 crc kubenswrapper[4991]: I0929 10:45:06.939565 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d041a6c-578e-4634-bf56-333953aebbc5" path="/var/lib/kubelet/pods/2d041a6c-578e-4634-bf56-333953aebbc5/volumes"
Sep 29 10:45:11 crc kubenswrapper[4991]: I0929 10:45:11.926512 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444"
Sep 29 10:45:11 crc kubenswrapper[4991]: E0929 10:45:11.927665 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:45:22 crc kubenswrapper[4991]: I0929 10:45:22.926134 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444"
Sep 29 10:45:22 crc kubenswrapper[4991]: E0929 10:45:22.926893 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:45:24 crc kubenswrapper[4991]: I0929 10:45:24.123041 4991 scope.go:117] "RemoveContainer" containerID="8a4acf166fee6ceea35801943b721e2ced81ac04b9c22bb7c85642faf0150855"
Sep 29 10:45:37 crc kubenswrapper[4991]: I0929 10:45:37.927298 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444"
Sep 29 10:45:37 crc kubenswrapper[4991]: E0929 10:45:37.928288 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:45:49 crc kubenswrapper[4991]: I0929 10:45:49.926339 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444"
Sep 29 10:45:49 crc kubenswrapper[4991]: E0929 10:45:49.927117 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:46:03 crc kubenswrapper[4991]: I0929 10:46:03.925999 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444"
Sep 29 10:46:03 crc kubenswrapper[4991]: E0929 10:46:03.926659 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:46:14 crc kubenswrapper[4991]: I0929 10:46:14.935254 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444"
Sep 29 10:46:14 crc kubenswrapper[4991]: E0929 10:46:14.936204 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 10:46:28 crc kubenswrapper[4991]: I0929 10:46:28.933585 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444"
Sep 29 10:46:28 crc kubenswrapper[4991]: E0929 10:46:28.934700 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"ab7d0abd8427ca02df285bc178f4984ea4095b24010add656dd78f11769022b0"} Sep 29 10:47:04 crc kubenswrapper[4991]: E0929 10:47:04.829244 4991 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.184:46800->38.129.56.184:37001: write tcp 38.129.56.184:46800->38.129.56.184:37001: write: broken pipe Sep 29 10:48:28 crc kubenswrapper[4991]: E0929 10:48:28.888745 4991 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.184:56020->38.129.56.184:37001: write tcp 38.129.56.184:56020->38.129.56.184:37001: write: broken pipe Sep 29 10:49:07 crc kubenswrapper[4991]: I0929 10:49:07.947249 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:49:07 crc kubenswrapper[4991]: I0929 10:49:07.947899 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:49:37 crc kubenswrapper[4991]: I0929 10:49:37.947121 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:49:37 crc kubenswrapper[4991]: I0929 10:49:37.947648 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:50:00 crc kubenswrapper[4991]: I0929 10:50:00.444743 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-npk6g"] Sep 29 10:50:00 crc kubenswrapper[4991]: E0929 10:50:00.445861 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccf30f60-fb14-43ee-b53f-29f9a37f45a3" containerName="collect-profiles" Sep 29 10:50:00 crc kubenswrapper[4991]: I0929 10:50:00.445878 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccf30f60-fb14-43ee-b53f-29f9a37f45a3" containerName="collect-profiles" Sep 29 10:50:00 crc kubenswrapper[4991]: I0929 10:50:00.446169 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccf30f60-fb14-43ee-b53f-29f9a37f45a3" containerName="collect-profiles" Sep 29 10:50:00 crc kubenswrapper[4991]: I0929 10:50:00.447923 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npk6g" Sep 29 10:50:00 crc kubenswrapper[4991]: I0929 10:50:00.467498 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-npk6g"] Sep 29 10:50:00 crc kubenswrapper[4991]: I0929 10:50:00.546342 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90079288-3cb5-47a2-8952-8ddf95e96a15-catalog-content\") pod \"redhat-marketplace-npk6g\" (UID: \"90079288-3cb5-47a2-8952-8ddf95e96a15\") " pod="openshift-marketplace/redhat-marketplace-npk6g" Sep 29 10:50:00 crc kubenswrapper[4991]: I0929 10:50:00.546396 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90079288-3cb5-47a2-8952-8ddf95e96a15-utilities\") pod \"redhat-marketplace-npk6g\" (UID: \"90079288-3cb5-47a2-8952-8ddf95e96a15\") " pod="openshift-marketplace/redhat-marketplace-npk6g" Sep 29 10:50:00 crc kubenswrapper[4991]: I0929 10:50:00.546426 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcp4w\" (UniqueName: \"kubernetes.io/projected/90079288-3cb5-47a2-8952-8ddf95e96a15-kube-api-access-rcp4w\") pod \"redhat-marketplace-npk6g\" (UID: \"90079288-3cb5-47a2-8952-8ddf95e96a15\") " pod="openshift-marketplace/redhat-marketplace-npk6g" Sep 29 10:50:00 crc kubenswrapper[4991]: I0929 10:50:00.650737 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90079288-3cb5-47a2-8952-8ddf95e96a15-catalog-content\") pod \"redhat-marketplace-npk6g\" (UID: \"90079288-3cb5-47a2-8952-8ddf95e96a15\") " pod="openshift-marketplace/redhat-marketplace-npk6g" Sep 29 10:50:00 crc kubenswrapper[4991]: I0929 10:50:00.650795 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90079288-3cb5-47a2-8952-8ddf95e96a15-utilities\") pod \"redhat-marketplace-npk6g\" (UID: \"90079288-3cb5-47a2-8952-8ddf95e96a15\") " pod="openshift-marketplace/redhat-marketplace-npk6g" Sep 29 10:50:00 crc kubenswrapper[4991]: I0929 10:50:00.650837 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcp4w\" (UniqueName: \"kubernetes.io/projected/90079288-3cb5-47a2-8952-8ddf95e96a15-kube-api-access-rcp4w\") pod \"redhat-marketplace-npk6g\" (UID: \"90079288-3cb5-47a2-8952-8ddf95e96a15\") " pod="openshift-marketplace/redhat-marketplace-npk6g" Sep 29 10:50:00 crc kubenswrapper[4991]: I0929 10:50:00.651487 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90079288-3cb5-47a2-8952-8ddf95e96a15-catalog-content\") pod \"redhat-marketplace-npk6g\" (UID: \"90079288-3cb5-47a2-8952-8ddf95e96a15\") " pod="openshift-marketplace/redhat-marketplace-npk6g" Sep 29 10:50:00 crc kubenswrapper[4991]: I0929 10:50:00.651631 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90079288-3cb5-47a2-8952-8ddf95e96a15-utilities\") pod \"redhat-marketplace-npk6g\" (UID: \"90079288-3cb5-47a2-8952-8ddf95e96a15\") " pod="openshift-marketplace/redhat-marketplace-npk6g" Sep 29 10:50:00 crc kubenswrapper[4991]: I0929 10:50:00.672663 4991 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rcp4w\" (UniqueName: \"kubernetes.io/projected/90079288-3cb5-47a2-8952-8ddf95e96a15-kube-api-access-rcp4w\") pod \"redhat-marketplace-npk6g\" (UID: \"90079288-3cb5-47a2-8952-8ddf95e96a15\") " pod="openshift-marketplace/redhat-marketplace-npk6g" Sep 29 10:50:00 crc kubenswrapper[4991]: I0929 10:50:00.790880 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npk6g" Sep 29 10:50:01 crc kubenswrapper[4991]: I0929 10:50:01.389723 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-npk6g"] Sep 29 10:50:01 crc kubenswrapper[4991]: I0929 10:50:01.814827 4991 generic.go:334] "Generic (PLEG): container finished" podID="90079288-3cb5-47a2-8952-8ddf95e96a15" containerID="c29ee338543041a7f20f3ad7aca0e34a3ad648563aa4af2ee93eff2f594be1ae" exitCode=0 Sep 29 10:50:01 crc kubenswrapper[4991]: I0929 10:50:01.814927 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npk6g" event={"ID":"90079288-3cb5-47a2-8952-8ddf95e96a15","Type":"ContainerDied","Data":"c29ee338543041a7f20f3ad7aca0e34a3ad648563aa4af2ee93eff2f594be1ae"} Sep 29 10:50:01 crc kubenswrapper[4991]: I0929 10:50:01.815170 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npk6g" event={"ID":"90079288-3cb5-47a2-8952-8ddf95e96a15","Type":"ContainerStarted","Data":"157ce84e3df6b9d885aa58dd272d81a8faa3eb3615ceec8232885da23a8e63fb"} Sep 29 10:50:01 crc kubenswrapper[4991]: I0929 10:50:01.817082 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 29 10:50:01 crc kubenswrapper[4991]: E0929 10:50:01.878337 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90079288_3cb5_47a2_8952_8ddf95e96a15.slice/crio-conmon-c29ee338543041a7f20f3ad7aca0e34a3ad648563aa4af2ee93eff2f594be1ae.scope\": RecentStats: unable to find data in memory cache]" Sep 29 10:50:03 crc kubenswrapper[4991]: I0929 10:50:03.835137 4991 generic.go:334] "Generic (PLEG): container finished" podID="90079288-3cb5-47a2-8952-8ddf95e96a15" containerID="6bc433ce4438a776eb60be417f35e09f7db070c1b6ce80f16bf27a517ca079f5" exitCode=0 Sep 29 10:50:03 crc kubenswrapper[4991]: I0929 10:50:03.835231 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npk6g" event={"ID":"90079288-3cb5-47a2-8952-8ddf95e96a15","Type":"ContainerDied","Data":"6bc433ce4438a776eb60be417f35e09f7db070c1b6ce80f16bf27a517ca079f5"} Sep 29 10:50:05 crc kubenswrapper[4991]: I0929 10:50:05.860343 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npk6g" event={"ID":"90079288-3cb5-47a2-8952-8ddf95e96a15","Type":"ContainerStarted","Data":"6b8906f6b0c0e72419c4c49db19987918a0fe00780b5e1cd9eb5f10f73656e32"} Sep 29 10:50:05 crc kubenswrapper[4991]: I0929 10:50:05.883541 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-npk6g" podStartSLOduration=3.305963211 podStartE2EDuration="5.883517981s" podCreationTimestamp="2025-09-29 10:50:00 +0000 UTC" firstStartedPulling="2025-09-29 10:50:01.816844516 +0000 UTC m=+4337.672772534" lastFinishedPulling="2025-09-29 10:50:04.394399276 +0000 UTC m=+4340.250327304" observedRunningTime="2025-09-29 10:50:05.879450325 
+0000 UTC m=+4341.735378363" watchObservedRunningTime="2025-09-29 10:50:05.883517981 +0000 UTC m=+4341.739446009" Sep 29 10:50:07 crc kubenswrapper[4991]: I0929 10:50:07.947120 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:50:07 crc kubenswrapper[4991]: I0929 10:50:07.947466 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:50:07 crc kubenswrapper[4991]: I0929 10:50:07.947508 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 10:50:07 crc kubenswrapper[4991]: I0929 10:50:07.948392 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ab7d0abd8427ca02df285bc178f4984ea4095b24010add656dd78f11769022b0"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 10:50:07 crc kubenswrapper[4991]: I0929 10:50:07.948463 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://ab7d0abd8427ca02df285bc178f4984ea4095b24010add656dd78f11769022b0" gracePeriod=600 Sep 29 10:50:08 crc kubenswrapper[4991]: I0929 10:50:08.897020 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="ab7d0abd8427ca02df285bc178f4984ea4095b24010add656dd78f11769022b0" exitCode=0 Sep 29 10:50:08 crc kubenswrapper[4991]: I0929 10:50:08.897052 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"ab7d0abd8427ca02df285bc178f4984ea4095b24010add656dd78f11769022b0"} Sep 29 10:50:08 crc kubenswrapper[4991]: I0929 10:50:08.897618 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3"} Sep 29 10:50:08 crc kubenswrapper[4991]: I0929 10:50:08.897637 4991 scope.go:117] "RemoveContainer" containerID="7ced171855f34c3ba10507f5c0f9c2db0bf574284fcd67e890e446e62b89a444" Sep 29 10:50:10 crc kubenswrapper[4991]: I0929 10:50:10.791999 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-npk6g" Sep 29 10:50:10 crc kubenswrapper[4991]: I0929 10:50:10.792561 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-npk6g" Sep 29 10:50:10 crc kubenswrapper[4991]: I0929 10:50:10.863600 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-npk6g" Sep 29 10:50:10 crc kubenswrapper[4991]: I0929 10:50:10.972836 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-npk6g" Sep 29 10:50:11 crc kubenswrapper[4991]: I0929 10:50:11.104801 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-npk6g"] Sep 29 10:50:12 crc kubenswrapper[4991]: I0929 10:50:12.946392 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-npk6g" podUID="90079288-3cb5-47a2-8952-8ddf95e96a15" containerName="registry-server" containerID="cri-o://6b8906f6b0c0e72419c4c49db19987918a0fe00780b5e1cd9eb5f10f73656e32" gracePeriod=2 Sep 29 10:50:13 crc kubenswrapper[4991]: I0929 10:50:13.604382 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npk6g" Sep 29 10:50:13 crc kubenswrapper[4991]: I0929 10:50:13.665853 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcp4w\" (UniqueName: \"kubernetes.io/projected/90079288-3cb5-47a2-8952-8ddf95e96a15-kube-api-access-rcp4w\") pod \"90079288-3cb5-47a2-8952-8ddf95e96a15\" (UID: \"90079288-3cb5-47a2-8952-8ddf95e96a15\") " Sep 29 10:50:13 crc kubenswrapper[4991]: I0929 10:50:13.665934 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90079288-3cb5-47a2-8952-8ddf95e96a15-catalog-content\") pod \"90079288-3cb5-47a2-8952-8ddf95e96a15\" (UID: \"90079288-3cb5-47a2-8952-8ddf95e96a15\") " Sep 29 10:50:13 crc kubenswrapper[4991]: I0929 10:50:13.666077 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90079288-3cb5-47a2-8952-8ddf95e96a15-utilities\") pod \"90079288-3cb5-47a2-8952-8ddf95e96a15\" (UID: \"90079288-3cb5-47a2-8952-8ddf95e96a15\") " Sep 29 10:50:13 crc kubenswrapper[4991]: I0929 10:50:13.669224 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90079288-3cb5-47a2-8952-8ddf95e96a15-utilities" (OuterVolumeSpecName: "utilities") pod "90079288-3cb5-47a2-8952-8ddf95e96a15" (UID: "90079288-3cb5-47a2-8952-8ddf95e96a15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:50:13 crc kubenswrapper[4991]: I0929 10:50:13.686728 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90079288-3cb5-47a2-8952-8ddf95e96a15-kube-api-access-rcp4w" (OuterVolumeSpecName: "kube-api-access-rcp4w") pod "90079288-3cb5-47a2-8952-8ddf95e96a15" (UID: "90079288-3cb5-47a2-8952-8ddf95e96a15"). InnerVolumeSpecName "kube-api-access-rcp4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:50:13 crc kubenswrapper[4991]: I0929 10:50:13.687736 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90079288-3cb5-47a2-8952-8ddf95e96a15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90079288-3cb5-47a2-8952-8ddf95e96a15" (UID: "90079288-3cb5-47a2-8952-8ddf95e96a15"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:50:13 crc kubenswrapper[4991]: I0929 10:50:13.769815 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcp4w\" (UniqueName: \"kubernetes.io/projected/90079288-3cb5-47a2-8952-8ddf95e96a15-kube-api-access-rcp4w\") on node \"crc\" DevicePath \"\"" Sep 29 10:50:13 crc kubenswrapper[4991]: I0929 10:50:13.769857 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90079288-3cb5-47a2-8952-8ddf95e96a15-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 10:50:13 crc kubenswrapper[4991]: I0929 10:50:13.769868 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90079288-3cb5-47a2-8952-8ddf95e96a15-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 10:50:13 crc kubenswrapper[4991]: I0929 10:50:13.966264 4991 generic.go:334] "Generic (PLEG): container finished" podID="90079288-3cb5-47a2-8952-8ddf95e96a15" containerID="6b8906f6b0c0e72419c4c49db19987918a0fe00780b5e1cd9eb5f10f73656e32" exitCode=0 Sep 29 10:50:13 crc kubenswrapper[4991]: I0929 10:50:13.966312 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npk6g" event={"ID":"90079288-3cb5-47a2-8952-8ddf95e96a15","Type":"ContainerDied","Data":"6b8906f6b0c0e72419c4c49db19987918a0fe00780b5e1cd9eb5f10f73656e32"} Sep 29 10:50:13 crc kubenswrapper[4991]: I0929 10:50:13.966341 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npk6g" event={"ID":"90079288-3cb5-47a2-8952-8ddf95e96a15","Type":"ContainerDied","Data":"157ce84e3df6b9d885aa58dd272d81a8faa3eb3615ceec8232885da23a8e63fb"} Sep 29 10:50:13 crc kubenswrapper[4991]: I0929 10:50:13.966356 4991 scope.go:117] "RemoveContainer" containerID="6b8906f6b0c0e72419c4c49db19987918a0fe00780b5e1cd9eb5f10f73656e32" Sep 29 10:50:13 crc kubenswrapper[4991]: I0929 10:50:13.966558 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npk6g" Sep 29 10:50:13 crc kubenswrapper[4991]: I0929 10:50:13.992278 4991 scope.go:117] "RemoveContainer" containerID="6bc433ce4438a776eb60be417f35e09f7db070c1b6ce80f16bf27a517ca079f5" Sep 29 10:50:14 crc kubenswrapper[4991]: I0929 10:50:14.029152 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-npk6g"] Sep 29 10:50:14 crc kubenswrapper[4991]: I0929 10:50:14.041240 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-npk6g"] Sep 29 10:50:14 crc kubenswrapper[4991]: I0929 10:50:14.050837 4991 scope.go:117] "RemoveContainer" containerID="c29ee338543041a7f20f3ad7aca0e34a3ad648563aa4af2ee93eff2f594be1ae" Sep 29 10:50:14 crc kubenswrapper[4991]: I0929 10:50:14.095718 4991 scope.go:117] "RemoveContainer" containerID="6b8906f6b0c0e72419c4c49db19987918a0fe00780b5e1cd9eb5f10f73656e32" Sep 29 10:50:14 crc kubenswrapper[4991]: E0929 10:50:14.096293 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b8906f6b0c0e72419c4c49db19987918a0fe00780b5e1cd9eb5f10f73656e32\": container with ID starting with 6b8906f6b0c0e72419c4c49db19987918a0fe00780b5e1cd9eb5f10f73656e32 not found: ID does not exist" containerID="6b8906f6b0c0e72419c4c49db19987918a0fe00780b5e1cd9eb5f10f73656e32" Sep 29 10:50:14 crc kubenswrapper[4991]: I0929 10:50:14.096365 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b8906f6b0c0e72419c4c49db19987918a0fe00780b5e1cd9eb5f10f73656e32"} err="failed to get container status \"6b8906f6b0c0e72419c4c49db19987918a0fe00780b5e1cd9eb5f10f73656e32\": rpc error: code = NotFound desc = could not find container \"6b8906f6b0c0e72419c4c49db19987918a0fe00780b5e1cd9eb5f10f73656e32\": container with ID starting with 6b8906f6b0c0e72419c4c49db19987918a0fe00780b5e1cd9eb5f10f73656e32 not found: ID does not exist" Sep 29 10:50:14 crc kubenswrapper[4991]: I0929 10:50:14.096398 4991 scope.go:117] "RemoveContainer" containerID="6bc433ce4438a776eb60be417f35e09f7db070c1b6ce80f16bf27a517ca079f5" Sep 29 10:50:14 crc kubenswrapper[4991]: E0929 10:50:14.096760 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bc433ce4438a776eb60be417f35e09f7db070c1b6ce80f16bf27a517ca079f5\": container with ID starting with 6bc433ce4438a776eb60be417f35e09f7db070c1b6ce80f16bf27a517ca079f5 not found: ID does not exist" containerID="6bc433ce4438a776eb60be417f35e09f7db070c1b6ce80f16bf27a517ca079f5" Sep 29 10:50:14 crc kubenswrapper[4991]: I0929 10:50:14.096805 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bc433ce4438a776eb60be417f35e09f7db070c1b6ce80f16bf27a517ca079f5"} err="failed to get container status \"6bc433ce4438a776eb60be417f35e09f7db070c1b6ce80f16bf27a517ca079f5\": rpc error: code = NotFound desc = could not find container \"6bc433ce4438a776eb60be417f35e09f7db070c1b6ce80f16bf27a517ca079f5\": container with ID starting with 6bc433ce4438a776eb60be417f35e09f7db070c1b6ce80f16bf27a517ca079f5 not found: ID does not exist" Sep 29 10:50:14 crc kubenswrapper[4991]: I0929 10:50:14.096835 4991 scope.go:117] "RemoveContainer" containerID="c29ee338543041a7f20f3ad7aca0e34a3ad648563aa4af2ee93eff2f594be1ae" Sep 29 10:50:14 crc kubenswrapper[4991]: E0929 10:50:14.097087 4991 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c29ee338543041a7f20f3ad7aca0e34a3ad648563aa4af2ee93eff2f594be1ae\": container with ID starting with c29ee338543041a7f20f3ad7aca0e34a3ad648563aa4af2ee93eff2f594be1ae not found: ID does not exist" containerID="c29ee338543041a7f20f3ad7aca0e34a3ad648563aa4af2ee93eff2f594be1ae" Sep 29 10:50:14 crc kubenswrapper[4991]: I0929 10:50:14.097117 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c29ee338543041a7f20f3ad7aca0e34a3ad648563aa4af2ee93eff2f594be1ae"} err="failed to get container status \"c29ee338543041a7f20f3ad7aca0e34a3ad648563aa4af2ee93eff2f594be1ae\": rpc error: code = NotFound desc = could not find container \"c29ee338543041a7f20f3ad7aca0e34a3ad648563aa4af2ee93eff2f594be1ae\": container with ID starting with c29ee338543041a7f20f3ad7aca0e34a3ad648563aa4af2ee93eff2f594be1ae not found: ID does not exist" Sep 29 10:50:14 crc kubenswrapper[4991]: I0929 10:50:14.938129 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90079288-3cb5-47a2-8952-8ddf95e96a15" path="/var/lib/kubelet/pods/90079288-3cb5-47a2-8952-8ddf95e96a15/volumes" Sep 29 10:52:37 crc kubenswrapper[4991]: I0929 10:52:37.946812 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:52:37 crc kubenswrapper[4991]: I0929 10:52:37.947518 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:53:07 crc kubenswrapper[4991]: I0929 10:53:07.948540 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:53:07 crc kubenswrapper[4991]: I0929 10:53:07.949294 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:53:37 crc kubenswrapper[4991]: I0929 10:53:37.947197 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 10:53:37 crc kubenswrapper[4991]: I0929 10:53:37.947712 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 10:53:37 crc kubenswrapper[4991]: I0929 10:53:37.947765 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 10:53:37 crc kubenswrapper[4991]: I0929 10:53:37.948686 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 10:53:37 crc kubenswrapper[4991]: I0929 10:53:37.948742 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" gracePeriod=600 Sep 29 10:53:38 crc kubenswrapper[4991]: E0929 10:53:38.082541 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:53:38 crc kubenswrapper[4991]: I0929 10:53:38.201062 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" exitCode=0 Sep 29 10:53:38 crc kubenswrapper[4991]: I0929 10:53:38.201114 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3"} Sep 29 10:53:38 crc kubenswrapper[4991]: I0929 10:53:38.201154 4991 scope.go:117] "RemoveContainer" containerID="ab7d0abd8427ca02df285bc178f4984ea4095b24010add656dd78f11769022b0" Sep 29 10:53:38 crc kubenswrapper[4991]: I0929 10:53:38.202106 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:53:38 crc kubenswrapper[4991]: E0929 10:53:38.202581 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:53:52 crc kubenswrapper[4991]: I0929 10:53:52.926317 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:53:52 crc kubenswrapper[4991]: E0929 10:53:52.927153 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:54:02 crc 
kubenswrapper[4991]: I0929 10:54:02.287927 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qgz98"] Sep 29 10:54:02 crc kubenswrapper[4991]: E0929 10:54:02.289016 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90079288-3cb5-47a2-8952-8ddf95e96a15" containerName="extract-content" Sep 29 10:54:02 crc kubenswrapper[4991]: I0929 10:54:02.289032 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="90079288-3cb5-47a2-8952-8ddf95e96a15" containerName="extract-content" Sep 29 10:54:02 crc kubenswrapper[4991]: E0929 10:54:02.289052 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90079288-3cb5-47a2-8952-8ddf95e96a15" containerName="extract-utilities" Sep 29 10:54:02 crc kubenswrapper[4991]: I0929 10:54:02.289058 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="90079288-3cb5-47a2-8952-8ddf95e96a15" containerName="extract-utilities" Sep 29 10:54:02 crc kubenswrapper[4991]: E0929 10:54:02.289108 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90079288-3cb5-47a2-8952-8ddf95e96a15" containerName="registry-server" Sep 29 10:54:02 crc kubenswrapper[4991]: I0929 10:54:02.289114 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="90079288-3cb5-47a2-8952-8ddf95e96a15" containerName="registry-server" Sep 29 10:54:02 crc kubenswrapper[4991]: I0929 10:54:02.289328 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="90079288-3cb5-47a2-8952-8ddf95e96a15" containerName="registry-server" Sep 29 10:54:02 crc kubenswrapper[4991]: I0929 10:54:02.291269 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qgz98" Sep 29 10:54:02 crc kubenswrapper[4991]: I0929 10:54:02.300428 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qgz98"] Sep 29 10:54:02 crc kubenswrapper[4991]: I0929 10:54:02.400419 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8x9d\" (UniqueName: \"kubernetes.io/projected/c3ef838d-c8f4-4ee5-8035-f21a22cf5e16-kube-api-access-c8x9d\") pod \"redhat-operators-qgz98\" (UID: \"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16\") " pod="openshift-marketplace/redhat-operators-qgz98" Sep 29 10:54:02 crc kubenswrapper[4991]: I0929 10:54:02.400483 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3ef838d-c8f4-4ee5-8035-f21a22cf5e16-utilities\") pod \"redhat-operators-qgz98\" (UID: \"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16\") " pod="openshift-marketplace/redhat-operators-qgz98" Sep 29 10:54:02 crc kubenswrapper[4991]: I0929 10:54:02.401056 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3ef838d-c8f4-4ee5-8035-f21a22cf5e16-catalog-content\") pod \"redhat-operators-qgz98\" (UID: \"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16\") " pod="openshift-marketplace/redhat-operators-qgz98" Sep 29 10:54:02 crc kubenswrapper[4991]: I0929 10:54:02.502974 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8x9d\" (UniqueName: \"kubernetes.io/projected/c3ef838d-c8f4-4ee5-8035-f21a22cf5e16-kube-api-access-c8x9d\") pod \"redhat-operators-qgz98\" (UID: \"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16\") " pod="openshift-marketplace/redhat-operators-qgz98" Sep 29 10:54:02 
crc kubenswrapper[4991]: I0929 10:54:02.503044 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3ef838d-c8f4-4ee5-8035-f21a22cf5e16-utilities\") pod \"redhat-operators-qgz98\" (UID: \"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16\") " pod="openshift-marketplace/redhat-operators-qgz98" Sep 29 10:54:02 crc kubenswrapper[4991]: I0929 10:54:02.503221 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3ef838d-c8f4-4ee5-8035-f21a22cf5e16-catalog-content\") pod \"redhat-operators-qgz98\" (UID: \"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16\") " pod="openshift-marketplace/redhat-operators-qgz98" Sep 29 10:54:02 crc kubenswrapper[4991]: I0929 10:54:02.503591 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3ef838d-c8f4-4ee5-8035-f21a22cf5e16-utilities\") pod \"redhat-operators-qgz98\" (UID: \"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16\") " pod="openshift-marketplace/redhat-operators-qgz98" Sep 29 10:54:02 crc kubenswrapper[4991]: I0929 10:54:02.505100 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3ef838d-c8f4-4ee5-8035-f21a22cf5e16-catalog-content\") pod \"redhat-operators-qgz98\" (UID: \"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16\") " pod="openshift-marketplace/redhat-operators-qgz98" Sep 29 10:54:02 crc kubenswrapper[4991]: I0929 10:54:02.524420 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8x9d\" (UniqueName: \"kubernetes.io/projected/c3ef838d-c8f4-4ee5-8035-f21a22cf5e16-kube-api-access-c8x9d\") pod \"redhat-operators-qgz98\" (UID: \"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16\") " pod="openshift-marketplace/redhat-operators-qgz98" Sep 29 10:54:02 crc kubenswrapper[4991]: I0929 10:54:02.622919 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qgz98" Sep 29 10:54:03 crc kubenswrapper[4991]: I0929 10:54:03.194435 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qgz98"] Sep 29 10:54:03 crc kubenswrapper[4991]: I0929 10:54:03.484243 4991 generic.go:334] "Generic (PLEG): container finished" podID="c3ef838d-c8f4-4ee5-8035-f21a22cf5e16" containerID="77db4cdf4d1bd24809c070be86032c7c56c4614379781b74be1b70e88eff6cd9" exitCode=0 Sep 29 10:54:03 crc kubenswrapper[4991]: I0929 10:54:03.484329 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgz98" event={"ID":"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16","Type":"ContainerDied","Data":"77db4cdf4d1bd24809c070be86032c7c56c4614379781b74be1b70e88eff6cd9"} Sep 29 10:54:03 crc kubenswrapper[4991]: I0929 10:54:03.485294 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgz98" event={"ID":"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16","Type":"ContainerStarted","Data":"9f13a81702b9b94e5374f7afb289ec07c8b1eb818bf96885d6f73ad0aa49352e"} Sep 29 10:54:04 crc kubenswrapper[4991]: I0929 10:54:04.496728 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgz98" event={"ID":"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16","Type":"ContainerStarted","Data":"402dd1ba71d171c3723f064fe800bec5ec16b2f3d073ccff47a33bed8d5a2ad1"} Sep 29 10:54:07 crc kubenswrapper[4991]: I0929 10:54:07.926160 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:54:07 crc kubenswrapper[4991]: E0929 10:54:07.927100 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:54:08 crc kubenswrapper[4991]: I0929 10:54:08.550577 4991 generic.go:334] "Generic (PLEG): container finished" podID="c3ef838d-c8f4-4ee5-8035-f21a22cf5e16" containerID="402dd1ba71d171c3723f064fe800bec5ec16b2f3d073ccff47a33bed8d5a2ad1" exitCode=0 Sep 29 10:54:08 crc kubenswrapper[4991]: I0929 10:54:08.550623 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgz98" event={"ID":"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16","Type":"ContainerDied","Data":"402dd1ba71d171c3723f064fe800bec5ec16b2f3d073ccff47a33bed8d5a2ad1"} Sep 29 10:54:09 crc kubenswrapper[4991]: I0929 10:54:09.564286 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgz98" event={"ID":"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16","Type":"ContainerStarted","Data":"747ad5f37dd221674e39a62194ec6c6113301fb77262424aaedb1e89f971d953"} Sep 29 10:54:09 crc kubenswrapper[4991]: I0929 10:54:09.590244 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qgz98" podStartSLOduration=1.8065009440000002 podStartE2EDuration="7.590219962s" podCreationTimestamp="2025-09-29 10:54:02 +0000 UTC" firstStartedPulling="2025-09-29 10:54:03.486037702 +0000 UTC m=+4579.341965720" lastFinishedPulling="2025-09-29 10:54:09.26975671 +0000 UTC m=+4585.125684738" observedRunningTime="2025-09-29 10:54:09.580832816 
+0000 UTC m=+4585.436760844" watchObservedRunningTime="2025-09-29 10:54:09.590219962 +0000 UTC m=+4585.446147990" Sep 29 10:54:12 crc kubenswrapper[4991]: I0929 10:54:12.623640 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qgz98" Sep 29 10:54:12 crc kubenswrapper[4991]: I0929 10:54:12.624189 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qgz98" Sep 29 10:54:13 crc kubenswrapper[4991]: I0929 10:54:13.672131 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qgz98" podUID="c3ef838d-c8f4-4ee5-8035-f21a22cf5e16" containerName="registry-server" probeResult="failure" output=< Sep 29 10:54:13 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s Sep 29 10:54:13 crc kubenswrapper[4991]: > Sep 29 10:54:17 crc kubenswrapper[4991]: I0929 10:54:17.619941 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hvc4j"] Sep 29 10:54:17 crc kubenswrapper[4991]: I0929 10:54:17.623436 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvc4j" Sep 29 10:54:17 crc kubenswrapper[4991]: I0929 10:54:17.642162 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hvc4j"] Sep 29 10:54:17 crc kubenswrapper[4991]: I0929 10:54:17.776705 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc04f5d4-c5e2-42e7-aec1-29717b3d301c-utilities\") pod \"community-operators-hvc4j\" (UID: \"bc04f5d4-c5e2-42e7-aec1-29717b3d301c\") " pod="openshift-marketplace/community-operators-hvc4j" Sep 29 10:54:17 crc kubenswrapper[4991]: I0929 10:54:17.776837 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcmzc\" (UniqueName: \"kubernetes.io/projected/bc04f5d4-c5e2-42e7-aec1-29717b3d301c-kube-api-access-fcmzc\") pod \"community-operators-hvc4j\" (UID: \"bc04f5d4-c5e2-42e7-aec1-29717b3d301c\") " pod="openshift-marketplace/community-operators-hvc4j" Sep 29 10:54:17 crc kubenswrapper[4991]: I0929 10:54:17.776912 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc04f5d4-c5e2-42e7-aec1-29717b3d301c-catalog-content\") pod \"community-operators-hvc4j\" (UID: \"bc04f5d4-c5e2-42e7-aec1-29717b3d301c\") " pod="openshift-marketplace/community-operators-hvc4j" Sep 29 10:54:17 crc kubenswrapper[4991]: I0929 10:54:17.878626 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcmzc\" (UniqueName: \"kubernetes.io/projected/bc04f5d4-c5e2-42e7-aec1-29717b3d301c-kube-api-access-fcmzc\") pod \"community-operators-hvc4j\" (UID: \"bc04f5d4-c5e2-42e7-aec1-29717b3d301c\") " pod="openshift-marketplace/community-operators-hvc4j" Sep 29 10:54:17 crc kubenswrapper[4991]: I0929 10:54:17.878740 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc04f5d4-c5e2-42e7-aec1-29717b3d301c-catalog-content\") pod \"community-operators-hvc4j\" (UID: \"bc04f5d4-c5e2-42e7-aec1-29717b3d301c\") " pod="openshift-marketplace/community-operators-hvc4j" Sep 29 10:54:17 crc kubenswrapper[4991]: I0929 
10:54:17.878825 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc04f5d4-c5e2-42e7-aec1-29717b3d301c-utilities\") pod \"community-operators-hvc4j\" (UID: \"bc04f5d4-c5e2-42e7-aec1-29717b3d301c\") " pod="openshift-marketplace/community-operators-hvc4j" Sep 29 10:54:17 crc kubenswrapper[4991]: I0929 10:54:17.879204 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc04f5d4-c5e2-42e7-aec1-29717b3d301c-catalog-content\") pod \"community-operators-hvc4j\" (UID: \"bc04f5d4-c5e2-42e7-aec1-29717b3d301c\") " pod="openshift-marketplace/community-operators-hvc4j" Sep 29 10:54:17 crc kubenswrapper[4991]: I0929 10:54:17.879358 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc04f5d4-c5e2-42e7-aec1-29717b3d301c-utilities\") pod \"community-operators-hvc4j\" (UID: \"bc04f5d4-c5e2-42e7-aec1-29717b3d301c\") " pod="openshift-marketplace/community-operators-hvc4j" Sep 29 10:54:17 crc kubenswrapper[4991]: I0929 10:54:17.898070 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcmzc\" (UniqueName: \"kubernetes.io/projected/bc04f5d4-c5e2-42e7-aec1-29717b3d301c-kube-api-access-fcmzc\") pod \"community-operators-hvc4j\" (UID: \"bc04f5d4-c5e2-42e7-aec1-29717b3d301c\") " pod="openshift-marketplace/community-operators-hvc4j" Sep 29 10:54:17 crc kubenswrapper[4991]: I0929 10:54:17.946835 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvc4j" Sep 29 10:54:18 crc kubenswrapper[4991]: I0929 10:54:18.611588 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hvc4j"] Sep 29 10:54:18 crc kubenswrapper[4991]: I0929 10:54:18.675450 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvc4j" event={"ID":"bc04f5d4-c5e2-42e7-aec1-29717b3d301c","Type":"ContainerStarted","Data":"4d5e71aecf4492c3880883606b086787d336adf489e48e9be6dcf68934103da0"} Sep 29 10:54:19 crc kubenswrapper[4991]: I0929 10:54:19.687221 4991 generic.go:334] "Generic (PLEG): container finished" podID="bc04f5d4-c5e2-42e7-aec1-29717b3d301c" containerID="f4e06c0dbca0343a6c4e273dbc400fa859f78f4fd9e331eaf9a5fefadff74cfe" exitCode=0 Sep 29 10:54:19 crc kubenswrapper[4991]: I0929 10:54:19.687309 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvc4j" event={"ID":"bc04f5d4-c5e2-42e7-aec1-29717b3d301c","Type":"ContainerDied","Data":"f4e06c0dbca0343a6c4e273dbc400fa859f78f4fd9e331eaf9a5fefadff74cfe"} Sep 29 10:54:19 crc kubenswrapper[4991]: I0929 10:54:19.927354 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:54:19 crc kubenswrapper[4991]: E0929 10:54:19.928469 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:54:20 crc kubenswrapper[4991]: I0929 10:54:20.700652 4991 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-hvc4j" event={"ID":"bc04f5d4-c5e2-42e7-aec1-29717b3d301c","Type":"ContainerStarted","Data":"dcb00c5b4c1c35b8df494404aa634a297b19d264a00e07d357a2bcb9250dcb25"} Sep 29 10:54:22 crc kubenswrapper[4991]: I0929 10:54:22.716804 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qgz98" Sep 29 10:54:22 crc kubenswrapper[4991]: I0929 10:54:22.725342 4991 generic.go:334] "Generic (PLEG): container finished" podID="bc04f5d4-c5e2-42e7-aec1-29717b3d301c" containerID="dcb00c5b4c1c35b8df494404aa634a297b19d264a00e07d357a2bcb9250dcb25" exitCode=0 Sep 29 10:54:22 crc kubenswrapper[4991]: I0929 10:54:22.725390 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvc4j" event={"ID":"bc04f5d4-c5e2-42e7-aec1-29717b3d301c","Type":"ContainerDied","Data":"dcb00c5b4c1c35b8df494404aa634a297b19d264a00e07d357a2bcb9250dcb25"} Sep 29 10:54:22 crc kubenswrapper[4991]: I0929 10:54:22.792650 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qgz98" Sep 29 10:54:23 crc kubenswrapper[4991]: I0929 10:54:23.736803 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvc4j" event={"ID":"bc04f5d4-c5e2-42e7-aec1-29717b3d301c","Type":"ContainerStarted","Data":"e059c36846d5395907a90b5cea269884aad236cefe923a59e83588a5ff5b4264"} Sep 29 10:54:23 crc kubenswrapper[4991]: I0929 10:54:23.773354 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hvc4j" podStartSLOduration=3.224104084 podStartE2EDuration="6.773332148s" podCreationTimestamp="2025-09-29 10:54:17 +0000 UTC" firstStartedPulling="2025-09-29 10:54:19.689631977 +0000 UTC m=+4595.545560005" lastFinishedPulling="2025-09-29 10:54:23.238860041 +0000 UTC m=+4599.094788069" observedRunningTime="2025-09-29 10:54:23.760139452 +0000 UTC m=+4599.616067520" watchObservedRunningTime="2025-09-29 10:54:23.773332148 +0000 UTC m=+4599.629260176" Sep 29 10:54:25 crc kubenswrapper[4991]: I0929 10:54:25.183676 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qgz98"] Sep 29 10:54:25 crc kubenswrapper[4991]: I0929 10:54:25.184263 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qgz98" podUID="c3ef838d-c8f4-4ee5-8035-f21a22cf5e16" containerName="registry-server" containerID="cri-o://747ad5f37dd221674e39a62194ec6c6113301fb77262424aaedb1e89f971d953" gracePeriod=2 Sep 29 10:54:25 crc kubenswrapper[4991]: I0929 10:54:25.761506 4991 generic.go:334] "Generic (PLEG): container finished" podID="c3ef838d-c8f4-4ee5-8035-f21a22cf5e16" containerID="747ad5f37dd221674e39a62194ec6c6113301fb77262424aaedb1e89f971d953" exitCode=0 Sep 29 10:54:25 crc kubenswrapper[4991]: I0929 10:54:25.761639 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgz98" event={"ID":"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16","Type":"ContainerDied","Data":"747ad5f37dd221674e39a62194ec6c6113301fb77262424aaedb1e89f971d953"} Sep 29 10:54:26 crc kubenswrapper[4991]: I0929 10:54:26.409361 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qgz98" Sep 29 10:54:26 crc kubenswrapper[4991]: I0929 10:54:26.531143 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8x9d\" (UniqueName: \"kubernetes.io/projected/c3ef838d-c8f4-4ee5-8035-f21a22cf5e16-kube-api-access-c8x9d\") pod \"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16\" (UID: \"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16\") " Sep 29 10:54:26 crc kubenswrapper[4991]: I0929 10:54:26.531424 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3ef838d-c8f4-4ee5-8035-f21a22cf5e16-catalog-content\") pod \"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16\" (UID: \"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16\") " Sep 29 10:54:26 crc kubenswrapper[4991]: I0929 10:54:26.531476 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3ef838d-c8f4-4ee5-8035-f21a22cf5e16-utilities\") pod \"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16\" (UID: \"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16\") " Sep 29 10:54:26 crc kubenswrapper[4991]: I0929 10:54:26.538915 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3ef838d-c8f4-4ee5-8035-f21a22cf5e16-utilities" (OuterVolumeSpecName: "utilities") pod "c3ef838d-c8f4-4ee5-8035-f21a22cf5e16" (UID: "c3ef838d-c8f4-4ee5-8035-f21a22cf5e16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:54:26 crc kubenswrapper[4991]: I0929 10:54:26.549194 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3ef838d-c8f4-4ee5-8035-f21a22cf5e16-kube-api-access-c8x9d" (OuterVolumeSpecName: "kube-api-access-c8x9d") pod "c3ef838d-c8f4-4ee5-8035-f21a22cf5e16" (UID: "c3ef838d-c8f4-4ee5-8035-f21a22cf5e16"). InnerVolumeSpecName "kube-api-access-c8x9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:54:26 crc kubenswrapper[4991]: I0929 10:54:26.608915 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3ef838d-c8f4-4ee5-8035-f21a22cf5e16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3ef838d-c8f4-4ee5-8035-f21a22cf5e16" (UID: "c3ef838d-c8f4-4ee5-8035-f21a22cf5e16"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:54:26 crc kubenswrapper[4991]: I0929 10:54:26.635100 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8x9d\" (UniqueName: \"kubernetes.io/projected/c3ef838d-c8f4-4ee5-8035-f21a22cf5e16-kube-api-access-c8x9d\") on node \"crc\" DevicePath \"\"" Sep 29 10:54:26 crc kubenswrapper[4991]: I0929 10:54:26.635154 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3ef838d-c8f4-4ee5-8035-f21a22cf5e16-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 10:54:26 crc kubenswrapper[4991]: I0929 10:54:26.635170 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3ef838d-c8f4-4ee5-8035-f21a22cf5e16-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 10:54:26 crc kubenswrapper[4991]: I0929 10:54:26.775031 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qgz98" Sep 29 10:54:26 crc kubenswrapper[4991]: I0929 10:54:26.775030 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgz98" event={"ID":"c3ef838d-c8f4-4ee5-8035-f21a22cf5e16","Type":"ContainerDied","Data":"9f13a81702b9b94e5374f7afb289ec07c8b1eb818bf96885d6f73ad0aa49352e"} Sep 29 10:54:26 crc kubenswrapper[4991]: I0929 10:54:26.775112 4991 scope.go:117] "RemoveContainer" containerID="747ad5f37dd221674e39a62194ec6c6113301fb77262424aaedb1e89f971d953" Sep 29 10:54:26 crc kubenswrapper[4991]: I0929 10:54:26.813676 4991 scope.go:117] "RemoveContainer" containerID="402dd1ba71d171c3723f064fe800bec5ec16b2f3d073ccff47a33bed8d5a2ad1" Sep 29 10:54:26 crc kubenswrapper[4991]: I0929 10:54:26.818541 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qgz98"] Sep 29 10:54:26 crc kubenswrapper[4991]: I0929 10:54:26.829340 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qgz98"] Sep 29 10:54:26 crc kubenswrapper[4991]: I0929 10:54:26.834463 4991 scope.go:117] "RemoveContainer" containerID="77db4cdf4d1bd24809c070be86032c7c56c4614379781b74be1b70e88eff6cd9" Sep 29 10:54:26 crc kubenswrapper[4991]: I0929 10:54:26.938233 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3ef838d-c8f4-4ee5-8035-f21a22cf5e16" path="/var/lib/kubelet/pods/c3ef838d-c8f4-4ee5-8035-f21a22cf5e16/volumes" Sep 29 10:54:27 crc kubenswrapper[4991]: I0929 10:54:27.947283 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hvc4j" Sep 29 10:54:27 crc kubenswrapper[4991]: I0929 10:54:27.947810 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hvc4j" Sep 29 10:54:27 crc kubenswrapper[4991]: I0929 10:54:27.998596 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hvc4j" Sep 29 10:54:28 crc kubenswrapper[4991]: I0929 10:54:28.846424 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hvc4j" Sep 29 10:54:29 crc kubenswrapper[4991]: I0929 10:54:29.181434 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hvc4j"] Sep 29 10:54:30 crc kubenswrapper[4991]: I0929 10:54:30.821699 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hvc4j" podUID="bc04f5d4-c5e2-42e7-aec1-29717b3d301c" containerName="registry-server" containerID="cri-o://e059c36846d5395907a90b5cea269884aad236cefe923a59e83588a5ff5b4264" gracePeriod=2 Sep 29 10:54:31 crc kubenswrapper[4991]: I0929 10:54:31.834410 4991 generic.go:334] "Generic (PLEG): container finished" podID="bc04f5d4-c5e2-42e7-aec1-29717b3d301c" containerID="e059c36846d5395907a90b5cea269884aad236cefe923a59e83588a5ff5b4264" exitCode=0 Sep 29 10:54:31 crc kubenswrapper[4991]: I0929 10:54:31.834799 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvc4j" event={"ID":"bc04f5d4-c5e2-42e7-aec1-29717b3d301c","Type":"ContainerDied","Data":"e059c36846d5395907a90b5cea269884aad236cefe923a59e83588a5ff5b4264"} Sep 29 10:54:31 crc kubenswrapper[4991]: I0929 10:54:31.926702 4991 scope.go:117] "RemoveContainer" 
containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:54:31 crc kubenswrapper[4991]: E0929 10:54:31.927481 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:54:32 crc kubenswrapper[4991]: I0929 10:54:32.369766 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvc4j" Sep 29 10:54:32 crc kubenswrapper[4991]: I0929 10:54:32.525396 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc04f5d4-c5e2-42e7-aec1-29717b3d301c-utilities\") pod \"bc04f5d4-c5e2-42e7-aec1-29717b3d301c\" (UID: \"bc04f5d4-c5e2-42e7-aec1-29717b3d301c\") " Sep 29 10:54:32 crc kubenswrapper[4991]: I0929 10:54:32.525435 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcmzc\" (UniqueName: \"kubernetes.io/projected/bc04f5d4-c5e2-42e7-aec1-29717b3d301c-kube-api-access-fcmzc\") pod \"bc04f5d4-c5e2-42e7-aec1-29717b3d301c\" (UID: \"bc04f5d4-c5e2-42e7-aec1-29717b3d301c\") " Sep 29 10:54:32 crc kubenswrapper[4991]: I0929 10:54:32.525630 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc04f5d4-c5e2-42e7-aec1-29717b3d301c-catalog-content\") pod \"bc04f5d4-c5e2-42e7-aec1-29717b3d301c\" (UID: \"bc04f5d4-c5e2-42e7-aec1-29717b3d301c\") " Sep 29 10:54:32 crc kubenswrapper[4991]: I0929 10:54:32.526492 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc04f5d4-c5e2-42e7-aec1-29717b3d301c-utilities" (OuterVolumeSpecName: "utilities") pod "bc04f5d4-c5e2-42e7-aec1-29717b3d301c" (UID: "bc04f5d4-c5e2-42e7-aec1-29717b3d301c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:54:32 crc kubenswrapper[4991]: I0929 10:54:32.531458 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc04f5d4-c5e2-42e7-aec1-29717b3d301c-kube-api-access-fcmzc" (OuterVolumeSpecName: "kube-api-access-fcmzc") pod "bc04f5d4-c5e2-42e7-aec1-29717b3d301c" (UID: "bc04f5d4-c5e2-42e7-aec1-29717b3d301c"). InnerVolumeSpecName "kube-api-access-fcmzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:54:32 crc kubenswrapper[4991]: I0929 10:54:32.579418 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc04f5d4-c5e2-42e7-aec1-29717b3d301c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc04f5d4-c5e2-42e7-aec1-29717b3d301c" (UID: "bc04f5d4-c5e2-42e7-aec1-29717b3d301c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:54:32 crc kubenswrapper[4991]: I0929 10:54:32.629804 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc04f5d4-c5e2-42e7-aec1-29717b3d301c-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 10:54:32 crc kubenswrapper[4991]: I0929 10:54:32.629868 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc04f5d4-c5e2-42e7-aec1-29717b3d301c-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 10:54:32 crc kubenswrapper[4991]: I0929 10:54:32.629883 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcmzc\" (UniqueName: \"kubernetes.io/projected/bc04f5d4-c5e2-42e7-aec1-29717b3d301c-kube-api-access-fcmzc\") on node \"crc\" DevicePath \"\"" Sep 29 10:54:32 crc kubenswrapper[4991]: I0929 10:54:32.850356 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvc4j" event={"ID":"bc04f5d4-c5e2-42e7-aec1-29717b3d301c","Type":"ContainerDied","Data":"4d5e71aecf4492c3880883606b086787d336adf489e48e9be6dcf68934103da0"} Sep 29 10:54:32 crc kubenswrapper[4991]: I0929 10:54:32.851273 4991 scope.go:117] "RemoveContainer" containerID="e059c36846d5395907a90b5cea269884aad236cefe923a59e83588a5ff5b4264" Sep 29 10:54:32 crc kubenswrapper[4991]: I0929 10:54:32.850460 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvc4j" Sep 29 10:54:32 crc kubenswrapper[4991]: I0929 10:54:32.902129 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hvc4j"] Sep 29 10:54:32 crc kubenswrapper[4991]: I0929 10:54:32.908494 4991 scope.go:117] "RemoveContainer" containerID="dcb00c5b4c1c35b8df494404aa634a297b19d264a00e07d357a2bcb9250dcb25" Sep 29 10:54:32 crc kubenswrapper[4991]: I0929 10:54:32.912694 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hvc4j"] Sep 29 10:54:32 crc kubenswrapper[4991]: I0929 10:54:32.938330 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc04f5d4-c5e2-42e7-aec1-29717b3d301c" path="/var/lib/kubelet/pods/bc04f5d4-c5e2-42e7-aec1-29717b3d301c/volumes" Sep 29 10:54:32 crc kubenswrapper[4991]: I0929 10:54:32.940633 4991 scope.go:117] "RemoveContainer" containerID="f4e06c0dbca0343a6c4e273dbc400fa859f78f4fd9e331eaf9a5fefadff74cfe" Sep 29 10:54:45 crc kubenswrapper[4991]: I0929 10:54:45.926662 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:54:45 crc kubenswrapper[4991]: E0929 10:54:45.927374 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:54:58 crc kubenswrapper[4991]: I0929 10:54:58.927022 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:54:58 crc kubenswrapper[4991]: E0929 10:54:58.927900 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:55:11 crc kubenswrapper[4991]: I0929 10:55:11.927518 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:55:11 crc kubenswrapper[4991]: E0929 10:55:11.928871 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:55:24 crc kubenswrapper[4991]: I0929 10:55:24.935611 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:55:24 crc kubenswrapper[4991]: E0929 10:55:24.938147 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:55:38 crc kubenswrapper[4991]: I0929 10:55:38.927077 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:55:38 crc kubenswrapper[4991]: E0929 10:55:38.927981 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:55:53 crc kubenswrapper[4991]: I0929 10:55:53.927630 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:55:53 crc kubenswrapper[4991]: E0929 10:55:53.928706 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:56:06 crc kubenswrapper[4991]: I0929 10:56:06.928286 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:56:06 crc kubenswrapper[4991]: E0929 10:56:06.929085 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:56:21 crc kubenswrapper[4991]: I0929 10:56:21.926776 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:56:21 crc kubenswrapper[4991]: E0929 10:56:21.927622 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:56:36 crc kubenswrapper[4991]: I0929 10:56:36.927641 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:56:36 crc kubenswrapper[4991]: E0929 10:56:36.929060 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:56:50 crc kubenswrapper[4991]: I0929 10:56:50.926137 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:56:50 crc kubenswrapper[4991]: E0929 10:56:50.926883 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:57:02 crc kubenswrapper[4991]: I0929 10:57:02.926325 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:57:02 crc kubenswrapper[4991]: E0929 10:57:02.927096 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:57:15 crc kubenswrapper[4991]: I0929 10:57:15.926539 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:57:15 crc kubenswrapper[4991]: E0929 10:57:15.927413 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" 
podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:57:26 crc kubenswrapper[4991]: I0929 10:57:26.926311 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:57:26 crc kubenswrapper[4991]: E0929 10:57:26.927330 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.004984 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lr54g"] Sep 29 10:57:28 crc kubenswrapper[4991]: E0929 10:57:28.006444 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc04f5d4-c5e2-42e7-aec1-29717b3d301c" containerName="extract-content" Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.006467 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc04f5d4-c5e2-42e7-aec1-29717b3d301c" containerName="extract-content" Sep 29 10:57:28 crc kubenswrapper[4991]: E0929 10:57:28.006491 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ef838d-c8f4-4ee5-8035-f21a22cf5e16" containerName="registry-server" Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.006498 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ef838d-c8f4-4ee5-8035-f21a22cf5e16" containerName="registry-server" Sep 29 10:57:28 crc kubenswrapper[4991]: E0929 10:57:28.006525 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc04f5d4-c5e2-42e7-aec1-29717b3d301c" containerName="registry-server" Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.006531 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc04f5d4-c5e2-42e7-aec1-29717b3d301c" containerName="registry-server" Sep 29 10:57:28 crc kubenswrapper[4991]: E0929 10:57:28.006548 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ef838d-c8f4-4ee5-8035-f21a22cf5e16" containerName="extract-utilities" Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.006556 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ef838d-c8f4-4ee5-8035-f21a22cf5e16" containerName="extract-utilities" Sep 29 10:57:28 crc kubenswrapper[4991]: E0929 10:57:28.006592 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc04f5d4-c5e2-42e7-aec1-29717b3d301c" containerName="extract-utilities" Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.006600 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc04f5d4-c5e2-42e7-aec1-29717b3d301c" containerName="extract-utilities" Sep 29 10:57:28 crc kubenswrapper[4991]: E0929 10:57:28.006624 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ef838d-c8f4-4ee5-8035-f21a22cf5e16" containerName="extract-content" Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.006633 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ef838d-c8f4-4ee5-8035-f21a22cf5e16" containerName="extract-content" Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.006893 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc04f5d4-c5e2-42e7-aec1-29717b3d301c" containerName="registry-server" Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.006931 4991 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c3ef838d-c8f4-4ee5-8035-f21a22cf5e16" containerName="registry-server" Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.010020 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lr54g" Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.043829 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8b8ffe1-3978-4278-8210-a8c5355b20cd-utilities\") pod \"certified-operators-lr54g\" (UID: \"d8b8ffe1-3978-4278-8210-a8c5355b20cd\") " pod="openshift-marketplace/certified-operators-lr54g" Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.046664 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8b8ffe1-3978-4278-8210-a8c5355b20cd-catalog-content\") pod \"certified-operators-lr54g\" (UID: \"d8b8ffe1-3978-4278-8210-a8c5355b20cd\") " pod="openshift-marketplace/certified-operators-lr54g" Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.048879 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrp6q\" (UniqueName: \"kubernetes.io/projected/d8b8ffe1-3978-4278-8210-a8c5355b20cd-kube-api-access-mrp6q\") pod \"certified-operators-lr54g\" (UID: \"d8b8ffe1-3978-4278-8210-a8c5355b20cd\") " pod="openshift-marketplace/certified-operators-lr54g" Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.107318 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lr54g"] Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.151466 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8b8ffe1-3978-4278-8210-a8c5355b20cd-utilities\") pod \"certified-operators-lr54g\" (UID: \"d8b8ffe1-3978-4278-8210-a8c5355b20cd\") " pod="openshift-marketplace/certified-operators-lr54g" Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.151705 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8b8ffe1-3978-4278-8210-a8c5355b20cd-catalog-content\") pod \"certified-operators-lr54g\" (UID: \"d8b8ffe1-3978-4278-8210-a8c5355b20cd\") " pod="openshift-marketplace/certified-operators-lr54g" Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.151855 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrp6q\" (UniqueName: \"kubernetes.io/projected/d8b8ffe1-3978-4278-8210-a8c5355b20cd-kube-api-access-mrp6q\") pod \"certified-operators-lr54g\" (UID: \"d8b8ffe1-3978-4278-8210-a8c5355b20cd\") " pod="openshift-marketplace/certified-operators-lr54g" Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.152040 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8b8ffe1-3978-4278-8210-a8c5355b20cd-utilities\") pod \"certified-operators-lr54g\" (UID: \"d8b8ffe1-3978-4278-8210-a8c5355b20cd\") " pod="openshift-marketplace/certified-operators-lr54g" Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.152363 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d8b8ffe1-3978-4278-8210-a8c5355b20cd-catalog-content\") pod \"certified-operators-lr54g\" (UID: \"d8b8ffe1-3978-4278-8210-a8c5355b20cd\") " pod="openshift-marketplace/certified-operators-lr54g" Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.179036 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrp6q\" (UniqueName: \"kubernetes.io/projected/d8b8ffe1-3978-4278-8210-a8c5355b20cd-kube-api-access-mrp6q\") pod \"certified-operators-lr54g\" (UID: \"d8b8ffe1-3978-4278-8210-a8c5355b20cd\") " pod="openshift-marketplace/certified-operators-lr54g" Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.349796 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lr54g" Sep 29 10:57:28 crc kubenswrapper[4991]: I0929 10:57:28.862408 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lr54g"] Sep 29 10:57:29 crc kubenswrapper[4991]: I0929 10:57:29.736254 4991 generic.go:334] "Generic (PLEG): container finished" podID="d8b8ffe1-3978-4278-8210-a8c5355b20cd" containerID="f428f781021fcd3f3a7d30e28a72db7060b185ab77dc8b35084047bb6ccf6a07" exitCode=0 Sep 29 10:57:29 crc kubenswrapper[4991]: I0929 10:57:29.736372 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr54g" event={"ID":"d8b8ffe1-3978-4278-8210-a8c5355b20cd","Type":"ContainerDied","Data":"f428f781021fcd3f3a7d30e28a72db7060b185ab77dc8b35084047bb6ccf6a07"} Sep 29 10:57:29 crc kubenswrapper[4991]: I0929 10:57:29.736588 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr54g" event={"ID":"d8b8ffe1-3978-4278-8210-a8c5355b20cd","Type":"ContainerStarted","Data":"55ffed1607a4316cbc2177211c279d18046f75062ee4fbad3d53c0081b0a4821"} Sep 29 10:57:29 crc kubenswrapper[4991]: I0929 10:57:29.738996 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 29 10:57:31 crc kubenswrapper[4991]: I0929 10:57:31.761800 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr54g" event={"ID":"d8b8ffe1-3978-4278-8210-a8c5355b20cd","Type":"ContainerStarted","Data":"423ca8666a8e35d8633f962c518aaf6ff80136c21038db3847de09cd96da70a7"} Sep 29 10:57:32 crc kubenswrapper[4991]: I0929 10:57:32.772329 4991 generic.go:334] "Generic (PLEG): container finished" podID="d8b8ffe1-3978-4278-8210-a8c5355b20cd" containerID="423ca8666a8e35d8633f962c518aaf6ff80136c21038db3847de09cd96da70a7" exitCode=0 Sep 29 10:57:32 crc kubenswrapper[4991]: I0929 10:57:32.772398 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr54g" event={"ID":"d8b8ffe1-3978-4278-8210-a8c5355b20cd","Type":"ContainerDied","Data":"423ca8666a8e35d8633f962c518aaf6ff80136c21038db3847de09cd96da70a7"} Sep 29 10:57:33 crc kubenswrapper[4991]: I0929 10:57:33.783829 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr54g" event={"ID":"d8b8ffe1-3978-4278-8210-a8c5355b20cd","Type":"ContainerStarted","Data":"c8e1d205f33b83965a3f1593554083583604bc5ce30a8e5061f29b33343ec8ff"} Sep 29 10:57:33 crc kubenswrapper[4991]: I0929 10:57:33.806858 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lr54g" podStartSLOduration=3.289738559 podStartE2EDuration="6.806841372s" 
podCreationTimestamp="2025-09-29 10:57:27 +0000 UTC" firstStartedPulling="2025-09-29 10:57:29.738674198 +0000 UTC m=+4785.594602226" lastFinishedPulling="2025-09-29 10:57:33.255777011 +0000 UTC m=+4789.111705039" observedRunningTime="2025-09-29 10:57:33.804157692 +0000 UTC m=+4789.660085730" watchObservedRunningTime="2025-09-29 10:57:33.806841372 +0000 UTC m=+4789.662769400" Sep 29 10:57:38 crc kubenswrapper[4991]: I0929 10:57:38.350346 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lr54g" Sep 29 10:57:38 crc kubenswrapper[4991]: I0929 10:57:38.351281 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lr54g" Sep 29 10:57:38 crc kubenswrapper[4991]: I0929 10:57:38.403685 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lr54g" Sep 29 10:57:38 crc kubenswrapper[4991]: I0929 10:57:38.895072 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lr54g" Sep 29 10:57:38 crc kubenswrapper[4991]: I0929 10:57:38.946327 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lr54g"] Sep 29 10:57:39 crc kubenswrapper[4991]: I0929 10:57:39.926855 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:57:39 crc kubenswrapper[4991]: E0929 10:57:39.927368 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:57:40 crc kubenswrapper[4991]: I0929 10:57:40.885030 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lr54g" podUID="d8b8ffe1-3978-4278-8210-a8c5355b20cd" containerName="registry-server" containerID="cri-o://c8e1d205f33b83965a3f1593554083583604bc5ce30a8e5061f29b33343ec8ff" gracePeriod=2 Sep 29 10:57:41 crc kubenswrapper[4991]: I0929 10:57:41.398483 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lr54g" Sep 29 10:57:41 crc kubenswrapper[4991]: I0929 10:57:41.560180 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8b8ffe1-3978-4278-8210-a8c5355b20cd-utilities\") pod \"d8b8ffe1-3978-4278-8210-a8c5355b20cd\" (UID: \"d8b8ffe1-3978-4278-8210-a8c5355b20cd\") " Sep 29 10:57:41 crc kubenswrapper[4991]: I0929 10:57:41.560365 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrp6q\" (UniqueName: \"kubernetes.io/projected/d8b8ffe1-3978-4278-8210-a8c5355b20cd-kube-api-access-mrp6q\") pod \"d8b8ffe1-3978-4278-8210-a8c5355b20cd\" (UID: \"d8b8ffe1-3978-4278-8210-a8c5355b20cd\") " Sep 29 10:57:41 crc kubenswrapper[4991]: I0929 10:57:41.560391 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8b8ffe1-3978-4278-8210-a8c5355b20cd-catalog-content\") pod \"d8b8ffe1-3978-4278-8210-a8c5355b20cd\" (UID: \"d8b8ffe1-3978-4278-8210-a8c5355b20cd\") " Sep 29 10:57:41 crc kubenswrapper[4991]: I0929 10:57:41.561906 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8b8ffe1-3978-4278-8210-a8c5355b20cd-utilities" (OuterVolumeSpecName: "utilities") pod "d8b8ffe1-3978-4278-8210-a8c5355b20cd" (UID: "d8b8ffe1-3978-4278-8210-a8c5355b20cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:57:41 crc kubenswrapper[4991]: I0929 10:57:41.566214 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8b8ffe1-3978-4278-8210-a8c5355b20cd-kube-api-access-mrp6q" (OuterVolumeSpecName: "kube-api-access-mrp6q") pod "d8b8ffe1-3978-4278-8210-a8c5355b20cd" (UID: "d8b8ffe1-3978-4278-8210-a8c5355b20cd"). InnerVolumeSpecName "kube-api-access-mrp6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 10:57:41 crc kubenswrapper[4991]: I0929 10:57:41.607190 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8b8ffe1-3978-4278-8210-a8c5355b20cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8b8ffe1-3978-4278-8210-a8c5355b20cd" (UID: "d8b8ffe1-3978-4278-8210-a8c5355b20cd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 10:57:41 crc kubenswrapper[4991]: I0929 10:57:41.663610 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrp6q\" (UniqueName: \"kubernetes.io/projected/d8b8ffe1-3978-4278-8210-a8c5355b20cd-kube-api-access-mrp6q\") on node \"crc\" DevicePath \"\"" Sep 29 10:57:41 crc kubenswrapper[4991]: I0929 10:57:41.663655 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8b8ffe1-3978-4278-8210-a8c5355b20cd-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 10:57:41 crc kubenswrapper[4991]: I0929 10:57:41.663669 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8b8ffe1-3978-4278-8210-a8c5355b20cd-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 10:57:41 crc kubenswrapper[4991]: I0929 10:57:41.897188 4991 generic.go:334] "Generic (PLEG): container finished" podID="d8b8ffe1-3978-4278-8210-a8c5355b20cd" containerID="c8e1d205f33b83965a3f1593554083583604bc5ce30a8e5061f29b33343ec8ff" exitCode=0 Sep 29 10:57:41 crc kubenswrapper[4991]: I0929 10:57:41.897244 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr54g" event={"ID":"d8b8ffe1-3978-4278-8210-a8c5355b20cd","Type":"ContainerDied","Data":"c8e1d205f33b83965a3f1593554083583604bc5ce30a8e5061f29b33343ec8ff"} Sep 29 10:57:41 crc kubenswrapper[4991]: I0929 10:57:41.897306 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr54g" event={"ID":"d8b8ffe1-3978-4278-8210-a8c5355b20cd","Type":"ContainerDied","Data":"55ffed1607a4316cbc2177211c279d18046f75062ee4fbad3d53c0081b0a4821"} Sep 29 10:57:41 crc kubenswrapper[4991]: I0929 10:57:41.897298 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lr54g" Sep 29 10:57:41 crc kubenswrapper[4991]: I0929 10:57:41.897355 4991 scope.go:117] "RemoveContainer" containerID="c8e1d205f33b83965a3f1593554083583604bc5ce30a8e5061f29b33343ec8ff" Sep 29 10:57:41 crc kubenswrapper[4991]: I0929 10:57:41.926267 4991 scope.go:117] "RemoveContainer" containerID="423ca8666a8e35d8633f962c518aaf6ff80136c21038db3847de09cd96da70a7" Sep 29 10:57:41 crc kubenswrapper[4991]: I0929 10:57:41.930975 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lr54g"] Sep 29 10:57:41 crc kubenswrapper[4991]: I0929 10:57:41.944515 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lr54g"] Sep 29 10:57:41 crc kubenswrapper[4991]: I0929 10:57:41.953560 4991 scope.go:117] "RemoveContainer" containerID="f428f781021fcd3f3a7d30e28a72db7060b185ab77dc8b35084047bb6ccf6a07" Sep 29 10:57:42 crc kubenswrapper[4991]: I0929 10:57:42.017852 4991 scope.go:117] "RemoveContainer" containerID="c8e1d205f33b83965a3f1593554083583604bc5ce30a8e5061f29b33343ec8ff" Sep 29 10:57:42 crc kubenswrapper[4991]: E0929 10:57:42.018413 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8e1d205f33b83965a3f1593554083583604bc5ce30a8e5061f29b33343ec8ff\": container with ID starting with c8e1d205f33b83965a3f1593554083583604bc5ce30a8e5061f29b33343ec8ff not found: ID does not exist" containerID="c8e1d205f33b83965a3f1593554083583604bc5ce30a8e5061f29b33343ec8ff" Sep 29 10:57:42 crc kubenswrapper[4991]: I0929 10:57:42.018482 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8e1d205f33b83965a3f1593554083583604bc5ce30a8e5061f29b33343ec8ff"} err="failed to get container status \"c8e1d205f33b83965a3f1593554083583604bc5ce30a8e5061f29b33343ec8ff\": rpc error: code = NotFound desc = could not find container \"c8e1d205f33b83965a3f1593554083583604bc5ce30a8e5061f29b33343ec8ff\": container with ID starting with c8e1d205f33b83965a3f1593554083583604bc5ce30a8e5061f29b33343ec8ff not found: ID does not exist" Sep 29 10:57:42 crc kubenswrapper[4991]: I0929 10:57:42.018522 4991 scope.go:117] "RemoveContainer" containerID="423ca8666a8e35d8633f962c518aaf6ff80136c21038db3847de09cd96da70a7" Sep 29 10:57:42 crc kubenswrapper[4991]: E0929 10:57:42.018887 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"423ca8666a8e35d8633f962c518aaf6ff80136c21038db3847de09cd96da70a7\": container with ID starting with 423ca8666a8e35d8633f962c518aaf6ff80136c21038db3847de09cd96da70a7 not found: ID does not exist" containerID="423ca8666a8e35d8633f962c518aaf6ff80136c21038db3847de09cd96da70a7" Sep 29 10:57:42 crc kubenswrapper[4991]: I0929 10:57:42.018926 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"423ca8666a8e35d8633f962c518aaf6ff80136c21038db3847de09cd96da70a7"} err="failed to get container status \"423ca8666a8e35d8633f962c518aaf6ff80136c21038db3847de09cd96da70a7\": rpc error: code = NotFound desc = could not find container \"423ca8666a8e35d8633f962c518aaf6ff80136c21038db3847de09cd96da70a7\": container with ID starting with 423ca8666a8e35d8633f962c518aaf6ff80136c21038db3847de09cd96da70a7 not found: ID does not exist" Sep 29 10:57:42 crc kubenswrapper[4991]: I0929 10:57:42.018968 4991 scope.go:117] "RemoveContainer" 
containerID="f428f781021fcd3f3a7d30e28a72db7060b185ab77dc8b35084047bb6ccf6a07" Sep 29 10:57:42 crc kubenswrapper[4991]: E0929 10:57:42.019286 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f428f781021fcd3f3a7d30e28a72db7060b185ab77dc8b35084047bb6ccf6a07\": container with ID starting with f428f781021fcd3f3a7d30e28a72db7060b185ab77dc8b35084047bb6ccf6a07 not found: ID does not exist" containerID="f428f781021fcd3f3a7d30e28a72db7060b185ab77dc8b35084047bb6ccf6a07" Sep 29 10:57:42 crc kubenswrapper[4991]: I0929 10:57:42.019315 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f428f781021fcd3f3a7d30e28a72db7060b185ab77dc8b35084047bb6ccf6a07"} err="failed to get container status \"f428f781021fcd3f3a7d30e28a72db7060b185ab77dc8b35084047bb6ccf6a07\": rpc error: code = NotFound desc = could not find container \"f428f781021fcd3f3a7d30e28a72db7060b185ab77dc8b35084047bb6ccf6a07\": container with ID starting with f428f781021fcd3f3a7d30e28a72db7060b185ab77dc8b35084047bb6ccf6a07 not found: ID does not exist" Sep 29 10:57:42 crc kubenswrapper[4991]: I0929 10:57:42.941214 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8b8ffe1-3978-4278-8210-a8c5355b20cd" path="/var/lib/kubelet/pods/d8b8ffe1-3978-4278-8210-a8c5355b20cd/volumes" Sep 29 10:57:54 crc kubenswrapper[4991]: I0929 10:57:54.947473 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:57:54 crc kubenswrapper[4991]: E0929 10:57:54.948925 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:58:05 crc kubenswrapper[4991]: I0929 10:58:05.926870 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:58:05 crc kubenswrapper[4991]: E0929 10:58:05.927845 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:58:19 crc kubenswrapper[4991]: I0929 10:58:19.926406 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:58:19 crc kubenswrapper[4991]: E0929 10:58:19.927105 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:58:34 crc kubenswrapper[4991]: I0929 10:58:34.940932 4991 scope.go:117] "RemoveContainer" 
containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:58:34 crc kubenswrapper[4991]: E0929 10:58:34.943335 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 10:58:46 crc kubenswrapper[4991]: I0929 10:58:46.927823 4991 scope.go:117] "RemoveContainer" containerID="5a47be10e1fbcf3ffd6ded8587810fef65c9bfbd56427ba30c779de9831687d3" Sep 29 10:58:47 crc kubenswrapper[4991]: I0929 10:58:47.741574 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"99cc4920808487dac896e957b7d6586964409972173a7482035091bf4575296b"} Sep 29 11:00:00 crc kubenswrapper[4991]: I0929 11:00:00.152819 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7"] Sep 29 11:00:00 crc kubenswrapper[4991]: E0929 11:00:00.153926 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b8ffe1-3978-4278-8210-a8c5355b20cd" containerName="extract-utilities" Sep 29 11:00:00 crc kubenswrapper[4991]: I0929 11:00:00.153941 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b8ffe1-3978-4278-8210-a8c5355b20cd" containerName="extract-utilities" Sep 29 11:00:00 crc kubenswrapper[4991]: E0929 11:00:00.153981 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b8ffe1-3978-4278-8210-a8c5355b20cd" containerName="extract-content" Sep 29 11:00:00 crc kubenswrapper[4991]: I0929 11:00:00.153989 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b8ffe1-3978-4278-8210-a8c5355b20cd" containerName="extract-content" Sep 29 11:00:00 crc kubenswrapper[4991]: E0929 11:00:00.154023 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b8ffe1-3978-4278-8210-a8c5355b20cd" containerName="registry-server" Sep 29 11:00:00 crc kubenswrapper[4991]: I0929 11:00:00.154029 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b8ffe1-3978-4278-8210-a8c5355b20cd" containerName="registry-server" Sep 29 11:00:00 crc kubenswrapper[4991]: I0929 11:00:00.154288 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8b8ffe1-3978-4278-8210-a8c5355b20cd" containerName="registry-server" Sep 29 11:00:00 crc kubenswrapper[4991]: I0929 11:00:00.155184 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7" Sep 29 11:00:00 crc kubenswrapper[4991]: I0929 11:00:00.159612 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 29 11:00:00 crc kubenswrapper[4991]: I0929 11:00:00.159800 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 29 11:00:00 crc kubenswrapper[4991]: I0929 11:00:00.184221 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7"] Sep 29 11:00:00 crc kubenswrapper[4991]: I0929 11:00:00.324871 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj4zd\" (UniqueName: \"kubernetes.io/projected/59ea0ca7-8b75-4d88-bfce-dbc64c637e0d-kube-api-access-qj4zd\") pod \"collect-profiles-29319060-hxxx7\" (UID: \"59ea0ca7-8b75-4d88-bfce-dbc64c637e0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7" Sep 29 11:00:00 crc kubenswrapper[4991]: I0929 11:00:00.325256 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59ea0ca7-8b75-4d88-bfce-dbc64c637e0d-config-volume\") pod \"collect-profiles-29319060-hxxx7\" (UID: \"59ea0ca7-8b75-4d88-bfce-dbc64c637e0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7" Sep 29 11:00:00 crc kubenswrapper[4991]: I0929 11:00:00.325643 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59ea0ca7-8b75-4d88-bfce-dbc64c637e0d-secret-volume\") pod \"collect-profiles-29319060-hxxx7\" (UID: \"59ea0ca7-8b75-4d88-bfce-dbc64c637e0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7" Sep 29 11:00:00 crc kubenswrapper[4991]: I0929 11:00:00.428017 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59ea0ca7-8b75-4d88-bfce-dbc64c637e0d-secret-volume\") pod \"collect-profiles-29319060-hxxx7\" (UID: \"59ea0ca7-8b75-4d88-bfce-dbc64c637e0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7" Sep 29 11:00:00 crc kubenswrapper[4991]: I0929 11:00:00.428187 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj4zd\" (UniqueName: \"kubernetes.io/projected/59ea0ca7-8b75-4d88-bfce-dbc64c637e0d-kube-api-access-qj4zd\") pod \"collect-profiles-29319060-hxxx7\" (UID: \"59ea0ca7-8b75-4d88-bfce-dbc64c637e0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7" Sep 29 11:00:00 crc kubenswrapper[4991]: I0929 11:00:00.428238 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59ea0ca7-8b75-4d88-bfce-dbc64c637e0d-config-volume\") pod \"collect-profiles-29319060-hxxx7\" (UID: \"59ea0ca7-8b75-4d88-bfce-dbc64c637e0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7" Sep 29 11:00:00 crc kubenswrapper[4991]: I0929 11:00:00.429556 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59ea0ca7-8b75-4d88-bfce-dbc64c637e0d-config-volume\") pod 
\"collect-profiles-29319060-hxxx7\" (UID: \"59ea0ca7-8b75-4d88-bfce-dbc64c637e0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7" Sep 29 11:00:00 crc kubenswrapper[4991]: I0929 11:00:00.433982 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59ea0ca7-8b75-4d88-bfce-dbc64c637e0d-secret-volume\") pod \"collect-profiles-29319060-hxxx7\" (UID: \"59ea0ca7-8b75-4d88-bfce-dbc64c637e0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7" Sep 29 11:00:00 crc kubenswrapper[4991]: I0929 11:00:00.448819 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj4zd\" (UniqueName: \"kubernetes.io/projected/59ea0ca7-8b75-4d88-bfce-dbc64c637e0d-kube-api-access-qj4zd\") pod \"collect-profiles-29319060-hxxx7\" (UID: \"59ea0ca7-8b75-4d88-bfce-dbc64c637e0d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7" Sep 29 11:00:00 crc kubenswrapper[4991]: I0929 11:00:00.487312 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7" Sep 29 11:00:00 crc kubenswrapper[4991]: I0929 11:00:00.972912 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7"] Sep 29 11:00:01 crc kubenswrapper[4991]: I0929 11:00:01.568357 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7" event={"ID":"59ea0ca7-8b75-4d88-bfce-dbc64c637e0d","Type":"ContainerStarted","Data":"16b69310fa1a06ff68020ad8d4cfa9a539f774f30fb2bfa29e5f653a9f760ed1"} Sep 29 11:00:01 crc kubenswrapper[4991]: I0929 11:00:01.568690 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7" event={"ID":"59ea0ca7-8b75-4d88-bfce-dbc64c637e0d","Type":"ContainerStarted","Data":"03419838f2765c1d85c8ac4626a0b77addb4b4b31b641c1eec8d4885b3e7775e"} Sep 29 11:00:01 crc kubenswrapper[4991]: I0929 11:00:01.599664 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7" podStartSLOduration=1.599646303 podStartE2EDuration="1.599646303s" podCreationTimestamp="2025-09-29 11:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 11:00:01.584439585 +0000 UTC m=+4937.440367623" watchObservedRunningTime="2025-09-29 11:00:01.599646303 +0000 UTC m=+4937.455574331" Sep 29 11:00:02 crc kubenswrapper[4991]: I0929 11:00:02.580298 4991 generic.go:334] "Generic (PLEG): container finished" podID="59ea0ca7-8b75-4d88-bfce-dbc64c637e0d" containerID="16b69310fa1a06ff68020ad8d4cfa9a539f774f30fb2bfa29e5f653a9f760ed1" exitCode=0 Sep 29 11:00:02 crc kubenswrapper[4991]: I0929 11:00:02.580400 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7" event={"ID":"59ea0ca7-8b75-4d88-bfce-dbc64c637e0d","Type":"ContainerDied","Data":"16b69310fa1a06ff68020ad8d4cfa9a539f774f30fb2bfa29e5f653a9f760ed1"} Sep 29 11:00:04 crc kubenswrapper[4991]: I0929 11:00:04.215159 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7" Sep 29 11:00:04 crc kubenswrapper[4991]: I0929 11:00:04.320642 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj4zd\" (UniqueName: \"kubernetes.io/projected/59ea0ca7-8b75-4d88-bfce-dbc64c637e0d-kube-api-access-qj4zd\") pod \"59ea0ca7-8b75-4d88-bfce-dbc64c637e0d\" (UID: \"59ea0ca7-8b75-4d88-bfce-dbc64c637e0d\") " Sep 29 11:00:04 crc kubenswrapper[4991]: I0929 11:00:04.320762 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59ea0ca7-8b75-4d88-bfce-dbc64c637e0d-config-volume\") pod \"59ea0ca7-8b75-4d88-bfce-dbc64c637e0d\" (UID: \"59ea0ca7-8b75-4d88-bfce-dbc64c637e0d\") " Sep 29 11:00:04 crc kubenswrapper[4991]: I0929 11:00:04.321003 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59ea0ca7-8b75-4d88-bfce-dbc64c637e0d-secret-volume\") pod \"59ea0ca7-8b75-4d88-bfce-dbc64c637e0d\" (UID: \"59ea0ca7-8b75-4d88-bfce-dbc64c637e0d\") " Sep 29 11:00:04 crc kubenswrapper[4991]: I0929 11:00:04.323143 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59ea0ca7-8b75-4d88-bfce-dbc64c637e0d-config-volume" (OuterVolumeSpecName: "config-volume") pod "59ea0ca7-8b75-4d88-bfce-dbc64c637e0d" (UID: "59ea0ca7-8b75-4d88-bfce-dbc64c637e0d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 11:00:04 crc kubenswrapper[4991]: I0929 11:00:04.327510 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59ea0ca7-8b75-4d88-bfce-dbc64c637e0d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "59ea0ca7-8b75-4d88-bfce-dbc64c637e0d" (UID: "59ea0ca7-8b75-4d88-bfce-dbc64c637e0d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 11:00:04 crc kubenswrapper[4991]: I0929 11:00:04.327737 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59ea0ca7-8b75-4d88-bfce-dbc64c637e0d-kube-api-access-qj4zd" (OuterVolumeSpecName: "kube-api-access-qj4zd") pod "59ea0ca7-8b75-4d88-bfce-dbc64c637e0d" (UID: "59ea0ca7-8b75-4d88-bfce-dbc64c637e0d"). InnerVolumeSpecName "kube-api-access-qj4zd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:00:04 crc kubenswrapper[4991]: I0929 11:00:04.423199 4991 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59ea0ca7-8b75-4d88-bfce-dbc64c637e0d-config-volume\") on node \"crc\" DevicePath \"\"" Sep 29 11:00:04 crc kubenswrapper[4991]: I0929 11:00:04.423502 4991 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59ea0ca7-8b75-4d88-bfce-dbc64c637e0d-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 29 11:00:04 crc kubenswrapper[4991]: I0929 11:00:04.423589 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj4zd\" (UniqueName: \"kubernetes.io/projected/59ea0ca7-8b75-4d88-bfce-dbc64c637e0d-kube-api-access-qj4zd\") on node \"crc\" DevicePath \"\"" Sep 29 11:00:04 crc kubenswrapper[4991]: I0929 11:00:04.609499 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7" event={"ID":"59ea0ca7-8b75-4d88-bfce-dbc64c637e0d","Type":"ContainerDied","Data":"03419838f2765c1d85c8ac4626a0b77addb4b4b31b641c1eec8d4885b3e7775e"} Sep 29 11:00:04 crc kubenswrapper[4991]: I0929 11:00:04.609549 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03419838f2765c1d85c8ac4626a0b77addb4b4b31b641c1eec8d4885b3e7775e" Sep 29 11:00:04 crc kubenswrapper[4991]: I0929 11:00:04.609579 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7" Sep 29 11:00:04 crc kubenswrapper[4991]: I0929 11:00:04.657236 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319015-qxx87"] Sep 29 11:00:04 crc kubenswrapper[4991]: I0929 11:00:04.666984 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319015-qxx87"] Sep 29 11:00:04 crc kubenswrapper[4991]: I0929 11:00:04.941286 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5540731-96f4-41ac-8338-046339de8fb6" path="/var/lib/kubelet/pods/f5540731-96f4-41ac-8338-046339de8fb6/volumes" Sep 29 11:00:20 crc kubenswrapper[4991]: I0929 11:00:20.373750 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-phq4h"] Sep 29 11:00:20 crc kubenswrapper[4991]: E0929 11:00:20.376129 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59ea0ca7-8b75-4d88-bfce-dbc64c637e0d" containerName="collect-profiles" Sep 29 11:00:20 crc kubenswrapper[4991]: I0929 11:00:20.376150 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="59ea0ca7-8b75-4d88-bfce-dbc64c637e0d" containerName="collect-profiles" Sep 29 11:00:20 crc kubenswrapper[4991]: I0929 11:00:20.376438 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="59ea0ca7-8b75-4d88-bfce-dbc64c637e0d" containerName="collect-profiles" Sep 29 11:00:20 crc kubenswrapper[4991]: I0929 11:00:20.380161 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phq4h" Sep 29 11:00:20 crc kubenswrapper[4991]: I0929 11:00:20.390021 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-phq4h"] Sep 29 11:00:20 crc kubenswrapper[4991]: I0929 11:00:20.517653 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24cjm\" (UniqueName: \"kubernetes.io/projected/8027af1e-12e1-476d-8fb6-670989ece336-kube-api-access-24cjm\") pod \"redhat-marketplace-phq4h\" (UID: \"8027af1e-12e1-476d-8fb6-670989ece336\") " pod="openshift-marketplace/redhat-marketplace-phq4h" Sep 29 11:00:20 crc kubenswrapper[4991]: I0929 11:00:20.517705 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8027af1e-12e1-476d-8fb6-670989ece336-catalog-content\") pod \"redhat-marketplace-phq4h\" (UID: \"8027af1e-12e1-476d-8fb6-670989ece336\") " pod="openshift-marketplace/redhat-marketplace-phq4h" Sep 29 11:00:20 crc kubenswrapper[4991]: I0929 11:00:20.517912 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8027af1e-12e1-476d-8fb6-670989ece336-utilities\") pod \"redhat-marketplace-phq4h\" (UID: \"8027af1e-12e1-476d-8fb6-670989ece336\") " pod="openshift-marketplace/redhat-marketplace-phq4h" Sep 29 11:00:20 crc kubenswrapper[4991]: I0929 11:00:20.620737 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8027af1e-12e1-476d-8fb6-670989ece336-utilities\") pod \"redhat-marketplace-phq4h\" (UID: \"8027af1e-12e1-476d-8fb6-670989ece336\") " pod="openshift-marketplace/redhat-marketplace-phq4h" Sep 29 11:00:20 crc kubenswrapper[4991]: I0929 11:00:20.621195 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24cjm\" (UniqueName: \"kubernetes.io/projected/8027af1e-12e1-476d-8fb6-670989ece336-kube-api-access-24cjm\") pod \"redhat-marketplace-phq4h\" (UID: \"8027af1e-12e1-476d-8fb6-670989ece336\") " pod="openshift-marketplace/redhat-marketplace-phq4h" Sep 29 11:00:20 crc kubenswrapper[4991]: I0929 11:00:20.621225 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8027af1e-12e1-476d-8fb6-670989ece336-catalog-content\") pod \"redhat-marketplace-phq4h\" (UID: \"8027af1e-12e1-476d-8fb6-670989ece336\") " pod="openshift-marketplace/redhat-marketplace-phq4h" Sep 29 11:00:20 crc kubenswrapper[4991]: I0929 11:00:20.621419 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8027af1e-12e1-476d-8fb6-670989ece336-utilities\") pod \"redhat-marketplace-phq4h\" (UID: \"8027af1e-12e1-476d-8fb6-670989ece336\") " pod="openshift-marketplace/redhat-marketplace-phq4h" Sep 29 11:00:20 crc kubenswrapper[4991]: I0929 11:00:20.621710 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8027af1e-12e1-476d-8fb6-670989ece336-catalog-content\") pod \"redhat-marketplace-phq4h\" (UID: \"8027af1e-12e1-476d-8fb6-670989ece336\") " pod="openshift-marketplace/redhat-marketplace-phq4h" Sep 29 11:00:20 crc kubenswrapper[4991]: I0929 11:00:20.644717 4991 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-24cjm\" (UniqueName: \"kubernetes.io/projected/8027af1e-12e1-476d-8fb6-670989ece336-kube-api-access-24cjm\") pod \"redhat-marketplace-phq4h\" (UID: \"8027af1e-12e1-476d-8fb6-670989ece336\") " pod="openshift-marketplace/redhat-marketplace-phq4h" Sep 29 11:00:20 crc kubenswrapper[4991]: I0929 11:00:20.718740 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phq4h" Sep 29 11:00:21 crc kubenswrapper[4991]: I0929 11:00:21.273194 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-phq4h"] Sep 29 11:00:21 crc kubenswrapper[4991]: I0929 11:00:21.795449 4991 generic.go:334] "Generic (PLEG): container finished" podID="8027af1e-12e1-476d-8fb6-670989ece336" containerID="2fbec38e4f8e3e291fd93299131e0abb2512f166b0ffb6e5f8f756d5883e8869" exitCode=0 Sep 29 11:00:21 crc kubenswrapper[4991]: I0929 11:00:21.795513 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phq4h" event={"ID":"8027af1e-12e1-476d-8fb6-670989ece336","Type":"ContainerDied","Data":"2fbec38e4f8e3e291fd93299131e0abb2512f166b0ffb6e5f8f756d5883e8869"} Sep 29 11:00:21 crc kubenswrapper[4991]: I0929 11:00:21.795739 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phq4h" event={"ID":"8027af1e-12e1-476d-8fb6-670989ece336","Type":"ContainerStarted","Data":"4984e2cafe370f4a7248c07b409dfc62d0622a860085d2af088ee00a4e9b890b"} Sep 29 11:00:22 crc kubenswrapper[4991]: I0929 11:00:22.808066 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phq4h" event={"ID":"8027af1e-12e1-476d-8fb6-670989ece336","Type":"ContainerStarted","Data":"db531d89acf5a66c88baaa728816802a2eceda05bff00db270d9648424cdc0c3"} Sep 29 11:00:23 crc kubenswrapper[4991]: I0929 11:00:23.819823 4991 generic.go:334] "Generic (PLEG): container finished" podID="8027af1e-12e1-476d-8fb6-670989ece336" containerID="db531d89acf5a66c88baaa728816802a2eceda05bff00db270d9648424cdc0c3" exitCode=0 Sep 29 11:00:23 crc kubenswrapper[4991]: I0929 11:00:23.819893 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phq4h" event={"ID":"8027af1e-12e1-476d-8fb6-670989ece336","Type":"ContainerDied","Data":"db531d89acf5a66c88baaa728816802a2eceda05bff00db270d9648424cdc0c3"} Sep 29 11:00:24 crc kubenswrapper[4991]: I0929 11:00:24.564735 4991 scope.go:117] "RemoveContainer" containerID="e1b5cba0995b536ad998cb52c46fe5bea21e7106da62f89a4c2a27464ea412e7" Sep 29 11:00:24 crc kubenswrapper[4991]: I0929 11:00:24.835061 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phq4h" event={"ID":"8027af1e-12e1-476d-8fb6-670989ece336","Type":"ContainerStarted","Data":"8b916eb9480df776cb993ee9f2e3a0f7e79472646d1e5695bd278cceb7288f2e"} Sep 29 11:00:24 crc kubenswrapper[4991]: I0929 11:00:24.862791 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-phq4h" podStartSLOduration=2.395318463 podStartE2EDuration="4.862770329s" podCreationTimestamp="2025-09-29 11:00:20 +0000 UTC" firstStartedPulling="2025-09-29 11:00:21.797536709 +0000 UTC m=+4957.653464737" lastFinishedPulling="2025-09-29 11:00:24.264988575 +0000 UTC m=+4960.120916603" observedRunningTime="2025-09-29 11:00:24.854849792 +0000 UTC m=+4960.710777820" watchObservedRunningTime="2025-09-29 11:00:24.862770329 
+0000 UTC m=+4960.718698357" Sep 29 11:00:30 crc kubenswrapper[4991]: I0929 11:00:30.718935 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-phq4h" Sep 29 11:00:30 crc kubenswrapper[4991]: I0929 11:00:30.719502 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-phq4h" Sep 29 11:00:30 crc kubenswrapper[4991]: I0929 11:00:30.776968 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-phq4h" Sep 29 11:00:30 crc kubenswrapper[4991]: I0929 11:00:30.955144 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-phq4h" Sep 29 11:00:31 crc kubenswrapper[4991]: I0929 11:00:31.016614 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-phq4h"] Sep 29 11:00:32 crc kubenswrapper[4991]: I0929 11:00:32.917309 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-phq4h" podUID="8027af1e-12e1-476d-8fb6-670989ece336" containerName="registry-server" containerID="cri-o://8b916eb9480df776cb993ee9f2e3a0f7e79472646d1e5695bd278cceb7288f2e" gracePeriod=2 Sep 29 11:00:33 crc kubenswrapper[4991]: I0929 11:00:33.934820 4991 generic.go:334] "Generic (PLEG): container finished" podID="8027af1e-12e1-476d-8fb6-670989ece336" containerID="8b916eb9480df776cb993ee9f2e3a0f7e79472646d1e5695bd278cceb7288f2e" exitCode=0 Sep 29 11:00:33 crc kubenswrapper[4991]: I0929 11:00:33.934883 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phq4h" event={"ID":"8027af1e-12e1-476d-8fb6-670989ece336","Type":"ContainerDied","Data":"8b916eb9480df776cb993ee9f2e3a0f7e79472646d1e5695bd278cceb7288f2e"} Sep 29 11:00:33 crc kubenswrapper[4991]: I0929 11:00:33.935249 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phq4h" event={"ID":"8027af1e-12e1-476d-8fb6-670989ece336","Type":"ContainerDied","Data":"4984e2cafe370f4a7248c07b409dfc62d0622a860085d2af088ee00a4e9b890b"} Sep 29 11:00:33 crc kubenswrapper[4991]: I0929 11:00:33.935273 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4984e2cafe370f4a7248c07b409dfc62d0622a860085d2af088ee00a4e9b890b" Sep 29 11:00:33 crc kubenswrapper[4991]: I0929 11:00:33.962788 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phq4h" Sep 29 11:00:34 crc kubenswrapper[4991]: I0929 11:00:34.053987 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8027af1e-12e1-476d-8fb6-670989ece336-utilities\") pod \"8027af1e-12e1-476d-8fb6-670989ece336\" (UID: \"8027af1e-12e1-476d-8fb6-670989ece336\") " Sep 29 11:00:34 crc kubenswrapper[4991]: I0929 11:00:34.054171 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24cjm\" (UniqueName: \"kubernetes.io/projected/8027af1e-12e1-476d-8fb6-670989ece336-kube-api-access-24cjm\") pod \"8027af1e-12e1-476d-8fb6-670989ece336\" (UID: \"8027af1e-12e1-476d-8fb6-670989ece336\") " Sep 29 11:00:34 crc kubenswrapper[4991]: I0929 11:00:34.054228 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8027af1e-12e1-476d-8fb6-670989ece336-catalog-content\") pod \"8027af1e-12e1-476d-8fb6-670989ece336\" (UID: \"8027af1e-12e1-476d-8fb6-670989ece336\") " Sep 29 11:00:34 crc kubenswrapper[4991]: I0929 11:00:34.055095 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8027af1e-12e1-476d-8fb6-670989ece336-utilities" (OuterVolumeSpecName: "utilities") pod "8027af1e-12e1-476d-8fb6-670989ece336" (UID: "8027af1e-12e1-476d-8fb6-670989ece336"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:00:34 crc kubenswrapper[4991]: I0929 11:00:34.055567 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8027af1e-12e1-476d-8fb6-670989ece336-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 11:00:34 crc kubenswrapper[4991]: I0929 11:00:34.062893 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8027af1e-12e1-476d-8fb6-670989ece336-kube-api-access-24cjm" (OuterVolumeSpecName: "kube-api-access-24cjm") pod "8027af1e-12e1-476d-8fb6-670989ece336" (UID: "8027af1e-12e1-476d-8fb6-670989ece336"). InnerVolumeSpecName "kube-api-access-24cjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:00:34 crc kubenswrapper[4991]: I0929 11:00:34.069315 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8027af1e-12e1-476d-8fb6-670989ece336-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8027af1e-12e1-476d-8fb6-670989ece336" (UID: "8027af1e-12e1-476d-8fb6-670989ece336"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:00:34 crc kubenswrapper[4991]: I0929 11:00:34.157690 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24cjm\" (UniqueName: \"kubernetes.io/projected/8027af1e-12e1-476d-8fb6-670989ece336-kube-api-access-24cjm\") on node \"crc\" DevicePath \"\"" Sep 29 11:00:34 crc kubenswrapper[4991]: I0929 11:00:34.157730 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8027af1e-12e1-476d-8fb6-670989ece336-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 11:00:34 crc kubenswrapper[4991]: I0929 11:00:34.942744 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phq4h" Sep 29 11:00:34 crc kubenswrapper[4991]: I0929 11:00:34.990478 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-phq4h"] Sep 29 11:00:35 crc kubenswrapper[4991]: I0929 11:00:35.002449 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-phq4h"] Sep 29 11:00:36 crc kubenswrapper[4991]: I0929 11:00:36.955706 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8027af1e-12e1-476d-8fb6-670989ece336" path="/var/lib/kubelet/pods/8027af1e-12e1-476d-8fb6-670989ece336/volumes" Sep 29 11:01:00 crc kubenswrapper[4991]: I0929 11:01:00.163804 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29319061-swvvf"] Sep 29 11:01:00 crc kubenswrapper[4991]: E0929 11:01:00.164975 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8027af1e-12e1-476d-8fb6-670989ece336" containerName="extract-utilities" Sep 29 11:01:00 crc kubenswrapper[4991]: I0929 11:01:00.164996 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="8027af1e-12e1-476d-8fb6-670989ece336" containerName="extract-utilities" Sep 29 11:01:00 crc kubenswrapper[4991]: E0929 11:01:00.165006 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8027af1e-12e1-476d-8fb6-670989ece336" containerName="registry-server" Sep 29 11:01:00 crc kubenswrapper[4991]: I0929 11:01:00.165012 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="8027af1e-12e1-476d-8fb6-670989ece336" containerName="registry-server" Sep 29 11:01:00 crc kubenswrapper[4991]: E0929 11:01:00.165044 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8027af1e-12e1-476d-8fb6-670989ece336" containerName="extract-content" Sep 29 11:01:00 crc kubenswrapper[4991]: I0929 11:01:00.165051 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="8027af1e-12e1-476d-8fb6-670989ece336" containerName="extract-content" Sep 29 11:01:00 crc kubenswrapper[4991]: I0929 11:01:00.165296 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="8027af1e-12e1-476d-8fb6-670989ece336" containerName="registry-server" Sep 29 11:01:00 crc kubenswrapper[4991]: I0929 11:01:00.166332 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29319061-swvvf" Sep 29 11:01:00 crc kubenswrapper[4991]: I0929 11:01:00.175720 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29319061-swvvf"] Sep 29 11:01:00 crc kubenswrapper[4991]: I0929 11:01:00.296652 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-combined-ca-bundle\") pod \"keystone-cron-29319061-swvvf\" (UID: \"b44ec6ab-d0b6-42c5-9c64-48e245f4241c\") " pod="openstack/keystone-cron-29319061-swvvf" Sep 29 11:01:00 crc kubenswrapper[4991]: I0929 11:01:00.297189 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-config-data\") pod \"keystone-cron-29319061-swvvf\" (UID: \"b44ec6ab-d0b6-42c5-9c64-48e245f4241c\") " pod="openstack/keystone-cron-29319061-swvvf" Sep 29 11:01:00 crc kubenswrapper[4991]: I0929 11:01:00.297346 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vbsk\" (UniqueName: \"kubernetes.io/projected/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-kube-api-access-2vbsk\") pod \"keystone-cron-29319061-swvvf\" (UID: \"b44ec6ab-d0b6-42c5-9c64-48e245f4241c\") " pod="openstack/keystone-cron-29319061-swvvf" Sep 29 11:01:00 crc kubenswrapper[4991]: I0929 11:01:00.297618 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-fernet-keys\") pod \"keystone-cron-29319061-swvvf\" (UID: \"b44ec6ab-d0b6-42c5-9c64-48e245f4241c\") " pod="openstack/keystone-cron-29319061-swvvf" Sep 29 11:01:00 crc kubenswrapper[4991]: I0929 11:01:00.400236 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-fernet-keys\") pod \"keystone-cron-29319061-swvvf\" (UID: \"b44ec6ab-d0b6-42c5-9c64-48e245f4241c\") " pod="openstack/keystone-cron-29319061-swvvf" Sep 29 11:01:00 crc kubenswrapper[4991]: I0929 11:01:00.400380 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-combined-ca-bundle\") pod \"keystone-cron-29319061-swvvf\" (UID: \"b44ec6ab-d0b6-42c5-9c64-48e245f4241c\") " pod="openstack/keystone-cron-29319061-swvvf" Sep 29 11:01:00 crc kubenswrapper[4991]: I0929 11:01:00.400522 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-config-data\") pod \"keystone-cron-29319061-swvvf\" (UID: \"b44ec6ab-d0b6-42c5-9c64-48e245f4241c\") " pod="openstack/keystone-cron-29319061-swvvf" Sep 29 11:01:00 crc kubenswrapper[4991]: I0929 11:01:00.400562 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vbsk\" (UniqueName: \"kubernetes.io/projected/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-kube-api-access-2vbsk\") pod \"keystone-cron-29319061-swvvf\" (UID: \"b44ec6ab-d0b6-42c5-9c64-48e245f4241c\") " pod="openstack/keystone-cron-29319061-swvvf" Sep 29 11:01:00 crc kubenswrapper[4991]: I0929 11:01:00.407077 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-combined-ca-bundle\") pod \"keystone-cron-29319061-swvvf\" (UID: \"b44ec6ab-d0b6-42c5-9c64-48e245f4241c\") " pod="openstack/keystone-cron-29319061-swvvf" Sep 29 11:01:00 crc kubenswrapper[4991]: I0929 11:01:00.407220 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-config-data\") pod \"keystone-cron-29319061-swvvf\" (UID: \"b44ec6ab-d0b6-42c5-9c64-48e245f4241c\") " pod="openstack/keystone-cron-29319061-swvvf" Sep 29 11:01:00 crc kubenswrapper[4991]: I0929 11:01:00.407640 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-fernet-keys\") pod \"keystone-cron-29319061-swvvf\" (UID: \"b44ec6ab-d0b6-42c5-9c64-48e245f4241c\") " pod="openstack/keystone-cron-29319061-swvvf" Sep 29 11:01:00 crc kubenswrapper[4991]: I0929 11:01:00.415599 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vbsk\" (UniqueName: \"kubernetes.io/projected/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-kube-api-access-2vbsk\") pod \"keystone-cron-29319061-swvvf\" (UID: \"b44ec6ab-d0b6-42c5-9c64-48e245f4241c\") " pod="openstack/keystone-cron-29319061-swvvf" Sep 29 11:01:00 crc kubenswrapper[4991]: I0929 11:01:00.496563 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29319061-swvvf" Sep 29 11:01:00 crc kubenswrapper[4991]: I0929 11:01:00.975066 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29319061-swvvf"] Sep 29 11:01:01 crc kubenswrapper[4991]: I0929 11:01:01.253722 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29319061-swvvf" event={"ID":"b44ec6ab-d0b6-42c5-9c64-48e245f4241c","Type":"ContainerStarted","Data":"50b49c3a0a59d92918f573039ce3afa7c27e7a7acfe928a81dac2ee40b9ef3a1"} Sep 29 11:01:01 crc kubenswrapper[4991]: I0929 11:01:01.254053 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29319061-swvvf" event={"ID":"b44ec6ab-d0b6-42c5-9c64-48e245f4241c","Type":"ContainerStarted","Data":"7de4734cd9647e92b7af68dbc0085c8068d22c44a6bec3642638de52b4d92efa"} Sep 29 11:01:01 crc kubenswrapper[4991]: I0929 11:01:01.278830 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29319061-swvvf" podStartSLOduration=1.278811383 podStartE2EDuration="1.278811383s" podCreationTimestamp="2025-09-29 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 11:01:01.266279236 +0000 UTC m=+4997.122207274" watchObservedRunningTime="2025-09-29 11:01:01.278811383 +0000 UTC m=+4997.134739411" Sep 29 11:01:07 crc kubenswrapper[4991]: I0929 11:01:07.322210 4991 generic.go:334] "Generic (PLEG): container finished" podID="b44ec6ab-d0b6-42c5-9c64-48e245f4241c" containerID="50b49c3a0a59d92918f573039ce3afa7c27e7a7acfe928a81dac2ee40b9ef3a1" exitCode=0 Sep 29 11:01:07 crc kubenswrapper[4991]: I0929 11:01:07.322330 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29319061-swvvf" event={"ID":"b44ec6ab-d0b6-42c5-9c64-48e245f4241c","Type":"ContainerDied","Data":"50b49c3a0a59d92918f573039ce3afa7c27e7a7acfe928a81dac2ee40b9ef3a1"} Sep 29 11:01:07 crc kubenswrapper[4991]: 
I0929 11:01:07.946389 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:01:07 crc kubenswrapper[4991]: I0929 11:01:07.946767 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:01:08 crc kubenswrapper[4991]: I0929 11:01:08.778032 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29319061-swvvf" Sep 29 11:01:08 crc kubenswrapper[4991]: I0929 11:01:08.947415 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-combined-ca-bundle\") pod \"b44ec6ab-d0b6-42c5-9c64-48e245f4241c\" (UID: \"b44ec6ab-d0b6-42c5-9c64-48e245f4241c\") " Sep 29 11:01:08 crc kubenswrapper[4991]: I0929 11:01:08.947555 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-fernet-keys\") pod \"b44ec6ab-d0b6-42c5-9c64-48e245f4241c\" (UID: \"b44ec6ab-d0b6-42c5-9c64-48e245f4241c\") " Sep 29 11:01:08 crc kubenswrapper[4991]: I0929 11:01:08.947650 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vbsk\" (UniqueName: \"kubernetes.io/projected/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-kube-api-access-2vbsk\") pod \"b44ec6ab-d0b6-42c5-9c64-48e245f4241c\" (UID: \"b44ec6ab-d0b6-42c5-9c64-48e245f4241c\") " Sep 29 11:01:08 crc kubenswrapper[4991]: I0929 11:01:08.947811 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-config-data\") pod \"b44ec6ab-d0b6-42c5-9c64-48e245f4241c\" (UID: \"b44ec6ab-d0b6-42c5-9c64-48e245f4241c\") " Sep 29 11:01:08 crc kubenswrapper[4991]: I0929 11:01:08.956596 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b44ec6ab-d0b6-42c5-9c64-48e245f4241c" (UID: "b44ec6ab-d0b6-42c5-9c64-48e245f4241c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 11:01:08 crc kubenswrapper[4991]: I0929 11:01:08.958568 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-kube-api-access-2vbsk" (OuterVolumeSpecName: "kube-api-access-2vbsk") pod "b44ec6ab-d0b6-42c5-9c64-48e245f4241c" (UID: "b44ec6ab-d0b6-42c5-9c64-48e245f4241c"). InnerVolumeSpecName "kube-api-access-2vbsk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:01:08 crc kubenswrapper[4991]: I0929 11:01:08.991312 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b44ec6ab-d0b6-42c5-9c64-48e245f4241c" (UID: "b44ec6ab-d0b6-42c5-9c64-48e245f4241c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 11:01:09 crc kubenswrapper[4991]: I0929 11:01:09.017120 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-config-data" (OuterVolumeSpecName: "config-data") pod "b44ec6ab-d0b6-42c5-9c64-48e245f4241c" (UID: "b44ec6ab-d0b6-42c5-9c64-48e245f4241c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 11:01:09 crc kubenswrapper[4991]: I0929 11:01:09.052348 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 11:01:09 crc kubenswrapper[4991]: I0929 11:01:09.052410 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 11:01:09 crc kubenswrapper[4991]: I0929 11:01:09.052429 4991 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-fernet-keys\") on node \"crc\" DevicePath \"\"" Sep 29 11:01:09 crc kubenswrapper[4991]: I0929 11:01:09.052441 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vbsk\" (UniqueName: \"kubernetes.io/projected/b44ec6ab-d0b6-42c5-9c64-48e245f4241c-kube-api-access-2vbsk\") on node \"crc\" DevicePath \"\"" Sep 29 11:01:09 crc kubenswrapper[4991]: I0929 11:01:09.345665 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29319061-swvvf" event={"ID":"b44ec6ab-d0b6-42c5-9c64-48e245f4241c","Type":"ContainerDied","Data":"7de4734cd9647e92b7af68dbc0085c8068d22c44a6bec3642638de52b4d92efa"} Sep 29 11:01:09 crc kubenswrapper[4991]: I0929 11:01:09.346016 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7de4734cd9647e92b7af68dbc0085c8068d22c44a6bec3642638de52b4d92efa" Sep 29 11:01:09 crc kubenswrapper[4991]: I0929 11:01:09.345707 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29319061-swvvf" Sep 29 11:01:37 crc kubenswrapper[4991]: I0929 11:01:37.946398 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:01:37 crc kubenswrapper[4991]: I0929 11:01:37.946932 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:02:07 crc kubenswrapper[4991]: I0929 11:02:07.947383 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:02:07 crc kubenswrapper[4991]: I0929 11:02:07.947890 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:02:07 crc kubenswrapper[4991]: I0929 11:02:07.947929 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 11:02:07 crc kubenswrapper[4991]: I0929 11:02:07.948832 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99cc4920808487dac896e957b7d6586964409972173a7482035091bf4575296b"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 11:02:07 crc kubenswrapper[4991]: I0929 11:02:07.948891 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://99cc4920808487dac896e957b7d6586964409972173a7482035091bf4575296b" gracePeriod=600 Sep 29 11:02:08 crc kubenswrapper[4991]: I0929 11:02:08.966143 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="99cc4920808487dac896e957b7d6586964409972173a7482035091bf4575296b" exitCode=0 Sep 29 11:02:08 crc kubenswrapper[4991]: I0929 11:02:08.966227 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"99cc4920808487dac896e957b7d6586964409972173a7482035091bf4575296b"} Sep 29 11:02:08 crc kubenswrapper[4991]: I0929 11:02:08.967016 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5"} Sep 29 11:02:08 crc 
Sep 29 11:04:30 crc kubenswrapper[4991]: I0929 11:04:30.386926 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c4752"] Sep 29 11:04:30 crc kubenswrapper[4991]: E0929 11:04:30.388238 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b44ec6ab-d0b6-42c5-9c64-48e245f4241c" containerName="keystone-cron" Sep 29 11:04:30 crc kubenswrapper[4991]: I0929 11:04:30.388259 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="b44ec6ab-d0b6-42c5-9c64-48e245f4241c" containerName="keystone-cron" Sep 29 11:04:30 crc kubenswrapper[4991]: I0929 11:04:30.388620 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="b44ec6ab-d0b6-42c5-9c64-48e245f4241c" containerName="keystone-cron" Sep 29 11:04:30 crc kubenswrapper[4991]: I0929 11:04:30.390931 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4752" Sep 29 11:04:30 crc kubenswrapper[4991]: I0929 11:04:30.397304 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c4752"] Sep 29 11:04:30 crc kubenswrapper[4991]: I0929 11:04:30.473508 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cbcba1f-f988-4f6f-9fce-1ef60a331e41-utilities\") pod \"community-operators-c4752\" (UID: \"2cbcba1f-f988-4f6f-9fce-1ef60a331e41\") " pod="openshift-marketplace/community-operators-c4752" Sep 29 11:04:30 crc kubenswrapper[4991]: I0929 11:04:30.474175 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cbcba1f-f988-4f6f-9fce-1ef60a331e41-catalog-content\") pod \"community-operators-c4752\" (UID: \"2cbcba1f-f988-4f6f-9fce-1ef60a331e41\") " pod="openshift-marketplace/community-operators-c4752" Sep 29 11:04:30 crc kubenswrapper[4991]: I0929 11:04:30.474261 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj6lt\" (UniqueName: \"kubernetes.io/projected/2cbcba1f-f988-4f6f-9fce-1ef60a331e41-kube-api-access-kj6lt\") pod \"community-operators-c4752\" (UID: \"2cbcba1f-f988-4f6f-9fce-1ef60a331e41\") " pod="openshift-marketplace/community-operators-c4752" Sep 29 11:04:30 crc kubenswrapper[4991]: I0929 11:04:30.576848 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cbcba1f-f988-4f6f-9fce-1ef60a331e41-catalog-content\") pod \"community-operators-c4752\" (UID: \"2cbcba1f-f988-4f6f-9fce-1ef60a331e41\") " pod="openshift-marketplace/community-operators-c4752" Sep 29 11:04:30 crc kubenswrapper[4991]: I0929 11:04:30.576911 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj6lt\" (UniqueName: \"kubernetes.io/projected/2cbcba1f-f988-4f6f-9fce-1ef60a331e41-kube-api-access-kj6lt\") pod \"community-operators-c4752\" (UID: \"2cbcba1f-f988-4f6f-9fce-1ef60a331e41\") " pod="openshift-marketplace/community-operators-c4752" Sep 29 11:04:30 crc kubenswrapper[4991]: I0929 11:04:30.577073 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2cbcba1f-f988-4f6f-9fce-1ef60a331e41-utilities\") pod \"community-operators-c4752\" (UID: \"2cbcba1f-f988-4f6f-9fce-1ef60a331e41\") " pod="openshift-marketplace/community-operators-c4752" Sep 29 11:04:30 crc kubenswrapper[4991]: I0929 11:04:30.577641 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cbcba1f-f988-4f6f-9fce-1ef60a331e41-catalog-content\") pod \"community-operators-c4752\" (UID: \"2cbcba1f-f988-4f6f-9fce-1ef60a331e41\") " pod="openshift-marketplace/community-operators-c4752" Sep 29 11:04:30 crc kubenswrapper[4991]: I0929 11:04:30.577666 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cbcba1f-f988-4f6f-9fce-1ef60a331e41-utilities\") pod \"community-operators-c4752\" (UID: \"2cbcba1f-f988-4f6f-9fce-1ef60a331e41\") " pod="openshift-marketplace/community-operators-c4752" Sep 29 11:04:30 crc kubenswrapper[4991]: I0929 11:04:30.964208 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj6lt\" (UniqueName: \"kubernetes.io/projected/2cbcba1f-f988-4f6f-9fce-1ef60a331e41-kube-api-access-kj6lt\") pod \"community-operators-c4752\" (UID: \"2cbcba1f-f988-4f6f-9fce-1ef60a331e41\") " pod="openshift-marketplace/community-operators-c4752" Sep 29 11:04:31 crc kubenswrapper[4991]: I0929 11:04:31.029161 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4752" Sep 29 11:04:31 crc kubenswrapper[4991]: I0929 11:04:31.631363 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c4752"] Sep 29 11:04:32 crc kubenswrapper[4991]: I0929 11:04:32.537525 4991 generic.go:334] "Generic (PLEG): container finished" podID="2cbcba1f-f988-4f6f-9fce-1ef60a331e41" containerID="af6e27d5203cecc06aaae97f03e49b2ea03021cd5bd19a0d3c06f6e0dec4e7bb" exitCode=0 Sep 29 11:04:32 crc kubenswrapper[4991]: I0929 11:04:32.537655 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4752" event={"ID":"2cbcba1f-f988-4f6f-9fce-1ef60a331e41","Type":"ContainerDied","Data":"af6e27d5203cecc06aaae97f03e49b2ea03021cd5bd19a0d3c06f6e0dec4e7bb"} Sep 29 11:04:32 crc kubenswrapper[4991]: I0929 11:04:32.537811 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4752" event={"ID":"2cbcba1f-f988-4f6f-9fce-1ef60a331e41","Type":"ContainerStarted","Data":"97761771f8b913803942a873588f39b4006cb06a9c4dc17d8fac04e1d5498706"} Sep 29 11:04:32 crc kubenswrapper[4991]: I0929 11:04:32.539553 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 29 11:04:33 crc kubenswrapper[4991]: I0929 11:04:33.553445 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4752" event={"ID":"2cbcba1f-f988-4f6f-9fce-1ef60a331e41","Type":"ContainerStarted","Data":"3a9392483164a9fd1122708a0d1a9fd2d70ac03d25214cf7579491898ca68894"} Sep 29 11:04:34 crc kubenswrapper[4991]: I0929 11:04:34.566739 4991 generic.go:334] "Generic (PLEG): container finished" podID="2cbcba1f-f988-4f6f-9fce-1ef60a331e41" containerID="3a9392483164a9fd1122708a0d1a9fd2d70ac03d25214cf7579491898ca68894" exitCode=0 Sep 29 11:04:34 crc kubenswrapper[4991]: I0929 11:04:34.567094 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-c4752" event={"ID":"2cbcba1f-f988-4f6f-9fce-1ef60a331e41","Type":"ContainerDied","Data":"3a9392483164a9fd1122708a0d1a9fd2d70ac03d25214cf7579491898ca68894"} Sep 29 11:04:35 crc kubenswrapper[4991]: I0929 11:04:35.581702 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4752" event={"ID":"2cbcba1f-f988-4f6f-9fce-1ef60a331e41","Type":"ContainerStarted","Data":"bbe8c4ceee1401c4d7966a009990df8d6ad3980f2a9ec7a4dde29a09af35744e"} Sep 29 11:04:35 crc kubenswrapper[4991]: I0929 11:04:35.584756 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w72tb"] Sep 29 11:04:35 crc kubenswrapper[4991]: I0929 11:04:35.587819 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w72tb" Sep 29 11:04:35 crc kubenswrapper[4991]: I0929 11:04:35.603805 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w72tb"] Sep 29 11:04:35 crc kubenswrapper[4991]: I0929 11:04:35.622424 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c4752" podStartSLOduration=2.816740232 podStartE2EDuration="5.622377892s" podCreationTimestamp="2025-09-29 11:04:30 +0000 UTC" firstStartedPulling="2025-09-29 11:04:32.539301728 +0000 UTC m=+5208.395229756" lastFinishedPulling="2025-09-29 11:04:35.344939388 +0000 UTC m=+5211.200867416" observedRunningTime="2025-09-29 11:04:35.620696098 +0000 UTC m=+5211.476624126" watchObservedRunningTime="2025-09-29 11:04:35.622377892 +0000 UTC m=+5211.478305920" Sep 29 11:04:35 crc kubenswrapper[4991]: I0929 11:04:35.702942 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5e1f548-b16d-4d2e-a8ad-0453fd76f74e-utilities\") pod \"redhat-operators-w72tb\" (UID: \"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e\") " pod="openshift-marketplace/redhat-operators-w72tb" Sep 29 11:04:35 crc kubenswrapper[4991]: I0929 11:04:35.703147 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl8zr\" (UniqueName: \"kubernetes.io/projected/a5e1f548-b16d-4d2e-a8ad-0453fd76f74e-kube-api-access-pl8zr\") pod \"redhat-operators-w72tb\" (UID: \"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e\") " pod="openshift-marketplace/redhat-operators-w72tb" Sep 29 11:04:35 crc kubenswrapper[4991]: I0929 11:04:35.703195 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5e1f548-b16d-4d2e-a8ad-0453fd76f74e-catalog-content\") pod \"redhat-operators-w72tb\" (UID: \"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e\") " pod="openshift-marketplace/redhat-operators-w72tb" Sep 29 11:04:35 crc kubenswrapper[4991]: I0929 11:04:35.805415 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5e1f548-b16d-4d2e-a8ad-0453fd76f74e-utilities\") pod \"redhat-operators-w72tb\" (UID: \"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e\") " pod="openshift-marketplace/redhat-operators-w72tb" Sep 29 11:04:35 crc kubenswrapper[4991]: I0929 11:04:35.805537 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl8zr\" (UniqueName: 
\"kubernetes.io/projected/a5e1f548-b16d-4d2e-a8ad-0453fd76f74e-kube-api-access-pl8zr\") pod \"redhat-operators-w72tb\" (UID: \"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e\") " pod="openshift-marketplace/redhat-operators-w72tb" Sep 29 11:04:35 crc kubenswrapper[4991]: I0929 11:04:35.805563 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5e1f548-b16d-4d2e-a8ad-0453fd76f74e-catalog-content\") pod \"redhat-operators-w72tb\" (UID: \"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e\") " pod="openshift-marketplace/redhat-operators-w72tb" Sep 29 11:04:35 crc kubenswrapper[4991]: I0929 11:04:35.805848 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5e1f548-b16d-4d2e-a8ad-0453fd76f74e-utilities\") pod \"redhat-operators-w72tb\" (UID: \"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e\") " pod="openshift-marketplace/redhat-operators-w72tb" Sep 29 11:04:35 crc kubenswrapper[4991]: I0929 11:04:35.805925 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5e1f548-b16d-4d2e-a8ad-0453fd76f74e-catalog-content\") pod \"redhat-operators-w72tb\" (UID: \"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e\") " pod="openshift-marketplace/redhat-operators-w72tb" Sep 29 11:04:35 crc kubenswrapper[4991]: I0929 11:04:35.832843 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl8zr\" (UniqueName: \"kubernetes.io/projected/a5e1f548-b16d-4d2e-a8ad-0453fd76f74e-kube-api-access-pl8zr\") pod \"redhat-operators-w72tb\" (UID: \"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e\") " pod="openshift-marketplace/redhat-operators-w72tb" Sep 29 11:04:35 crc kubenswrapper[4991]: I0929 11:04:35.908647 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w72tb" Sep 29 11:04:36 crc kubenswrapper[4991]: I0929 11:04:36.414178 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w72tb"] Sep 29 11:04:36 crc kubenswrapper[4991]: I0929 11:04:36.592020 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w72tb" event={"ID":"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e","Type":"ContainerStarted","Data":"99d33d4e462a08c272ad394214a1eba90eae75da95c42aaf7881a14284c4e913"} Sep 29 11:04:37 crc kubenswrapper[4991]: I0929 11:04:37.603165 4991 generic.go:334] "Generic (PLEG): container finished" podID="a5e1f548-b16d-4d2e-a8ad-0453fd76f74e" containerID="bc290ecde1332aa83b5fcf13256c0372c3f461fc31c660eceb59e8298db7251a" exitCode=0 Sep 29 11:04:37 crc kubenswrapper[4991]: I0929 11:04:37.603223 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w72tb" event={"ID":"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e","Type":"ContainerDied","Data":"bc290ecde1332aa83b5fcf13256c0372c3f461fc31c660eceb59e8298db7251a"} Sep 29 11:04:37 crc kubenswrapper[4991]: I0929 11:04:37.946680 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:04:37 crc kubenswrapper[4991]: I0929 11:04:37.946744 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:04:41 crc kubenswrapper[4991]: I0929 11:04:41.029617 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c4752" Sep 29 11:04:41 crc kubenswrapper[4991]: I0929 11:04:41.030247 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c4752" Sep 29 11:04:41 crc kubenswrapper[4991]: I0929 11:04:41.084602 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c4752" Sep 29 11:04:42 crc kubenswrapper[4991]: I0929 11:04:42.319289 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c4752" Sep 29 11:04:43 crc kubenswrapper[4991]: I0929 11:04:43.376770 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c4752"] Sep 29 11:04:43 crc kubenswrapper[4991]: I0929 11:04:43.690286 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c4752" podUID="2cbcba1f-f988-4f6f-9fce-1ef60a331e41" containerName="registry-server" containerID="cri-o://bbe8c4ceee1401c4d7966a009990df8d6ad3980f2a9ec7a4dde29a09af35744e" gracePeriod=2 Sep 29 11:04:44 crc kubenswrapper[4991]: I0929 11:04:44.702084 4991 generic.go:334] "Generic (PLEG): container finished" podID="2cbcba1f-f988-4f6f-9fce-1ef60a331e41" containerID="bbe8c4ceee1401c4d7966a009990df8d6ad3980f2a9ec7a4dde29a09af35744e" exitCode=0 Sep 29 11:04:44 crc kubenswrapper[4991]: I0929 11:04:44.702170 4991 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-c4752" event={"ID":"2cbcba1f-f988-4f6f-9fce-1ef60a331e41","Type":"ContainerDied","Data":"bbe8c4ceee1401c4d7966a009990df8d6ad3980f2a9ec7a4dde29a09af35744e"} Sep 29 11:04:49 crc kubenswrapper[4991]: I0929 11:04:49.409353 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4752" Sep 29 11:04:49 crc kubenswrapper[4991]: I0929 11:04:49.567914 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cbcba1f-f988-4f6f-9fce-1ef60a331e41-utilities\") pod \"2cbcba1f-f988-4f6f-9fce-1ef60a331e41\" (UID: \"2cbcba1f-f988-4f6f-9fce-1ef60a331e41\") " Sep 29 11:04:49 crc kubenswrapper[4991]: I0929 11:04:49.568216 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj6lt\" (UniqueName: \"kubernetes.io/projected/2cbcba1f-f988-4f6f-9fce-1ef60a331e41-kube-api-access-kj6lt\") pod \"2cbcba1f-f988-4f6f-9fce-1ef60a331e41\" (UID: \"2cbcba1f-f988-4f6f-9fce-1ef60a331e41\") " Sep 29 11:04:49 crc kubenswrapper[4991]: I0929 11:04:49.568282 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cbcba1f-f988-4f6f-9fce-1ef60a331e41-catalog-content\") pod \"2cbcba1f-f988-4f6f-9fce-1ef60a331e41\" (UID: \"2cbcba1f-f988-4f6f-9fce-1ef60a331e41\") " Sep 29 11:04:49 crc kubenswrapper[4991]: I0929 11:04:49.568816 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cbcba1f-f988-4f6f-9fce-1ef60a331e41-utilities" (OuterVolumeSpecName: "utilities") pod "2cbcba1f-f988-4f6f-9fce-1ef60a331e41" (UID: "2cbcba1f-f988-4f6f-9fce-1ef60a331e41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:04:49 crc kubenswrapper[4991]: I0929 11:04:49.569157 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cbcba1f-f988-4f6f-9fce-1ef60a331e41-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 11:04:49 crc kubenswrapper[4991]: I0929 11:04:49.574314 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cbcba1f-f988-4f6f-9fce-1ef60a331e41-kube-api-access-kj6lt" (OuterVolumeSpecName: "kube-api-access-kj6lt") pod "2cbcba1f-f988-4f6f-9fce-1ef60a331e41" (UID: "2cbcba1f-f988-4f6f-9fce-1ef60a331e41"). InnerVolumeSpecName "kube-api-access-kj6lt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:04:49 crc kubenswrapper[4991]: I0929 11:04:49.614725 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cbcba1f-f988-4f6f-9fce-1ef60a331e41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2cbcba1f-f988-4f6f-9fce-1ef60a331e41" (UID: "2cbcba1f-f988-4f6f-9fce-1ef60a331e41"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:04:49 crc kubenswrapper[4991]: I0929 11:04:49.671996 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj6lt\" (UniqueName: \"kubernetes.io/projected/2cbcba1f-f988-4f6f-9fce-1ef60a331e41-kube-api-access-kj6lt\") on node \"crc\" DevicePath \"\"" Sep 29 11:04:49 crc kubenswrapper[4991]: I0929 11:04:49.672042 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cbcba1f-f988-4f6f-9fce-1ef60a331e41-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 11:04:49 crc kubenswrapper[4991]: I0929 11:04:49.758731 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4752" event={"ID":"2cbcba1f-f988-4f6f-9fce-1ef60a331e41","Type":"ContainerDied","Data":"97761771f8b913803942a873588f39b4006cb06a9c4dc17d8fac04e1d5498706"} Sep 29 11:04:49 crc kubenswrapper[4991]: I0929 11:04:49.758786 4991 scope.go:117] "RemoveContainer" containerID="bbe8c4ceee1401c4d7966a009990df8d6ad3980f2a9ec7a4dde29a09af35744e" Sep 29 11:04:49 crc kubenswrapper[4991]: I0929 11:04:49.758800 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4752" Sep 29 11:04:49 crc kubenswrapper[4991]: I0929 11:04:49.799096 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c4752"] Sep 29 11:04:49 crc kubenswrapper[4991]: I0929 11:04:49.808518 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c4752"] Sep 29 11:04:50 crc kubenswrapper[4991]: I0929 11:04:50.267097 4991 scope.go:117] "RemoveContainer" containerID="3a9392483164a9fd1122708a0d1a9fd2d70ac03d25214cf7579491898ca68894" Sep 29 11:04:50 crc kubenswrapper[4991]: I0929 11:04:50.296690 4991 scope.go:117] "RemoveContainer" containerID="af6e27d5203cecc06aaae97f03e49b2ea03021cd5bd19a0d3c06f6e0dec4e7bb" Sep 29 11:04:50 crc kubenswrapper[4991]: I0929 11:04:50.780303 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w72tb" event={"ID":"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e","Type":"ContainerStarted","Data":"31887cb6529fff7d28ff847bd5ba55bb485de69412a0395e8cc68fb85059de75"} Sep 29 11:04:50 crc kubenswrapper[4991]: I0929 11:04:50.939572 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cbcba1f-f988-4f6f-9fce-1ef60a331e41" path="/var/lib/kubelet/pods/2cbcba1f-f988-4f6f-9fce-1ef60a331e41/volumes" Sep 29 11:04:51 crc kubenswrapper[4991]: I0929 11:04:51.801095 4991 generic.go:334] "Generic (PLEG): container finished" podID="a5e1f548-b16d-4d2e-a8ad-0453fd76f74e" containerID="31887cb6529fff7d28ff847bd5ba55bb485de69412a0395e8cc68fb85059de75" exitCode=0 Sep 29 11:04:51 crc kubenswrapper[4991]: I0929 11:04:51.801157 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w72tb" event={"ID":"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e","Type":"ContainerDied","Data":"31887cb6529fff7d28ff847bd5ba55bb485de69412a0395e8cc68fb85059de75"} Sep 29 11:04:56 crc kubenswrapper[4991]: I0929 11:04:56.852444 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w72tb" event={"ID":"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e","Type":"ContainerStarted","Data":"0626c30aff085c3f7172ce46de79f2b346b306daac23138316e8a992a8456bcd"} Sep 29 11:04:56 crc kubenswrapper[4991]: I0929 11:04:56.875630 4991 
Sep 29 11:04:56 crc kubenswrapper[4991]: I0929 11:04:56.875630 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w72tb" podStartSLOduration=4.03542618 podStartE2EDuration="21.875610051s" podCreationTimestamp="2025-09-29 11:04:35 +0000 UTC" firstStartedPulling="2025-09-29 11:04:37.605021456 +0000 UTC m=+5213.460949484" lastFinishedPulling="2025-09-29 11:04:55.445205317 +0000 UTC m=+5231.301133355" observedRunningTime="2025-09-29 11:04:56.86904767 +0000 UTC m=+5232.724975718" watchObservedRunningTime="2025-09-29 11:04:56.875610051 +0000 UTC m=+5232.731538089" Sep 29 11:05:05 crc kubenswrapper[4991]: I0929 11:05:05.908774 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w72tb" Sep 29 11:05:05 crc kubenswrapper[4991]: I0929 11:05:05.909284 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w72tb" Sep 29 11:05:05 crc kubenswrapper[4991]: I0929 11:05:05.957802 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w72tb" Sep 29 11:05:06 crc kubenswrapper[4991]: I0929 11:05:06.034809 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w72tb" Sep 29 11:05:06 crc kubenswrapper[4991]: I0929 11:05:06.684698 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w72tb"] Sep 29 11:05:06 crc kubenswrapper[4991]: I0929 11:05:06.786018 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f7wps"] Sep 29 11:05:06 crc kubenswrapper[4991]: I0929 11:05:06.786284 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f7wps" podUID="a27f8f80-22e1-441b-a095-ef8d4ad55629" containerName="registry-server" containerID="cri-o://741b0b113b6351638664a8795c18ca3462f236811998379f909a3f00e2c8a65a" gracePeriod=2 Sep 29 11:05:06 crc kubenswrapper[4991]: I0929 11:05:06.994204 4991 generic.go:334] "Generic (PLEG): container finished" podID="a27f8f80-22e1-441b-a095-ef8d4ad55629" containerID="741b0b113b6351638664a8795c18ca3462f236811998379f909a3f00e2c8a65a" exitCode=0 Sep 29 11:05:06 crc kubenswrapper[4991]: I0929 11:05:06.995789 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7wps" event={"ID":"a27f8f80-22e1-441b-a095-ef8d4ad55629","Type":"ContainerDied","Data":"741b0b113b6351638664a8795c18ca3462f236811998379f909a3f00e2c8a65a"} Sep 29 11:05:07 crc kubenswrapper[4991]: I0929 11:05:07.342528 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f7wps" Sep 29 11:05:07 crc kubenswrapper[4991]: I0929 11:05:07.523371 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cdvs\" (UniqueName: \"kubernetes.io/projected/a27f8f80-22e1-441b-a095-ef8d4ad55629-kube-api-access-4cdvs\") pod \"a27f8f80-22e1-441b-a095-ef8d4ad55629\" (UID: \"a27f8f80-22e1-441b-a095-ef8d4ad55629\") " Sep 29 11:05:07 crc kubenswrapper[4991]: I0929 11:05:07.523441 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a27f8f80-22e1-441b-a095-ef8d4ad55629-utilities\") pod \"a27f8f80-22e1-441b-a095-ef8d4ad55629\" (UID: \"a27f8f80-22e1-441b-a095-ef8d4ad55629\") " Sep 29 11:05:07 crc kubenswrapper[4991]: I0929 11:05:07.523603 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a27f8f80-22e1-441b-a095-ef8d4ad55629-catalog-content\") pod \"a27f8f80-22e1-441b-a095-ef8d4ad55629\" (UID: \"a27f8f80-22e1-441b-a095-ef8d4ad55629\") " Sep 29 11:05:07 crc kubenswrapper[4991]: I0929 11:05:07.524568 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a27f8f80-22e1-441b-a095-ef8d4ad55629-utilities" (OuterVolumeSpecName: "utilities") pod "a27f8f80-22e1-441b-a095-ef8d4ad55629" (UID: "a27f8f80-22e1-441b-a095-ef8d4ad55629"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:05:07 crc kubenswrapper[4991]: I0929 11:05:07.532247 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a27f8f80-22e1-441b-a095-ef8d4ad55629-kube-api-access-4cdvs" (OuterVolumeSpecName: "kube-api-access-4cdvs") pod "a27f8f80-22e1-441b-a095-ef8d4ad55629" (UID: "a27f8f80-22e1-441b-a095-ef8d4ad55629"). InnerVolumeSpecName "kube-api-access-4cdvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:05:07 crc kubenswrapper[4991]: I0929 11:05:07.600360 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a27f8f80-22e1-441b-a095-ef8d4ad55629-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a27f8f80-22e1-441b-a095-ef8d4ad55629" (UID: "a27f8f80-22e1-441b-a095-ef8d4ad55629"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:05:07 crc kubenswrapper[4991]: I0929 11:05:07.628168 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a27f8f80-22e1-441b-a095-ef8d4ad55629-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 11:05:07 crc kubenswrapper[4991]: I0929 11:05:07.628215 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cdvs\" (UniqueName: \"kubernetes.io/projected/a27f8f80-22e1-441b-a095-ef8d4ad55629-kube-api-access-4cdvs\") on node \"crc\" DevicePath \"\"" Sep 29 11:05:07 crc kubenswrapper[4991]: I0929 11:05:07.628226 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a27f8f80-22e1-441b-a095-ef8d4ad55629-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 11:05:07 crc kubenswrapper[4991]: I0929 11:05:07.946461 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:05:07 crc kubenswrapper[4991]: I0929 11:05:07.946528 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:05:08 crc kubenswrapper[4991]: I0929 11:05:08.005100 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f7wps" event={"ID":"a27f8f80-22e1-441b-a095-ef8d4ad55629","Type":"ContainerDied","Data":"67f736e3a8031dff534dcebe588ec8791e763fcaad9ef1d092155f8eeb1af263"} Sep 29 11:05:08 crc kubenswrapper[4991]: I0929 11:05:08.005211 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f7wps" Sep 29 11:05:08 crc kubenswrapper[4991]: I0929 11:05:08.005521 4991 scope.go:117] "RemoveContainer" containerID="741b0b113b6351638664a8795c18ca3462f236811998379f909a3f00e2c8a65a" Sep 29 11:05:08 crc kubenswrapper[4991]: I0929 11:05:08.038518 4991 scope.go:117] "RemoveContainer" containerID="a5173133cca24364a7d7d4c3dedf8008232b8d62eaaf095bcf3367516fb9fb44" Sep 29 11:05:08 crc kubenswrapper[4991]: I0929 11:05:08.057512 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f7wps"] Sep 29 11:05:08 crc kubenswrapper[4991]: I0929 11:05:08.071330 4991 scope.go:117] "RemoveContainer" containerID="59b70974030a6d9da26ae8efa1086f71dd2a5cb211bb5436edec93faa8f12a03" Sep 29 11:05:08 crc kubenswrapper[4991]: I0929 11:05:08.077573 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f7wps"] Sep 29 11:05:08 crc kubenswrapper[4991]: I0929 11:05:08.939430 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a27f8f80-22e1-441b-a095-ef8d4ad55629" path="/var/lib/kubelet/pods/a27f8f80-22e1-441b-a095-ef8d4ad55629/volumes" Sep 29 11:05:37 crc kubenswrapper[4991]: I0929 11:05:37.947478 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:05:37 crc kubenswrapper[4991]: I0929 11:05:37.948156 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:05:37 crc kubenswrapper[4991]: I0929 11:05:37.948212 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 11:05:37 crc kubenswrapper[4991]: I0929 11:05:37.949308 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 11:05:37 crc kubenswrapper[4991]: I0929 11:05:37.949382 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5" gracePeriod=600 Sep 29 11:05:38 crc kubenswrapper[4991]: E0929 11:05:38.081171 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:05:38 crc kubenswrapper[4991]: I0929 11:05:38.414627 4991 
generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5" exitCode=0 Sep 29 11:05:38 crc kubenswrapper[4991]: I0929 11:05:38.414669 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5"} Sep 29 11:05:38 crc kubenswrapper[4991]: I0929 11:05:38.414703 4991 scope.go:117] "RemoveContainer" containerID="99cc4920808487dac896e957b7d6586964409972173a7482035091bf4575296b" Sep 29 11:05:38 crc kubenswrapper[4991]: I0929 11:05:38.415555 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5" Sep 29 11:05:38 crc kubenswrapper[4991]: E0929 11:05:38.416020 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:05:50 crc kubenswrapper[4991]: I0929 11:05:50.927122 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5" Sep 29 11:05:50 crc kubenswrapper[4991]: E0929 11:05:50.927993 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:06:02 crc kubenswrapper[4991]: I0929 11:06:02.926695 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5" Sep 29 11:06:02 crc kubenswrapper[4991]: E0929 11:06:02.927540 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:06:13 crc kubenswrapper[4991]: I0929 11:06:13.926544 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5" Sep 29 11:06:13 crc kubenswrapper[4991]: E0929 11:06:13.927414 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:06:24 crc kubenswrapper[4991]: I0929 11:06:24.766479 4991 scope.go:117] "RemoveContainer" containerID="db531d89acf5a66c88baaa728816802a2eceda05bff00db270d9648424cdc0c3"
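The "back-off 5m0s" errors above keep repeating on every pod sync until the back-off window expires; the container is finally restarted at 11:10:39, about five minutes after it died at 11:05:38. A rough model of the kubelet's restart back-off, assuming the usual 10s initial delay doubling up to a 5m cap (these defaults are not printed in this log; the error text only confirms the 5m cap):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Generic schedule: the per-restart delay doubles from an assumed 10s
	// default up to the 5m cap quoted in the error text above.
	delay := 10 * time.Second
	const maxDelay = 5 * time.Minute
	for delay < maxDelay {
		fmt.Printf("%v -> ", delay)
		delay *= 2
	}
	fmt.Println(maxDelay, "(cap)") // 10s -> 20s -> 40s -> 1m20s -> 2m40s -> 5m0s

	// In this log the counter is already at the cap when the container dies
	// (the very first error says "back-off 5m0s"), so the next start is gated
	// to roughly died + 5m.
	died := time.Date(2025, 9, 29, 11, 5, 38, 0, time.UTC) // ContainerDied above
	fmt.Println("earliest restart ~", died.Add(maxDelay).Format("15:04:05"))
	// prints ~11:10:38; the log shows the restart at 11:10:39-11:10:40
}
```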
containerID="db531d89acf5a66c88baaa728816802a2eceda05bff00db270d9648424cdc0c3" Sep 29 11:06:25 crc kubenswrapper[4991]: I0929 11:06:25.176726 4991 scope.go:117] "RemoveContainer" containerID="2fbec38e4f8e3e291fd93299131e0abb2512f166b0ffb6e5f8f756d5883e8869" Sep 29 11:06:25 crc kubenswrapper[4991]: I0929 11:06:25.249456 4991 scope.go:117] "RemoveContainer" containerID="8b916eb9480df776cb993ee9f2e3a0f7e79472646d1e5695bd278cceb7288f2e" Sep 29 11:06:25 crc kubenswrapper[4991]: I0929 11:06:25.927661 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5" Sep 29 11:06:25 crc kubenswrapper[4991]: E0929 11:06:25.928586 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:06:37 crc kubenswrapper[4991]: I0929 11:06:37.926365 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5" Sep 29 11:06:37 crc kubenswrapper[4991]: E0929 11:06:37.927362 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:06:51 crc kubenswrapper[4991]: I0929 11:06:51.927350 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5" Sep 29 11:06:51 crc kubenswrapper[4991]: E0929 11:06:51.928059 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:07:04 crc kubenswrapper[4991]: I0929 11:07:04.942381 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5" Sep 29 11:07:04 crc kubenswrapper[4991]: E0929 11:07:04.943140 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:07:17 crc kubenswrapper[4991]: I0929 11:07:17.926654 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5" Sep 29 11:07:17 crc kubenswrapper[4991]: E0929 11:07:17.927791 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:07:32 crc kubenswrapper[4991]: I0929 11:07:32.926662 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5"
Sep 29 11:07:32 crc kubenswrapper[4991]: E0929 11:07:32.927621 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:07:47 crc kubenswrapper[4991]: I0929 11:07:47.926687 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5"
Sep 29 11:07:47 crc kubenswrapper[4991]: E0929 11:07:47.927592 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:08:01 crc kubenswrapper[4991]: I0929 11:08:01.930825 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5"
Sep 29 11:08:01 crc kubenswrapper[4991]: E0929 11:08:01.931561 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:08:12 crc kubenswrapper[4991]: I0929 11:08:12.926773 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5"
Sep 29 11:08:12 crc kubenswrapper[4991]: E0929 11:08:12.927542 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:08:23 crc kubenswrapper[4991]: I0929 11:08:23.927289 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5"
Sep 29 11:08:23 crc kubenswrapper[4991]: E0929 11:08:23.928633 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:08:38 crc kubenswrapper[4991]: I0929 11:08:38.926884 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5"
Sep 29 11:08:38 crc kubenswrapper[4991]: E0929 11:08:38.927632 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:08:53 crc kubenswrapper[4991]: I0929 11:08:53.926411 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5"
Sep 29 11:08:53 crc kubenswrapper[4991]: E0929 11:08:53.927161 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:09:08 crc kubenswrapper[4991]: I0929 11:09:08.926297 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5"
Sep 29 11:09:08 crc kubenswrapper[4991]: E0929 11:09:08.927073 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:09:22 crc kubenswrapper[4991]: I0929 11:09:22.926295 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5"
Sep 29 11:09:22 crc kubenswrapper[4991]: E0929 11:09:22.928450 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:09:36 crc kubenswrapper[4991]: I0929 11:09:36.929805 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5"
Sep 29 11:09:36 crc kubenswrapper[4991]: E0929 11:09:36.930643 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:09:49 crc kubenswrapper[4991]: I0929 11:09:49.926716 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5"
Sep 29 11:09:49 crc kubenswrapper[4991]: E0929 11:09:49.927776 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:10:00 crc kubenswrapper[4991]: I0929 11:10:00.928477 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5"
Sep 29 11:10:00 crc kubenswrapper[4991]: E0929 11:10:00.929428 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:10:13 crc kubenswrapper[4991]: I0929 11:10:13.927045 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5"
Sep 29 11:10:13 crc kubenswrapper[4991]: E0929 11:10:13.928243 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:10:24 crc kubenswrapper[4991]: I0929 11:10:24.948052 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5"
Sep 29 11:10:24 crc kubenswrapper[4991]: E0929 11:10:24.949015 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:10:39 crc kubenswrapper[4991]: I0929 11:10:39.926549 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5"
Sep 29 11:10:40 crc kubenswrapper[4991]: I0929 11:10:40.886908 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"6899f1248f1ab543ca2eab35982dcdd38005a8b1036e82393572e638c1670d8e"}
Sep 29 11:12:55 crc kubenswrapper[4991]: I0929 11:12:55.570570 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rrfmk"]
Sep 29 11:12:55 crc kubenswrapper[4991]: E0929 11:12:55.571614 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a27f8f80-22e1-441b-a095-ef8d4ad55629" containerName="registry-server"
Sep 29 11:12:55 crc kubenswrapper[4991]: I0929 11:12:55.571629 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="a27f8f80-22e1-441b-a095-ef8d4ad55629" containerName="registry-server"
Sep 29 11:12:55 crc kubenswrapper[4991]: E0929 11:12:55.571670 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cbcba1f-f988-4f6f-9fce-1ef60a331e41" containerName="registry-server"
Sep 29 11:12:55 crc kubenswrapper[4991]: I0929 11:12:55.571677 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cbcba1f-f988-4f6f-9fce-1ef60a331e41" containerName="registry-server"
Sep 29 11:12:55 crc kubenswrapper[4991]: E0929 11:12:55.571700 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cbcba1f-f988-4f6f-9fce-1ef60a331e41" containerName="extract-utilities"
Sep 29 11:12:55 crc kubenswrapper[4991]: I0929 11:12:55.571707 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cbcba1f-f988-4f6f-9fce-1ef60a331e41" containerName="extract-utilities"
Sep 29 11:12:55 crc kubenswrapper[4991]: E0929 11:12:55.571720 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a27f8f80-22e1-441b-a095-ef8d4ad55629" containerName="extract-utilities"
Sep 29 11:12:55 crc kubenswrapper[4991]: I0929 11:12:55.571726 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="a27f8f80-22e1-441b-a095-ef8d4ad55629" containerName="extract-utilities"
Sep 29 11:12:55 crc kubenswrapper[4991]: E0929 11:12:55.571733 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a27f8f80-22e1-441b-a095-ef8d4ad55629" containerName="extract-content"
Sep 29 11:12:55 crc kubenswrapper[4991]: I0929 11:12:55.571740 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="a27f8f80-22e1-441b-a095-ef8d4ad55629" containerName="extract-content"
Sep 29 11:12:55 crc kubenswrapper[4991]: E0929 11:12:55.571768 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cbcba1f-f988-4f6f-9fce-1ef60a331e41" containerName="extract-content"
Sep 29 11:12:55 crc kubenswrapper[4991]: I0929 11:12:55.571774 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cbcba1f-f988-4f6f-9fce-1ef60a331e41" containerName="extract-content"
Sep 29 11:12:55 crc kubenswrapper[4991]: I0929 11:12:55.571991 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="a27f8f80-22e1-441b-a095-ef8d4ad55629" containerName="registry-server"
Sep 29 11:12:55 crc kubenswrapper[4991]: I0929 11:12:55.572013 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cbcba1f-f988-4f6f-9fce-1ef60a331e41" containerName="registry-server"
Sep 29 11:12:55 crc kubenswrapper[4991]: I0929 11:12:55.573862 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rrfmk"
Sep 29 11:12:55 crc kubenswrapper[4991]: I0929 11:12:55.580555 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rrfmk"]
Sep 29 11:12:55 crc kubenswrapper[4991]: I0929 11:12:55.584212 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d1e990-e341-40cf-a1d3-a617cf26e83b-utilities\") pod \"certified-operators-rrfmk\" (UID: \"83d1e990-e341-40cf-a1d3-a617cf26e83b\") " pod="openshift-marketplace/certified-operators-rrfmk"
Sep 29 11:12:55 crc kubenswrapper[4991]: I0929 11:12:55.584328 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d1e990-e341-40cf-a1d3-a617cf26e83b-catalog-content\") pod \"certified-operators-rrfmk\" (UID: \"83d1e990-e341-40cf-a1d3-a617cf26e83b\") " pod="openshift-marketplace/certified-operators-rrfmk"
Sep 29 11:12:55 crc kubenswrapper[4991]: I0929 11:12:55.584354 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6v9f\" (UniqueName: \"kubernetes.io/projected/83d1e990-e341-40cf-a1d3-a617cf26e83b-kube-api-access-g6v9f\") pod \"certified-operators-rrfmk\" (UID: \"83d1e990-e341-40cf-a1d3-a617cf26e83b\") " pod="openshift-marketplace/certified-operators-rrfmk"
Sep 29 11:12:55 crc kubenswrapper[4991]: I0929 11:12:55.686359 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d1e990-e341-40cf-a1d3-a617cf26e83b-catalog-content\") pod \"certified-operators-rrfmk\" (UID: \"83d1e990-e341-40cf-a1d3-a617cf26e83b\") " pod="openshift-marketplace/certified-operators-rrfmk"
Sep 29 11:12:55 crc kubenswrapper[4991]: I0929 11:12:55.686405 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6v9f\" (UniqueName: \"kubernetes.io/projected/83d1e990-e341-40cf-a1d3-a617cf26e83b-kube-api-access-g6v9f\") pod \"certified-operators-rrfmk\" (UID: \"83d1e990-e341-40cf-a1d3-a617cf26e83b\") " pod="openshift-marketplace/certified-operators-rrfmk"
Sep 29 11:12:55 crc kubenswrapper[4991]: I0929 11:12:55.686558 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d1e990-e341-40cf-a1d3-a617cf26e83b-utilities\") pod \"certified-operators-rrfmk\" (UID: \"83d1e990-e341-40cf-a1d3-a617cf26e83b\") " pod="openshift-marketplace/certified-operators-rrfmk"
Sep 29 11:12:55 crc kubenswrapper[4991]: I0929 11:12:55.687064 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d1e990-e341-40cf-a1d3-a617cf26e83b-utilities\") pod \"certified-operators-rrfmk\" (UID: \"83d1e990-e341-40cf-a1d3-a617cf26e83b\") " pod="openshift-marketplace/certified-operators-rrfmk"
Sep 29 11:12:55 crc kubenswrapper[4991]: I0929 11:12:55.687284 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d1e990-e341-40cf-a1d3-a617cf26e83b-catalog-content\") pod \"certified-operators-rrfmk\" (UID: \"83d1e990-e341-40cf-a1d3-a617cf26e83b\") " pod="openshift-marketplace/certified-operators-rrfmk"
Sep 29 11:12:55 crc kubenswrapper[4991]: I0929 11:12:55.705815 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6v9f\" (UniqueName: \"kubernetes.io/projected/83d1e990-e341-40cf-a1d3-a617cf26e83b-kube-api-access-g6v9f\") pod \"certified-operators-rrfmk\" (UID: \"83d1e990-e341-40cf-a1d3-a617cf26e83b\") " pod="openshift-marketplace/certified-operators-rrfmk"
Sep 29 11:12:55 crc kubenswrapper[4991]: I0929 11:12:55.905546 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rrfmk"
Sep 29 11:12:56 crc kubenswrapper[4991]: I0929 11:12:56.487888 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rrfmk"]
Sep 29 11:12:57 crc kubenswrapper[4991]: I0929 11:12:57.420670 4991 generic.go:334] "Generic (PLEG): container finished" podID="83d1e990-e341-40cf-a1d3-a617cf26e83b" containerID="1ca858a640dfae6dc79bea73e34bcd03e97427dd2804b63911db30a0e37791e0" exitCode=0
Sep 29 11:12:57 crc kubenswrapper[4991]: I0929 11:12:57.420883 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrfmk" event={"ID":"83d1e990-e341-40cf-a1d3-a617cf26e83b","Type":"ContainerDied","Data":"1ca858a640dfae6dc79bea73e34bcd03e97427dd2804b63911db30a0e37791e0"}
Sep 29 11:12:57 crc kubenswrapper[4991]: I0929 11:12:57.422156 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrfmk" event={"ID":"83d1e990-e341-40cf-a1d3-a617cf26e83b","Type":"ContainerStarted","Data":"2a4cc629184479e1c2db92fa297f57d40d2e21783714166a574d74caf7de1fb7"}
Sep 29 11:12:57 crc kubenswrapper[4991]: I0929 11:12:57.422511 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Sep 29 11:12:57 crc kubenswrapper[4991]: I0929 11:12:57.769787 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bp272"]
Sep 29 11:12:57 crc kubenswrapper[4991]: I0929 11:12:57.774163 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bp272"
Sep 29 11:12:57 crc kubenswrapper[4991]: I0929 11:12:57.784685 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bp272"]
Sep 29 11:12:57 crc kubenswrapper[4991]: I0929 11:12:57.939047 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n87fm\" (UniqueName: \"kubernetes.io/projected/e650d585-1b15-4797-86c9-bb2cddd88522-kube-api-access-n87fm\") pod \"redhat-marketplace-bp272\" (UID: \"e650d585-1b15-4797-86c9-bb2cddd88522\") " pod="openshift-marketplace/redhat-marketplace-bp272"
Sep 29 11:12:57 crc kubenswrapper[4991]: I0929 11:12:57.939353 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e650d585-1b15-4797-86c9-bb2cddd88522-catalog-content\") pod \"redhat-marketplace-bp272\" (UID: \"e650d585-1b15-4797-86c9-bb2cddd88522\") " pod="openshift-marketplace/redhat-marketplace-bp272"
Sep 29 11:12:57 crc kubenswrapper[4991]: I0929 11:12:57.939526 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e650d585-1b15-4797-86c9-bb2cddd88522-utilities\") pod \"redhat-marketplace-bp272\" (UID: \"e650d585-1b15-4797-86c9-bb2cddd88522\") " pod="openshift-marketplace/redhat-marketplace-bp272"
Sep 29 11:12:58 crc kubenswrapper[4991]: I0929 11:12:58.042236 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e650d585-1b15-4797-86c9-bb2cddd88522-catalog-content\") pod \"redhat-marketplace-bp272\" (UID: \"e650d585-1b15-4797-86c9-bb2cddd88522\") " pod="openshift-marketplace/redhat-marketplace-bp272"
Sep 29 11:12:58 crc kubenswrapper[4991]: I0929 11:12:58.042652 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e650d585-1b15-4797-86c9-bb2cddd88522-utilities\") pod \"redhat-marketplace-bp272\" (UID: \"e650d585-1b15-4797-86c9-bb2cddd88522\") " pod="openshift-marketplace/redhat-marketplace-bp272"
Sep 29 11:12:58 crc kubenswrapper[4991]: I0929 11:12:58.042925 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e650d585-1b15-4797-86c9-bb2cddd88522-catalog-content\") pod \"redhat-marketplace-bp272\" (UID: \"e650d585-1b15-4797-86c9-bb2cddd88522\") " pod="openshift-marketplace/redhat-marketplace-bp272"
Sep 29 11:12:58 crc kubenswrapper[4991]: I0929 11:12:58.043022 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n87fm\" (UniqueName: \"kubernetes.io/projected/e650d585-1b15-4797-86c9-bb2cddd88522-kube-api-access-n87fm\") pod \"redhat-marketplace-bp272\" (UID: \"e650d585-1b15-4797-86c9-bb2cddd88522\") " pod="openshift-marketplace/redhat-marketplace-bp272"
Sep 29 11:12:58 crc kubenswrapper[4991]: I0929 11:12:58.043160 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e650d585-1b15-4797-86c9-bb2cddd88522-utilities\") pod \"redhat-marketplace-bp272\" (UID: \"e650d585-1b15-4797-86c9-bb2cddd88522\") " pod="openshift-marketplace/redhat-marketplace-bp272"
Sep 29 11:12:58 crc kubenswrapper[4991]: I0929 11:12:58.067883 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n87fm\" (UniqueName: \"kubernetes.io/projected/e650d585-1b15-4797-86c9-bb2cddd88522-kube-api-access-n87fm\") pod \"redhat-marketplace-bp272\" (UID: \"e650d585-1b15-4797-86c9-bb2cddd88522\") " pod="openshift-marketplace/redhat-marketplace-bp272"
Sep 29 11:12:58 crc kubenswrapper[4991]: I0929 11:12:58.091979 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bp272"
Sep 29 11:12:58 crc kubenswrapper[4991]: I0929 11:12:58.641213 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bp272"]
Sep 29 11:12:59 crc kubenswrapper[4991]: I0929 11:12:59.465095 4991 generic.go:334] "Generic (PLEG): container finished" podID="e650d585-1b15-4797-86c9-bb2cddd88522" containerID="9b85b29556fb4ec525726c08caff33d04466e4afbb7c92c8f2229161c0fd01e7" exitCode=0
Sep 29 11:12:59 crc kubenswrapper[4991]: I0929 11:12:59.465390 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bp272" event={"ID":"e650d585-1b15-4797-86c9-bb2cddd88522","Type":"ContainerDied","Data":"9b85b29556fb4ec525726c08caff33d04466e4afbb7c92c8f2229161c0fd01e7"}
Sep 29 11:12:59 crc kubenswrapper[4991]: I0929 11:12:59.465416 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bp272" event={"ID":"e650d585-1b15-4797-86c9-bb2cddd88522","Type":"ContainerStarted","Data":"7475c2fa81e83caef75ffe978a3fe4611a04d7525cbbd12f6be88272f3abb5b8"}
Sep 29 11:12:59 crc kubenswrapper[4991]: I0929 11:12:59.470906 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrfmk" event={"ID":"83d1e990-e341-40cf-a1d3-a617cf26e83b","Type":"ContainerStarted","Data":"63595a8f5364daf5c30c5bb55a07aff10a35b285b33b7a15754673a1a389e4c0"}
Sep 29 11:13:00 crc kubenswrapper[4991]: I0929 11:13:00.482785 4991 generic.go:334] "Generic (PLEG): container finished" podID="83d1e990-e341-40cf-a1d3-a617cf26e83b" containerID="63595a8f5364daf5c30c5bb55a07aff10a35b285b33b7a15754673a1a389e4c0" exitCode=0
Sep 29 11:13:00 crc kubenswrapper[4991]: I0929 11:13:00.482893 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrfmk" event={"ID":"83d1e990-e341-40cf-a1d3-a617cf26e83b","Type":"ContainerDied","Data":"63595a8f5364daf5c30c5bb55a07aff10a35b285b33b7a15754673a1a389e4c0"}
Sep 29 11:13:00 crc kubenswrapper[4991]: I0929 11:13:00.486042 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bp272" event={"ID":"e650d585-1b15-4797-86c9-bb2cddd88522","Type":"ContainerStarted","Data":"4160d83ccb360d988eb3cb06714ba0ea2c35c261637a610249af68ad1ac8baaf"}
Sep 29 11:13:01 crc kubenswrapper[4991]: I0929 11:13:01.498935 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrfmk" event={"ID":"83d1e990-e341-40cf-a1d3-a617cf26e83b","Type":"ContainerStarted","Data":"3a9f1a3c2952b1b781d1ec34396057e52bb75a2068681d141d188cca0f09e3e6"}
Sep 29 11:13:01 crc kubenswrapper[4991]: I0929 11:13:01.502813 4991 generic.go:334] "Generic (PLEG): container finished" podID="e650d585-1b15-4797-86c9-bb2cddd88522" containerID="4160d83ccb360d988eb3cb06714ba0ea2c35c261637a610249af68ad1ac8baaf" exitCode=0
Sep 29 11:13:01 crc kubenswrapper[4991]: I0929 11:13:01.502858 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bp272" event={"ID":"e650d585-1b15-4797-86c9-bb2cddd88522","Type":"ContainerDied","Data":"4160d83ccb360d988eb3cb06714ba0ea2c35c261637a610249af68ad1ac8baaf"}
Sep 29 11:13:01 crc kubenswrapper[4991]: I0929 11:13:01.527427 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rrfmk" podStartSLOduration=2.942724305 podStartE2EDuration="6.527401237s" podCreationTimestamp="2025-09-29 11:12:55 +0000 UTC" firstStartedPulling="2025-09-29 11:12:57.422302556 +0000 UTC m=+5713.278230584" lastFinishedPulling="2025-09-29 11:13:01.006979488 +0000 UTC m=+5716.862907516" observedRunningTime="2025-09-29 11:13:01.520272321 +0000 UTC m=+5717.376200379" watchObservedRunningTime="2025-09-29 11:13:01.527401237 +0000 UTC m=+5717.383329305"
Sep 29 11:13:02 crc kubenswrapper[4991]: I0929 11:13:02.518276 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bp272" event={"ID":"e650d585-1b15-4797-86c9-bb2cddd88522","Type":"ContainerStarted","Data":"af57fb2020038fca3c8ef1099fd3d769af98ab6bf101dd78223b2b2431ee6751"}
Sep 29 11:13:02 crc kubenswrapper[4991]: I0929 11:13:02.539046 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bp272" podStartSLOduration=3.009771318 podStartE2EDuration="5.539025073s" podCreationTimestamp="2025-09-29 11:12:57 +0000 UTC" firstStartedPulling="2025-09-29 11:12:59.468574797 +0000 UTC m=+5715.324502825" lastFinishedPulling="2025-09-29 11:13:01.997828552 +0000 UTC m=+5717.853756580" observedRunningTime="2025-09-29 11:13:02.535768388 +0000 UTC m=+5718.391696426" watchObservedRunningTime="2025-09-29 11:13:02.539025073 +0000 UTC m=+5718.394953101"
Sep 29 11:13:05 crc kubenswrapper[4991]: I0929 11:13:05.906844 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rrfmk"
Sep 29 11:13:05 crc kubenswrapper[4991]: I0929 11:13:05.908133 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rrfmk"
Sep 29 11:13:06 crc kubenswrapper[4991]: I0929 11:13:06.297244 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rrfmk"
Sep 29 11:13:06 crc kubenswrapper[4991]: I0929 11:13:06.615270 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rrfmk"
Sep 29 11:13:07 crc kubenswrapper[4991]: I0929 11:13:07.760082 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rrfmk"]
Sep 29 11:13:07 crc kubenswrapper[4991]: I0929 11:13:07.946585 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 29 11:13:07 crc kubenswrapper[4991]: I0929 11:13:07.946637 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 29 11:13:08 crc kubenswrapper[4991]: I0929 11:13:08.092562 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bp272"
Sep 29 11:13:08 crc kubenswrapper[4991]: I0929 11:13:08.092633 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bp272"
Sep 29 11:13:08 crc kubenswrapper[4991]: I0929 11:13:08.584012 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rrfmk" podUID="83d1e990-e341-40cf-a1d3-a617cf26e83b" containerName="registry-server" containerID="cri-o://3a9f1a3c2952b1b781d1ec34396057e52bb75a2068681d141d188cca0f09e3e6" gracePeriod=2
Sep 29 11:13:08 crc kubenswrapper[4991]: I0929 11:13:08.593247 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bp272"
Sep 29 11:13:08 crc kubenswrapper[4991]: I0929 11:13:08.774356 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bp272"
Sep 29 11:13:09 crc kubenswrapper[4991]: I0929 11:13:09.596042 4991 generic.go:334] "Generic (PLEG): container finished" podID="83d1e990-e341-40cf-a1d3-a617cf26e83b" containerID="3a9f1a3c2952b1b781d1ec34396057e52bb75a2068681d141d188cca0f09e3e6" exitCode=0
Sep 29 11:13:09 crc kubenswrapper[4991]: I0929 11:13:09.596125 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrfmk" event={"ID":"83d1e990-e341-40cf-a1d3-a617cf26e83b","Type":"ContainerDied","Data":"3a9f1a3c2952b1b781d1ec34396057e52bb75a2068681d141d188cca0f09e3e6"}
Sep 29 11:13:09 crc kubenswrapper[4991]: I0929 11:13:09.930442 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rrfmk"
Sep 29 11:13:10 crc kubenswrapper[4991]: I0929 11:13:10.032996 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6v9f\" (UniqueName: \"kubernetes.io/projected/83d1e990-e341-40cf-a1d3-a617cf26e83b-kube-api-access-g6v9f\") pod \"83d1e990-e341-40cf-a1d3-a617cf26e83b\" (UID: \"83d1e990-e341-40cf-a1d3-a617cf26e83b\") "
Sep 29 11:13:10 crc kubenswrapper[4991]: I0929 11:13:10.033241 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d1e990-e341-40cf-a1d3-a617cf26e83b-catalog-content\") pod \"83d1e990-e341-40cf-a1d3-a617cf26e83b\" (UID: \"83d1e990-e341-40cf-a1d3-a617cf26e83b\") "
Sep 29 11:13:10 crc kubenswrapper[4991]: I0929 11:13:10.033388 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d1e990-e341-40cf-a1d3-a617cf26e83b-utilities\") pod \"83d1e990-e341-40cf-a1d3-a617cf26e83b\" (UID: \"83d1e990-e341-40cf-a1d3-a617cf26e83b\") "
Sep 29 11:13:10 crc kubenswrapper[4991]: I0929 11:13:10.034052 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83d1e990-e341-40cf-a1d3-a617cf26e83b-utilities" (OuterVolumeSpecName: "utilities") pod "83d1e990-e341-40cf-a1d3-a617cf26e83b" (UID: "83d1e990-e341-40cf-a1d3-a617cf26e83b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 11:13:10 crc kubenswrapper[4991]: I0929 11:13:10.057428 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83d1e990-e341-40cf-a1d3-a617cf26e83b-kube-api-access-g6v9f" (OuterVolumeSpecName: "kube-api-access-g6v9f") pod "83d1e990-e341-40cf-a1d3-a617cf26e83b" (UID: "83d1e990-e341-40cf-a1d3-a617cf26e83b"). InnerVolumeSpecName "kube-api-access-g6v9f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 11:13:10 crc kubenswrapper[4991]: I0929 11:13:10.089361 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83d1e990-e341-40cf-a1d3-a617cf26e83b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83d1e990-e341-40cf-a1d3-a617cf26e83b" (UID: "83d1e990-e341-40cf-a1d3-a617cf26e83b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 11:13:10 crc kubenswrapper[4991]: I0929 11:13:10.135815 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6v9f\" (UniqueName: \"kubernetes.io/projected/83d1e990-e341-40cf-a1d3-a617cf26e83b-kube-api-access-g6v9f\") on node \"crc\" DevicePath \"\""
Sep 29 11:13:10 crc kubenswrapper[4991]: I0929 11:13:10.135846 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d1e990-e341-40cf-a1d3-a617cf26e83b-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 29 11:13:10 crc kubenswrapper[4991]: I0929 11:13:10.135854 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d1e990-e341-40cf-a1d3-a617cf26e83b-utilities\") on node \"crc\" DevicePath \"\""
Sep 29 11:13:10 crc kubenswrapper[4991]: I0929 11:13:10.367753 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bp272"]
Sep 29 11:13:10 crc kubenswrapper[4991]: I0929 11:13:10.606805 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrfmk" event={"ID":"83d1e990-e341-40cf-a1d3-a617cf26e83b","Type":"ContainerDied","Data":"2a4cc629184479e1c2db92fa297f57d40d2e21783714166a574d74caf7de1fb7"}
Sep 29 11:13:10 crc kubenswrapper[4991]: I0929 11:13:10.607066 4991 scope.go:117] "RemoveContainer" containerID="3a9f1a3c2952b1b781d1ec34396057e52bb75a2068681d141d188cca0f09e3e6"
Sep 29 11:13:10 crc kubenswrapper[4991]: I0929 11:13:10.606939 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bp272" podUID="e650d585-1b15-4797-86c9-bb2cddd88522" containerName="registry-server" containerID="cri-o://af57fb2020038fca3c8ef1099fd3d769af98ab6bf101dd78223b2b2431ee6751" gracePeriod=2
Sep 29 11:13:10 crc kubenswrapper[4991]: I0929 11:13:10.606840 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rrfmk"
Sep 29 11:13:10 crc kubenswrapper[4991]: I0929 11:13:10.629642 4991 scope.go:117] "RemoveContainer" containerID="63595a8f5364daf5c30c5bb55a07aff10a35b285b33b7a15754673a1a389e4c0"
Sep 29 11:13:10 crc kubenswrapper[4991]: I0929 11:13:10.647459 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rrfmk"]
Sep 29 11:13:10 crc kubenswrapper[4991]: I0929 11:13:10.658097 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rrfmk"]
Sep 29 11:13:10 crc kubenswrapper[4991]: I0929 11:13:10.667342 4991 scope.go:117] "RemoveContainer" containerID="1ca858a640dfae6dc79bea73e34bcd03e97427dd2804b63911db30a0e37791e0"
Sep 29 11:13:10 crc kubenswrapper[4991]: I0929 11:13:10.940830 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83d1e990-e341-40cf-a1d3-a617cf26e83b" path="/var/lib/kubelet/pods/83d1e990-e341-40cf-a1d3-a617cf26e83b/volumes"
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.159116 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bp272"
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.261070 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n87fm\" (UniqueName: \"kubernetes.io/projected/e650d585-1b15-4797-86c9-bb2cddd88522-kube-api-access-n87fm\") pod \"e650d585-1b15-4797-86c9-bb2cddd88522\" (UID: \"e650d585-1b15-4797-86c9-bb2cddd88522\") "
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.261346 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e650d585-1b15-4797-86c9-bb2cddd88522-utilities\") pod \"e650d585-1b15-4797-86c9-bb2cddd88522\" (UID: \"e650d585-1b15-4797-86c9-bb2cddd88522\") "
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.261410 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e650d585-1b15-4797-86c9-bb2cddd88522-catalog-content\") pod \"e650d585-1b15-4797-86c9-bb2cddd88522\" (UID: \"e650d585-1b15-4797-86c9-bb2cddd88522\") "
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.263682 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e650d585-1b15-4797-86c9-bb2cddd88522-utilities" (OuterVolumeSpecName: "utilities") pod "e650d585-1b15-4797-86c9-bb2cddd88522" (UID: "e650d585-1b15-4797-86c9-bb2cddd88522"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.268837 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e650d585-1b15-4797-86c9-bb2cddd88522-kube-api-access-n87fm" (OuterVolumeSpecName: "kube-api-access-n87fm") pod "e650d585-1b15-4797-86c9-bb2cddd88522" (UID: "e650d585-1b15-4797-86c9-bb2cddd88522"). InnerVolumeSpecName "kube-api-access-n87fm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.277031 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e650d585-1b15-4797-86c9-bb2cddd88522-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e650d585-1b15-4797-86c9-bb2cddd88522" (UID: "e650d585-1b15-4797-86c9-bb2cddd88522"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.364553 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n87fm\" (UniqueName: \"kubernetes.io/projected/e650d585-1b15-4797-86c9-bb2cddd88522-kube-api-access-n87fm\") on node \"crc\" DevicePath \"\""
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.364622 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e650d585-1b15-4797-86c9-bb2cddd88522-utilities\") on node \"crc\" DevicePath \"\""
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.364636 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e650d585-1b15-4797-86c9-bb2cddd88522-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.627772 4991 generic.go:334] "Generic (PLEG): container finished" podID="e650d585-1b15-4797-86c9-bb2cddd88522" containerID="af57fb2020038fca3c8ef1099fd3d769af98ab6bf101dd78223b2b2431ee6751" exitCode=0
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.627816 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bp272" event={"ID":"e650d585-1b15-4797-86c9-bb2cddd88522","Type":"ContainerDied","Data":"af57fb2020038fca3c8ef1099fd3d769af98ab6bf101dd78223b2b2431ee6751"}
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.627843 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bp272" event={"ID":"e650d585-1b15-4797-86c9-bb2cddd88522","Type":"ContainerDied","Data":"7475c2fa81e83caef75ffe978a3fe4611a04d7525cbbd12f6be88272f3abb5b8"}
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.627864 4991 scope.go:117] "RemoveContainer" containerID="af57fb2020038fca3c8ef1099fd3d769af98ab6bf101dd78223b2b2431ee6751"
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.628036 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bp272"
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.657617 4991 scope.go:117] "RemoveContainer" containerID="4160d83ccb360d988eb3cb06714ba0ea2c35c261637a610249af68ad1ac8baaf"
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.673656 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bp272"]
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.683268 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bp272"]
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.711494 4991 scope.go:117] "RemoveContainer" containerID="9b85b29556fb4ec525726c08caff33d04466e4afbb7c92c8f2229161c0fd01e7"
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.750973 4991 scope.go:117] "RemoveContainer" containerID="af57fb2020038fca3c8ef1099fd3d769af98ab6bf101dd78223b2b2431ee6751"
Sep 29 11:13:11 crc kubenswrapper[4991]: E0929 11:13:11.751534 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af57fb2020038fca3c8ef1099fd3d769af98ab6bf101dd78223b2b2431ee6751\": container with ID starting with af57fb2020038fca3c8ef1099fd3d769af98ab6bf101dd78223b2b2431ee6751 not found: ID does not exist" containerID="af57fb2020038fca3c8ef1099fd3d769af98ab6bf101dd78223b2b2431ee6751"
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.751631 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af57fb2020038fca3c8ef1099fd3d769af98ab6bf101dd78223b2b2431ee6751"} err="failed to get container status \"af57fb2020038fca3c8ef1099fd3d769af98ab6bf101dd78223b2b2431ee6751\": rpc error: code = NotFound desc = could not find container \"af57fb2020038fca3c8ef1099fd3d769af98ab6bf101dd78223b2b2431ee6751\": container with ID starting with af57fb2020038fca3c8ef1099fd3d769af98ab6bf101dd78223b2b2431ee6751 not found: ID does not exist"
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.751693 4991 scope.go:117] "RemoveContainer" containerID="4160d83ccb360d988eb3cb06714ba0ea2c35c261637a610249af68ad1ac8baaf"
Sep 29 11:13:11 crc kubenswrapper[4991]: E0929 11:13:11.752199 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4160d83ccb360d988eb3cb06714ba0ea2c35c261637a610249af68ad1ac8baaf\": container with ID starting with 4160d83ccb360d988eb3cb06714ba0ea2c35c261637a610249af68ad1ac8baaf not found: ID does not exist" containerID="4160d83ccb360d988eb3cb06714ba0ea2c35c261637a610249af68ad1ac8baaf"
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.752245 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4160d83ccb360d988eb3cb06714ba0ea2c35c261637a610249af68ad1ac8baaf"} err="failed to get container status \"4160d83ccb360d988eb3cb06714ba0ea2c35c261637a610249af68ad1ac8baaf\": rpc error: code = NotFound desc = could not find container \"4160d83ccb360d988eb3cb06714ba0ea2c35c261637a610249af68ad1ac8baaf\": container with ID starting with 4160d83ccb360d988eb3cb06714ba0ea2c35c261637a610249af68ad1ac8baaf not found: ID does not exist"
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.752268 4991 scope.go:117] "RemoveContainer" containerID="9b85b29556fb4ec525726c08caff33d04466e4afbb7c92c8f2229161c0fd01e7"
Sep 29 11:13:11 crc kubenswrapper[4991]: E0929 11:13:11.752752 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b85b29556fb4ec525726c08caff33d04466e4afbb7c92c8f2229161c0fd01e7\": container with ID starting with 9b85b29556fb4ec525726c08caff33d04466e4afbb7c92c8f2229161c0fd01e7 not found: ID does not exist" containerID="9b85b29556fb4ec525726c08caff33d04466e4afbb7c92c8f2229161c0fd01e7"
Sep 29 11:13:11 crc kubenswrapper[4991]: I0929 11:13:11.752783 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b85b29556fb4ec525726c08caff33d04466e4afbb7c92c8f2229161c0fd01e7"} err="failed to get container status \"9b85b29556fb4ec525726c08caff33d04466e4afbb7c92c8f2229161c0fd01e7\": rpc error: code = NotFound desc = could not find container \"9b85b29556fb4ec525726c08caff33d04466e4afbb7c92c8f2229161c0fd01e7\": container with ID starting with 9b85b29556fb4ec525726c08caff33d04466e4afbb7c92c8f2229161c0fd01e7 not found: ID does not exist"
Sep 29 11:13:12 crc kubenswrapper[4991]: I0929 11:13:12.942067 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e650d585-1b15-4797-86c9-bb2cddd88522" path="/var/lib/kubelet/pods/e650d585-1b15-4797-86c9-bb2cddd88522/volumes"
Sep 29 11:13:37 crc kubenswrapper[4991]: I0929 11:13:37.947295 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 29 11:13:37 crc kubenswrapper[4991]: I0929 11:13:37.948157 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 29 11:14:07 crc kubenswrapper[4991]: I0929 11:14:07.946868 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 29 11:14:07 crc kubenswrapper[4991]: I0929 11:14:07.947442 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 29 11:14:07 crc kubenswrapper[4991]: I0929 11:14:07.947496 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk"
Sep 29 11:14:07 crc kubenswrapper[4991]: I0929 11:14:07.948520 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6899f1248f1ab543ca2eab35982dcdd38005a8b1036e82393572e638c1670d8e"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Sep 29 11:14:07 crc kubenswrapper[4991]: I0929 11:14:07.948583 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://6899f1248f1ab543ca2eab35982dcdd38005a8b1036e82393572e638c1670d8e" gracePeriod=600
Sep 29 11:14:08 crc kubenswrapper[4991]: I0929 11:14:08.254649 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="6899f1248f1ab543ca2eab35982dcdd38005a8b1036e82393572e638c1670d8e" exitCode=0
Sep 29 11:14:08 crc kubenswrapper[4991]: I0929 11:14:08.254711 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"6899f1248f1ab543ca2eab35982dcdd38005a8b1036e82393572e638c1670d8e"}
Sep 29 11:14:08 crc kubenswrapper[4991]: I0929 11:14:08.254748 4991 scope.go:117] "RemoveContainer" containerID="fbf44eb617fc2a09c836a26ff495566ec46c4fc5b5f8e8d13e6fea96f8f9dff5"
Sep 29 11:14:09 crc kubenswrapper[4991]: I0929 11:14:09.280665 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830"}
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.156029 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf"]
Sep 29 11:15:00 crc kubenswrapper[4991]: E0929 11:15:00.157121 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d1e990-e341-40cf-a1d3-a617cf26e83b" containerName="extract-utilities"
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.157140 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d1e990-e341-40cf-a1d3-a617cf26e83b" containerName="extract-utilities"
Sep 29 11:15:00 crc kubenswrapper[4991]: E0929 11:15:00.157157 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d1e990-e341-40cf-a1d3-a617cf26e83b" containerName="registry-server"
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.157166 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d1e990-e341-40cf-a1d3-a617cf26e83b" containerName="registry-server"
Sep 29 11:15:00 crc kubenswrapper[4991]: E0929 11:15:00.157177 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e650d585-1b15-4797-86c9-bb2cddd88522" containerName="extract-content"
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.157186 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="e650d585-1b15-4797-86c9-bb2cddd88522" containerName="extract-content"
Sep 29 11:15:00 crc kubenswrapper[4991]: E0929 11:15:00.157214 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d1e990-e341-40cf-a1d3-a617cf26e83b" containerName="extract-content"
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.157221 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d1e990-e341-40cf-a1d3-a617cf26e83b" containerName="extract-content"
Sep 29 11:15:00 crc kubenswrapper[4991]: E0929 11:15:00.157247 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e650d585-1b15-4797-86c9-bb2cddd88522" containerName="registry-server"
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.157256 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="e650d585-1b15-4797-86c9-bb2cddd88522" containerName="registry-server"
Sep 29 11:15:00 crc kubenswrapper[4991]: E0929 11:15:00.157285 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e650d585-1b15-4797-86c9-bb2cddd88522" containerName="extract-utilities"
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.157293 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="e650d585-1b15-4797-86c9-bb2cddd88522" containerName="extract-utilities"
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.157539 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="83d1e990-e341-40cf-a1d3-a617cf26e83b" containerName="registry-server"
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.157564 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="e650d585-1b15-4797-86c9-bb2cddd88522" containerName="registry-server"
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.162256 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf"
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.165128 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.166749 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.167047 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf"]
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.309848 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr5pc\" (UniqueName: \"kubernetes.io/projected/bd47f281-9c56-43d9-a327-776f4fc18e68-kube-api-access-fr5pc\") pod \"collect-profiles-29319075-zp9lf\" (UID: \"bd47f281-9c56-43d9-a327-776f4fc18e68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf"
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.309917 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd47f281-9c56-43d9-a327-776f4fc18e68-config-volume\") pod \"collect-profiles-29319075-zp9lf\" (UID: \"bd47f281-9c56-43d9-a327-776f4fc18e68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf"
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.310142 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd47f281-9c56-43d9-a327-776f4fc18e68-secret-volume\") pod \"collect-profiles-29319075-zp9lf\" (UID: \"bd47f281-9c56-43d9-a327-776f4fc18e68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf"
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.412635 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd47f281-9c56-43d9-a327-776f4fc18e68-secret-volume\") pod \"collect-profiles-29319075-zp9lf\" (UID: \"bd47f281-9c56-43d9-a327-776f4fc18e68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf"
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.412883 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr5pc\" (UniqueName: \"kubernetes.io/projected/bd47f281-9c56-43d9-a327-776f4fc18e68-kube-api-access-fr5pc\") pod \"collect-profiles-29319075-zp9lf\" (UID: \"bd47f281-9c56-43d9-a327-776f4fc18e68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf"
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.412924 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd47f281-9c56-43d9-a327-776f4fc18e68-config-volume\") pod \"collect-profiles-29319075-zp9lf\" (UID: \"bd47f281-9c56-43d9-a327-776f4fc18e68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf"
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.413823 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd47f281-9c56-43d9-a327-776f4fc18e68-config-volume\") pod \"collect-profiles-29319075-zp9lf\" (UID: \"bd47f281-9c56-43d9-a327-776f4fc18e68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf"
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.418749 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd47f281-9c56-43d9-a327-776f4fc18e68-secret-volume\") pod \"collect-profiles-29319075-zp9lf\" (UID: \"bd47f281-9c56-43d9-a327-776f4fc18e68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf"
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.428798 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr5pc\" (UniqueName: \"kubernetes.io/projected/bd47f281-9c56-43d9-a327-776f4fc18e68-kube-api-access-fr5pc\") pod \"collect-profiles-29319075-zp9lf\" (UID: \"bd47f281-9c56-43d9-a327-776f4fc18e68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf"
Sep 29 11:15:00 crc kubenswrapper[4991]: I0929 11:15:00.499391 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf"
Sep 29 11:15:01 crc kubenswrapper[4991]: I0929 11:15:01.038729 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf"]
Sep 29 11:15:01 crc kubenswrapper[4991]: I0929 11:15:01.850676 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf" event={"ID":"bd47f281-9c56-43d9-a327-776f4fc18e68","Type":"ContainerStarted","Data":"6a55b492252da2d5d89395bf7680fca6a2f8464f3194c7973d2d719a3822371d"}
Sep 29 11:15:01 crc kubenswrapper[4991]: I0929 11:15:01.850730 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf" event={"ID":"bd47f281-9c56-43d9-a327-776f4fc18e68","Type":"ContainerStarted","Data":"22159a1573733c037250718ce7f23060fbeab1c9cb1823f26ef887704935bc89"}
Sep 29 11:15:01 crc kubenswrapper[4991]: I0929 11:15:01.872075 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf" podStartSLOduration=1.872045317 podStartE2EDuration="1.872045317s" podCreationTimestamp="2025-09-29 11:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 11:15:01.8664219 +0000 UTC m=+5837.722349938" watchObservedRunningTime="2025-09-29 11:15:01.872045317 +0000 UTC m=+5837.727973345"
Sep 29 11:15:02 crc kubenswrapper[4991]: I0929 11:15:02.863544 4991 generic.go:334] "Generic (PLEG): container finished" podID="bd47f281-9c56-43d9-a327-776f4fc18e68" containerID="6a55b492252da2d5d89395bf7680fca6a2f8464f3194c7973d2d719a3822371d" exitCode=0
Sep 29 11:15:02 crc kubenswrapper[4991]: I0929 11:15:02.863615 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf" event={"ID":"bd47f281-9c56-43d9-a327-776f4fc18e68","Type":"ContainerDied","Data":"6a55b492252da2d5d89395bf7680fca6a2f8464f3194c7973d2d719a3822371d"}
Sep 29 11:15:04 crc kubenswrapper[4991]: I0929 11:15:04.278848 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf"
Sep 29 11:15:04 crc kubenswrapper[4991]: I0929 11:15:04.456646 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr5pc\" (UniqueName: \"kubernetes.io/projected/bd47f281-9c56-43d9-a327-776f4fc18e68-kube-api-access-fr5pc\") pod \"bd47f281-9c56-43d9-a327-776f4fc18e68\" (UID: \"bd47f281-9c56-43d9-a327-776f4fc18e68\") "
Sep 29 11:15:04 crc kubenswrapper[4991]: I0929 11:15:04.456971 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd47f281-9c56-43d9-a327-776f4fc18e68-config-volume\") pod \"bd47f281-9c56-43d9-a327-776f4fc18e68\" (UID: \"bd47f281-9c56-43d9-a327-776f4fc18e68\") "
Sep 29 11:15:04 crc kubenswrapper[4991]: I0929 11:15:04.457181 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd47f281-9c56-43d9-a327-776f4fc18e68-secret-volume\") pod \"bd47f281-9c56-43d9-a327-776f4fc18e68\" (UID: \"bd47f281-9c56-43d9-a327-776f4fc18e68\") "
Sep 29 11:15:04 crc kubenswrapper[4991]: I0929 11:15:04.458213 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd47f281-9c56-43d9-a327-776f4fc18e68-config-volume" (OuterVolumeSpecName: "config-volume") pod "bd47f281-9c56-43d9-a327-776f4fc18e68" (UID: "bd47f281-9c56-43d9-a327-776f4fc18e68"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 29 11:15:04 crc kubenswrapper[4991]: I0929 11:15:04.466149 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd47f281-9c56-43d9-a327-776f4fc18e68-kube-api-access-fr5pc" (OuterVolumeSpecName: "kube-api-access-fr5pc") pod "bd47f281-9c56-43d9-a327-776f4fc18e68" (UID: "bd47f281-9c56-43d9-a327-776f4fc18e68"). InnerVolumeSpecName "kube-api-access-fr5pc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 11:15:04 crc kubenswrapper[4991]: I0929 11:15:04.466900 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd47f281-9c56-43d9-a327-776f4fc18e68-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bd47f281-9c56-43d9-a327-776f4fc18e68" (UID: "bd47f281-9c56-43d9-a327-776f4fc18e68"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 29 11:15:04 crc kubenswrapper[4991]: I0929 11:15:04.561420 4991 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd47f281-9c56-43d9-a327-776f4fc18e68-secret-volume\") on node \"crc\" DevicePath \"\""
Sep 29 11:15:04 crc kubenswrapper[4991]: I0929 11:15:04.561462 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr5pc\" (UniqueName: \"kubernetes.io/projected/bd47f281-9c56-43d9-a327-776f4fc18e68-kube-api-access-fr5pc\") on node \"crc\" DevicePath \"\""
Sep 29 11:15:04 crc kubenswrapper[4991]: I0929 11:15:04.561475 4991 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd47f281-9c56-43d9-a327-776f4fc18e68-config-volume\") on node \"crc\" DevicePath \"\""
Sep 29 11:15:04 crc kubenswrapper[4991]: I0929 11:15:04.891392 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf" event={"ID":"bd47f281-9c56-43d9-a327-776f4fc18e68","Type":"ContainerDied","Data":"22159a1573733c037250718ce7f23060fbeab1c9cb1823f26ef887704935bc89"}
Sep 29 11:15:04 crc kubenswrapper[4991]: I0929 11:15:04.891703 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22159a1573733c037250718ce7f23060fbeab1c9cb1823f26ef887704935bc89"
Sep 29 11:15:04 crc kubenswrapper[4991]: I0929 11:15:04.891444 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf"
Sep 29 11:15:04 crc kubenswrapper[4991]: I0929 11:15:04.943816 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319030-fnpwt"]
Sep 29 11:15:04 crc kubenswrapper[4991]: I0929 11:15:04.947008 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319030-fnpwt"]
Sep 29 11:15:06 crc kubenswrapper[4991]: I0929 11:15:06.940122 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef3c413b-e822-46d6-b4a9-768a911355a3" path="/var/lib/kubelet/pods/ef3c413b-e822-46d6-b4a9-768a911355a3/volumes"
Sep 29 11:15:25 crc kubenswrapper[4991]: I0929 11:15:25.548485 4991 scope.go:117] "RemoveContainer" containerID="c01d29bf2241ffff842552fdd4188722957e4b3d6bcb244d2684fb0b3ff91bdd"
Sep 29 11:15:35 crc kubenswrapper[4991]: I0929 11:15:35.724249 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zpwnv"]
Sep 29 11:15:35 crc kubenswrapper[4991]: E0929 11:15:35.725610 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd47f281-9c56-43d9-a327-776f4fc18e68" containerName="collect-profiles"
Sep 29 11:15:35 crc kubenswrapper[4991]: I0929 11:15:35.725631 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd47f281-9c56-43d9-a327-776f4fc18e68" containerName="collect-profiles"
Sep 29 11:15:35 crc kubenswrapper[4991]: I0929 11:15:35.725912 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd47f281-9c56-43d9-a327-776f4fc18e68" containerName="collect-profiles"
Sep 29 11:15:35 crc kubenswrapper[4991]: I0929 11:15:35.728369 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zpwnv"
Sep 29 11:15:35 crc kubenswrapper[4991]: I0929 11:15:35.740719 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zpwnv"]
Sep 29 11:15:35 crc kubenswrapper[4991]: I0929 11:15:35.820790 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04182bb3-9548-4c2f-b1fb-d66721ea2f0d-utilities\") pod \"redhat-operators-zpwnv\" (UID: \"04182bb3-9548-4c2f-b1fb-d66721ea2f0d\") " pod="openshift-marketplace/redhat-operators-zpwnv"
Sep 29 11:15:35 crc kubenswrapper[4991]: I0929 11:15:35.820848 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04182bb3-9548-4c2f-b1fb-d66721ea2f0d-catalog-content\") pod \"redhat-operators-zpwnv\" (UID: \"04182bb3-9548-4c2f-b1fb-d66721ea2f0d\") " pod="openshift-marketplace/redhat-operators-zpwnv"
Sep 29 11:15:35 crc kubenswrapper[4991]: I0929 11:15:35.820965 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2qcv\" (UniqueName: \"kubernetes.io/projected/04182bb3-9548-4c2f-b1fb-d66721ea2f0d-kube-api-access-h2qcv\") pod \"redhat-operators-zpwnv\" (UID: \"04182bb3-9548-4c2f-b1fb-d66721ea2f0d\") " pod="openshift-marketplace/redhat-operators-zpwnv"
Sep 29 11:15:35 crc kubenswrapper[4991]: I0929 11:15:35.923655 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04182bb3-9548-4c2f-b1fb-d66721ea2f0d-utilities\") pod \"redhat-operators-zpwnv\" (UID: \"04182bb3-9548-4c2f-b1fb-d66721ea2f0d\") " pod="openshift-marketplace/redhat-operators-zpwnv"
Sep 29 11:15:35 crc kubenswrapper[4991]: I0929 11:15:35.923700 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04182bb3-9548-4c2f-b1fb-d66721ea2f0d-catalog-content\") pod \"redhat-operators-zpwnv\" (UID: \"04182bb3-9548-4c2f-b1fb-d66721ea2f0d\") " pod="openshift-marketplace/redhat-operators-zpwnv"
Sep 29 11:15:35 crc kubenswrapper[4991]: I0929 11:15:35.923777 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2qcv\" (UniqueName: \"kubernetes.io/projected/04182bb3-9548-4c2f-b1fb-d66721ea2f0d-kube-api-access-h2qcv\") pod \"redhat-operators-zpwnv\" (UID: \"04182bb3-9548-4c2f-b1fb-d66721ea2f0d\") " pod="openshift-marketplace/redhat-operators-zpwnv"
Sep 29 11:15:35 crc kubenswrapper[4991]: I0929 11:15:35.924491 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04182bb3-9548-4c2f-b1fb-d66721ea2f0d-catalog-content\") pod \"redhat-operators-zpwnv\" (UID: \"04182bb3-9548-4c2f-b1fb-d66721ea2f0d\") " pod="openshift-marketplace/redhat-operators-zpwnv"
Sep 29 11:15:35 crc kubenswrapper[4991]: I0929 11:15:35.924552 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04182bb3-9548-4c2f-b1fb-d66721ea2f0d-utilities\") pod \"redhat-operators-zpwnv\" (UID: \"04182bb3-9548-4c2f-b1fb-d66721ea2f0d\") " pod="openshift-marketplace/redhat-operators-zpwnv"
Sep 29 11:15:35 crc kubenswrapper[4991]: I0929 11:15:35.947186 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2qcv\" (UniqueName: \"kubernetes.io/projected/04182bb3-9548-4c2f-b1fb-d66721ea2f0d-kube-api-access-h2qcv\") pod \"redhat-operators-zpwnv\" (UID: \"04182bb3-9548-4c2f-b1fb-d66721ea2f0d\") " pod="openshift-marketplace/redhat-operators-zpwnv"
Sep 29 11:15:36 crc kubenswrapper[4991]: I0929 11:15:36.052271 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zpwnv"
Sep 29 11:15:36 crc kubenswrapper[4991]: I0929 11:15:36.528313 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zpwnv"]
Sep 29 11:15:37 crc kubenswrapper[4991]: I0929 11:15:37.230284 4991 generic.go:334] "Generic (PLEG): container finished" podID="04182bb3-9548-4c2f-b1fb-d66721ea2f0d" containerID="cffb327d12eece7bbbad74187c25fbf2456b45c6ecdd1940af573949c29f0128" exitCode=0
Sep 29 11:15:37 crc kubenswrapper[4991]: I0929 11:15:37.230354 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwnv" event={"ID":"04182bb3-9548-4c2f-b1fb-d66721ea2f0d","Type":"ContainerDied","Data":"cffb327d12eece7bbbad74187c25fbf2456b45c6ecdd1940af573949c29f0128"}
Sep 29 11:15:37 crc kubenswrapper[4991]: I0929 11:15:37.230621 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwnv" event={"ID":"04182bb3-9548-4c2f-b1fb-d66721ea2f0d","Type":"ContainerStarted","Data":"5ae0acc7a4955bad778e3dac9144fc1b58e92cd86ada984f99e605fa51697d61"}
Sep 29 11:15:38 crc kubenswrapper[4991]: I0929 11:15:38.241867 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwnv" event={"ID":"04182bb3-9548-4c2f-b1fb-d66721ea2f0d","Type":"ContainerStarted","Data":"1f1860a49d2557ee5bf216705e8854b4b4238204446a9652d2a0b03ca83b9981"}
Sep 29 11:15:39 crc kubenswrapper[4991]: I0929 11:15:39.263279 4991 generic.go:334] "Generic (PLEG): container finished" podID="04182bb3-9548-4c2f-b1fb-d66721ea2f0d" containerID="1f1860a49d2557ee5bf216705e8854b4b4238204446a9652d2a0b03ca83b9981" exitCode=0
Sep 29 11:15:39 crc kubenswrapper[4991]: I0929 11:15:39.263355 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwnv" event={"ID":"04182bb3-9548-4c2f-b1fb-d66721ea2f0d","Type":"ContainerDied","Data":"1f1860a49d2557ee5bf216705e8854b4b4238204446a9652d2a0b03ca83b9981"}
Sep 29 11:15:40 crc kubenswrapper[4991]: I0929 11:15:40.341219 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwnv" event={"ID":"04182bb3-9548-4c2f-b1fb-d66721ea2f0d","Type":"ContainerStarted","Data":"02f3ca20e7f7d033d15decc659ee8c7e7634c389384e0a8b8fe61dd292912f1d"}
Sep 29 11:15:40 crc kubenswrapper[4991]: I0929 11:15:40.392787 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zpwnv" podStartSLOduration=2.830172308 podStartE2EDuration="5.392761543s" podCreationTimestamp="2025-09-29 11:15:35 +0000 UTC" firstStartedPulling="2025-09-29 11:15:37.232404233 +0000 UTC m=+5873.088332261" lastFinishedPulling="2025-09-29 11:15:39.794993468 +0000 UTC m=+5875.650921496" observedRunningTime="2025-09-29 11:15:40.384467657 +0000 UTC m=+5876.240395685" watchObservedRunningTime="2025-09-29 11:15:40.392761543 +0000 UTC m=+5876.248689571"
Sep 29 11:15:46 crc kubenswrapper[4991]: I0929 11:15:46.052555 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zpwnv"
Sep 29 11:15:46 crc kubenswrapper[4991]: I0929 11:15:46.053101 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zpwnv" Sep 29 11:15:47 crc kubenswrapper[4991]: I0929 11:15:47.141800 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zpwnv" podUID="04182bb3-9548-4c2f-b1fb-d66721ea2f0d" containerName="registry-server" probeResult="failure" output=< Sep 29 11:15:47 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s Sep 29 11:15:47 crc kubenswrapper[4991]: > Sep 29 11:15:56 crc kubenswrapper[4991]: I0929 11:15:56.106539 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zpwnv" Sep 29 11:15:56 crc kubenswrapper[4991]: I0929 11:15:56.155654 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zpwnv" Sep 29 11:15:56 crc kubenswrapper[4991]: I0929 11:15:56.344167 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zpwnv"] Sep 29 11:15:57 crc kubenswrapper[4991]: I0929 11:15:57.515369 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zpwnv" podUID="04182bb3-9548-4c2f-b1fb-d66721ea2f0d" containerName="registry-server" containerID="cri-o://02f3ca20e7f7d033d15decc659ee8c7e7634c389384e0a8b8fe61dd292912f1d" gracePeriod=2 Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.009717 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zpwnv" Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.146110 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04182bb3-9548-4c2f-b1fb-d66721ea2f0d-utilities\") pod \"04182bb3-9548-4c2f-b1fb-d66721ea2f0d\" (UID: \"04182bb3-9548-4c2f-b1fb-d66721ea2f0d\") " Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.146301 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2qcv\" (UniqueName: \"kubernetes.io/projected/04182bb3-9548-4c2f-b1fb-d66721ea2f0d-kube-api-access-h2qcv\") pod \"04182bb3-9548-4c2f-b1fb-d66721ea2f0d\" (UID: \"04182bb3-9548-4c2f-b1fb-d66721ea2f0d\") " Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.146516 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04182bb3-9548-4c2f-b1fb-d66721ea2f0d-catalog-content\") pod \"04182bb3-9548-4c2f-b1fb-d66721ea2f0d\" (UID: \"04182bb3-9548-4c2f-b1fb-d66721ea2f0d\") " Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.147461 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04182bb3-9548-4c2f-b1fb-d66721ea2f0d-utilities" (OuterVolumeSpecName: "utilities") pod "04182bb3-9548-4c2f-b1fb-d66721ea2f0d" (UID: "04182bb3-9548-4c2f-b1fb-d66721ea2f0d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.166689 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04182bb3-9548-4c2f-b1fb-d66721ea2f0d-kube-api-access-h2qcv" (OuterVolumeSpecName: "kube-api-access-h2qcv") pod "04182bb3-9548-4c2f-b1fb-d66721ea2f0d" (UID: "04182bb3-9548-4c2f-b1fb-d66721ea2f0d"). InnerVolumeSpecName "kube-api-access-h2qcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.249758 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04182bb3-9548-4c2f-b1fb-d66721ea2f0d-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.249808 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2qcv\" (UniqueName: \"kubernetes.io/projected/04182bb3-9548-4c2f-b1fb-d66721ea2f0d-kube-api-access-h2qcv\") on node \"crc\" DevicePath \"\"" Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.253256 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04182bb3-9548-4c2f-b1fb-d66721ea2f0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04182bb3-9548-4c2f-b1fb-d66721ea2f0d" (UID: "04182bb3-9548-4c2f-b1fb-d66721ea2f0d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.352071 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04182bb3-9548-4c2f-b1fb-d66721ea2f0d-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.528493 4991 generic.go:334] "Generic (PLEG): container finished" podID="04182bb3-9548-4c2f-b1fb-d66721ea2f0d" containerID="02f3ca20e7f7d033d15decc659ee8c7e7634c389384e0a8b8fe61dd292912f1d" exitCode=0 Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.528555 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwnv" event={"ID":"04182bb3-9548-4c2f-b1fb-d66721ea2f0d","Type":"ContainerDied","Data":"02f3ca20e7f7d033d15decc659ee8c7e7634c389384e0a8b8fe61dd292912f1d"} Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.528588 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwnv" event={"ID":"04182bb3-9548-4c2f-b1fb-d66721ea2f0d","Type":"ContainerDied","Data":"5ae0acc7a4955bad778e3dac9144fc1b58e92cd86ada984f99e605fa51697d61"} Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.528607 4991 scope.go:117] "RemoveContainer" containerID="02f3ca20e7f7d033d15decc659ee8c7e7634c389384e0a8b8fe61dd292912f1d" Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.528721 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zpwnv" Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.567778 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zpwnv"] Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.569709 4991 scope.go:117] "RemoveContainer" containerID="1f1860a49d2557ee5bf216705e8854b4b4238204446a9652d2a0b03ca83b9981" Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.579236 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zpwnv"] Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.601806 4991 scope.go:117] "RemoveContainer" containerID="cffb327d12eece7bbbad74187c25fbf2456b45c6ecdd1940af573949c29f0128" Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.664025 4991 scope.go:117] "RemoveContainer" containerID="02f3ca20e7f7d033d15decc659ee8c7e7634c389384e0a8b8fe61dd292912f1d" Sep 29 11:15:58 crc kubenswrapper[4991]: E0929 11:15:58.665265 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02f3ca20e7f7d033d15decc659ee8c7e7634c389384e0a8b8fe61dd292912f1d\": container with ID starting with 02f3ca20e7f7d033d15decc659ee8c7e7634c389384e0a8b8fe61dd292912f1d not found: ID does not exist" containerID="02f3ca20e7f7d033d15decc659ee8c7e7634c389384e0a8b8fe61dd292912f1d" Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.665379 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02f3ca20e7f7d033d15decc659ee8c7e7634c389384e0a8b8fe61dd292912f1d"} err="failed to get container status \"02f3ca20e7f7d033d15decc659ee8c7e7634c389384e0a8b8fe61dd292912f1d\": rpc error: code = NotFound desc = could not find container \"02f3ca20e7f7d033d15decc659ee8c7e7634c389384e0a8b8fe61dd292912f1d\": container with ID starting with 02f3ca20e7f7d033d15decc659ee8c7e7634c389384e0a8b8fe61dd292912f1d not found: ID does not exist" Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.665495 4991 scope.go:117] "RemoveContainer" containerID="1f1860a49d2557ee5bf216705e8854b4b4238204446a9652d2a0b03ca83b9981" Sep 29 11:15:58 crc kubenswrapper[4991]: E0929 11:15:58.666012 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f1860a49d2557ee5bf216705e8854b4b4238204446a9652d2a0b03ca83b9981\": container with ID starting with 1f1860a49d2557ee5bf216705e8854b4b4238204446a9652d2a0b03ca83b9981 not found: ID does not exist" containerID="1f1860a49d2557ee5bf216705e8854b4b4238204446a9652d2a0b03ca83b9981" Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.666120 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f1860a49d2557ee5bf216705e8854b4b4238204446a9652d2a0b03ca83b9981"} err="failed to get container status \"1f1860a49d2557ee5bf216705e8854b4b4238204446a9652d2a0b03ca83b9981\": rpc error: code = NotFound desc = could not find container \"1f1860a49d2557ee5bf216705e8854b4b4238204446a9652d2a0b03ca83b9981\": container with ID starting with 1f1860a49d2557ee5bf216705e8854b4b4238204446a9652d2a0b03ca83b9981 not found: ID does not exist" Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.666210 4991 scope.go:117] "RemoveContainer" containerID="cffb327d12eece7bbbad74187c25fbf2456b45c6ecdd1940af573949c29f0128" Sep 29 11:15:58 crc kubenswrapper[4991]: E0929 11:15:58.666581 4991 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"cffb327d12eece7bbbad74187c25fbf2456b45c6ecdd1940af573949c29f0128\": container with ID starting with cffb327d12eece7bbbad74187c25fbf2456b45c6ecdd1940af573949c29f0128 not found: ID does not exist" containerID="cffb327d12eece7bbbad74187c25fbf2456b45c6ecdd1940af573949c29f0128" Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.666688 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cffb327d12eece7bbbad74187c25fbf2456b45c6ecdd1940af573949c29f0128"} err="failed to get container status \"cffb327d12eece7bbbad74187c25fbf2456b45c6ecdd1940af573949c29f0128\": rpc error: code = NotFound desc = could not find container \"cffb327d12eece7bbbad74187c25fbf2456b45c6ecdd1940af573949c29f0128\": container with ID starting with cffb327d12eece7bbbad74187c25fbf2456b45c6ecdd1940af573949c29f0128 not found: ID does not exist" Sep 29 11:15:58 crc kubenswrapper[4991]: I0929 11:15:58.959065 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04182bb3-9548-4c2f-b1fb-d66721ea2f0d" path="/var/lib/kubelet/pods/04182bb3-9548-4c2f-b1fb-d66721ea2f0d/volumes" Sep 29 11:16:24 crc kubenswrapper[4991]: I0929 11:16:24.307359 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cdghv"] Sep 29 11:16:24 crc kubenswrapper[4991]: E0929 11:16:24.308356 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04182bb3-9548-4c2f-b1fb-d66721ea2f0d" containerName="registry-server" Sep 29 11:16:24 crc kubenswrapper[4991]: I0929 11:16:24.308370 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="04182bb3-9548-4c2f-b1fb-d66721ea2f0d" containerName="registry-server" Sep 29 11:16:24 crc kubenswrapper[4991]: E0929 11:16:24.308387 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04182bb3-9548-4c2f-b1fb-d66721ea2f0d" containerName="extract-content" Sep 29 11:16:24 crc kubenswrapper[4991]: I0929 11:16:24.308394 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="04182bb3-9548-4c2f-b1fb-d66721ea2f0d" containerName="extract-content" Sep 29 11:16:24 crc kubenswrapper[4991]: E0929 11:16:24.308440 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04182bb3-9548-4c2f-b1fb-d66721ea2f0d" containerName="extract-utilities" Sep 29 11:16:24 crc kubenswrapper[4991]: I0929 11:16:24.308448 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="04182bb3-9548-4c2f-b1fb-d66721ea2f0d" containerName="extract-utilities" Sep 29 11:16:24 crc kubenswrapper[4991]: I0929 11:16:24.308702 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="04182bb3-9548-4c2f-b1fb-d66721ea2f0d" containerName="registry-server" Sep 29 11:16:24 crc kubenswrapper[4991]: I0929 11:16:24.310496 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cdghv" Sep 29 11:16:24 crc kubenswrapper[4991]: I0929 11:16:24.329082 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cdghv"] Sep 29 11:16:24 crc kubenswrapper[4991]: I0929 11:16:24.390464 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e330529-38b1-4916-aceb-dd93d251eb36-catalog-content\") pod \"community-operators-cdghv\" (UID: \"6e330529-38b1-4916-aceb-dd93d251eb36\") " pod="openshift-marketplace/community-operators-cdghv" Sep 29 11:16:24 crc kubenswrapper[4991]: I0929 11:16:24.390811 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv6wl\" (UniqueName: \"kubernetes.io/projected/6e330529-38b1-4916-aceb-dd93d251eb36-kube-api-access-fv6wl\") pod \"community-operators-cdghv\" (UID: \"6e330529-38b1-4916-aceb-dd93d251eb36\") " pod="openshift-marketplace/community-operators-cdghv" Sep 29 11:16:24 crc kubenswrapper[4991]: I0929 11:16:24.390912 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e330529-38b1-4916-aceb-dd93d251eb36-utilities\") pod \"community-operators-cdghv\" (UID: \"6e330529-38b1-4916-aceb-dd93d251eb36\") " pod="openshift-marketplace/community-operators-cdghv" Sep 29 11:16:24 crc kubenswrapper[4991]: I0929 11:16:24.493998 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e330529-38b1-4916-aceb-dd93d251eb36-catalog-content\") pod \"community-operators-cdghv\" (UID: \"6e330529-38b1-4916-aceb-dd93d251eb36\") " pod="openshift-marketplace/community-operators-cdghv" Sep 29 11:16:24 crc kubenswrapper[4991]: I0929 11:16:24.494116 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv6wl\" (UniqueName: \"kubernetes.io/projected/6e330529-38b1-4916-aceb-dd93d251eb36-kube-api-access-fv6wl\") pod \"community-operators-cdghv\" (UID: \"6e330529-38b1-4916-aceb-dd93d251eb36\") " pod="openshift-marketplace/community-operators-cdghv" Sep 29 11:16:24 crc kubenswrapper[4991]: I0929 11:16:24.494261 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e330529-38b1-4916-aceb-dd93d251eb36-utilities\") pod \"community-operators-cdghv\" (UID: \"6e330529-38b1-4916-aceb-dd93d251eb36\") " pod="openshift-marketplace/community-operators-cdghv" Sep 29 11:16:24 crc kubenswrapper[4991]: I0929 11:16:24.494798 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e330529-38b1-4916-aceb-dd93d251eb36-catalog-content\") pod \"community-operators-cdghv\" (UID: \"6e330529-38b1-4916-aceb-dd93d251eb36\") " pod="openshift-marketplace/community-operators-cdghv" Sep 29 11:16:24 crc kubenswrapper[4991]: I0929 11:16:24.494804 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e330529-38b1-4916-aceb-dd93d251eb36-utilities\") pod \"community-operators-cdghv\" (UID: \"6e330529-38b1-4916-aceb-dd93d251eb36\") " pod="openshift-marketplace/community-operators-cdghv" Sep 29 11:16:24 crc kubenswrapper[4991]: I0929 11:16:24.516377 4991 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fv6wl\" (UniqueName: \"kubernetes.io/projected/6e330529-38b1-4916-aceb-dd93d251eb36-kube-api-access-fv6wl\") pod \"community-operators-cdghv\" (UID: \"6e330529-38b1-4916-aceb-dd93d251eb36\") " pod="openshift-marketplace/community-operators-cdghv" Sep 29 11:16:24 crc kubenswrapper[4991]: I0929 11:16:24.640886 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cdghv" Sep 29 11:16:25 crc kubenswrapper[4991]: I0929 11:16:25.206635 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cdghv"] Sep 29 11:16:25 crc kubenswrapper[4991]: I0929 11:16:25.810929 4991 generic.go:334] "Generic (PLEG): container finished" podID="6e330529-38b1-4916-aceb-dd93d251eb36" containerID="e31bf099af348728ad81b8bd412efa741ba6c29eeeff6d4f3c7e470edb7fb08c" exitCode=0 Sep 29 11:16:25 crc kubenswrapper[4991]: I0929 11:16:25.811004 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdghv" event={"ID":"6e330529-38b1-4916-aceb-dd93d251eb36","Type":"ContainerDied","Data":"e31bf099af348728ad81b8bd412efa741ba6c29eeeff6d4f3c7e470edb7fb08c"} Sep 29 11:16:25 crc kubenswrapper[4991]: I0929 11:16:25.811216 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdghv" event={"ID":"6e330529-38b1-4916-aceb-dd93d251eb36","Type":"ContainerStarted","Data":"cd1e982895f4e5350c91bc0a39c170c9f28a42df49da06b154610ccefb1eaf92"} Sep 29 11:16:27 crc kubenswrapper[4991]: I0929 11:16:27.837698 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdghv" event={"ID":"6e330529-38b1-4916-aceb-dd93d251eb36","Type":"ContainerStarted","Data":"0ad01e9bcde083736590b5f44c1ae8fc78b63bdb7b831a7801a45f61d18fd0ec"} Sep 29 11:16:28 crc kubenswrapper[4991]: I0929 11:16:28.856922 4991 generic.go:334] "Generic (PLEG): container finished" podID="6e330529-38b1-4916-aceb-dd93d251eb36" containerID="0ad01e9bcde083736590b5f44c1ae8fc78b63bdb7b831a7801a45f61d18fd0ec" exitCode=0 Sep 29 11:16:28 crc kubenswrapper[4991]: I0929 11:16:28.857020 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdghv" event={"ID":"6e330529-38b1-4916-aceb-dd93d251eb36","Type":"ContainerDied","Data":"0ad01e9bcde083736590b5f44c1ae8fc78b63bdb7b831a7801a45f61d18fd0ec"} Sep 29 11:16:29 crc kubenswrapper[4991]: I0929 11:16:29.872873 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdghv" event={"ID":"6e330529-38b1-4916-aceb-dd93d251eb36","Type":"ContainerStarted","Data":"3b06cca2656062209b5b5122ca2f7d5edd745adf6737a7085f70852819b190dc"} Sep 29 11:16:29 crc kubenswrapper[4991]: I0929 11:16:29.898778 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cdghv" podStartSLOduration=2.480344449 podStartE2EDuration="5.898753736s" podCreationTimestamp="2025-09-29 11:16:24 +0000 UTC" firstStartedPulling="2025-09-29 11:16:25.813732828 +0000 UTC m=+5921.669660856" lastFinishedPulling="2025-09-29 11:16:29.232142115 +0000 UTC m=+5925.088070143" observedRunningTime="2025-09-29 11:16:29.89085295 +0000 UTC m=+5925.746780978" watchObservedRunningTime="2025-09-29 11:16:29.898753736 +0000 UTC m=+5925.754681764" Sep 29 11:16:34 crc kubenswrapper[4991]: I0929 11:16:34.641421 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-cdghv" Sep 29 11:16:34 crc kubenswrapper[4991]: I0929 11:16:34.641979 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cdghv" Sep 29 11:16:34 crc kubenswrapper[4991]: I0929 11:16:34.687099 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cdghv" Sep 29 11:16:35 crc kubenswrapper[4991]: I0929 11:16:35.040167 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cdghv" Sep 29 11:16:35 crc kubenswrapper[4991]: I0929 11:16:35.095928 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cdghv"] Sep 29 11:16:36 crc kubenswrapper[4991]: I0929 11:16:36.949359 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cdghv" podUID="6e330529-38b1-4916-aceb-dd93d251eb36" containerName="registry-server" containerID="cri-o://3b06cca2656062209b5b5122ca2f7d5edd745adf6737a7085f70852819b190dc" gracePeriod=2 Sep 29 11:16:37 crc kubenswrapper[4991]: I0929 11:16:37.491252 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cdghv" Sep 29 11:16:37 crc kubenswrapper[4991]: I0929 11:16:37.622511 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv6wl\" (UniqueName: \"kubernetes.io/projected/6e330529-38b1-4916-aceb-dd93d251eb36-kube-api-access-fv6wl\") pod \"6e330529-38b1-4916-aceb-dd93d251eb36\" (UID: \"6e330529-38b1-4916-aceb-dd93d251eb36\") " Sep 29 11:16:37 crc kubenswrapper[4991]: I0929 11:16:37.622651 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e330529-38b1-4916-aceb-dd93d251eb36-utilities\") pod \"6e330529-38b1-4916-aceb-dd93d251eb36\" (UID: \"6e330529-38b1-4916-aceb-dd93d251eb36\") " Sep 29 11:16:37 crc kubenswrapper[4991]: I0929 11:16:37.622764 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e330529-38b1-4916-aceb-dd93d251eb36-catalog-content\") pod \"6e330529-38b1-4916-aceb-dd93d251eb36\" (UID: \"6e330529-38b1-4916-aceb-dd93d251eb36\") " Sep 29 11:16:37 crc kubenswrapper[4991]: I0929 11:16:37.623881 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e330529-38b1-4916-aceb-dd93d251eb36-utilities" (OuterVolumeSpecName: "utilities") pod "6e330529-38b1-4916-aceb-dd93d251eb36" (UID: "6e330529-38b1-4916-aceb-dd93d251eb36"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:16:37 crc kubenswrapper[4991]: I0929 11:16:37.635399 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e330529-38b1-4916-aceb-dd93d251eb36-kube-api-access-fv6wl" (OuterVolumeSpecName: "kube-api-access-fv6wl") pod "6e330529-38b1-4916-aceb-dd93d251eb36" (UID: "6e330529-38b1-4916-aceb-dd93d251eb36"). InnerVolumeSpecName "kube-api-access-fv6wl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:16:37 crc kubenswrapper[4991]: I0929 11:16:37.726221 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv6wl\" (UniqueName: \"kubernetes.io/projected/6e330529-38b1-4916-aceb-dd93d251eb36-kube-api-access-fv6wl\") on node \"crc\" DevicePath \"\"" Sep 29 11:16:37 crc kubenswrapper[4991]: I0929 11:16:37.726268 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e330529-38b1-4916-aceb-dd93d251eb36-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 11:16:37 crc kubenswrapper[4991]: I0929 11:16:37.946459 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:16:37 crc kubenswrapper[4991]: I0929 11:16:37.947127 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:16:37 crc kubenswrapper[4991]: I0929 11:16:37.960695 4991 generic.go:334] "Generic (PLEG): container finished" podID="6e330529-38b1-4916-aceb-dd93d251eb36" containerID="3b06cca2656062209b5b5122ca2f7d5edd745adf6737a7085f70852819b190dc" exitCode=0 Sep 29 11:16:37 crc kubenswrapper[4991]: I0929 11:16:37.960747 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cdghv" Sep 29 11:16:37 crc kubenswrapper[4991]: I0929 11:16:37.960755 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdghv" event={"ID":"6e330529-38b1-4916-aceb-dd93d251eb36","Type":"ContainerDied","Data":"3b06cca2656062209b5b5122ca2f7d5edd745adf6737a7085f70852819b190dc"} Sep 29 11:16:37 crc kubenswrapper[4991]: I0929 11:16:37.960873 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdghv" event={"ID":"6e330529-38b1-4916-aceb-dd93d251eb36","Type":"ContainerDied","Data":"cd1e982895f4e5350c91bc0a39c170c9f28a42df49da06b154610ccefb1eaf92"} Sep 29 11:16:37 crc kubenswrapper[4991]: I0929 11:16:37.960891 4991 scope.go:117] "RemoveContainer" containerID="3b06cca2656062209b5b5122ca2f7d5edd745adf6737a7085f70852819b190dc" Sep 29 11:16:37 crc kubenswrapper[4991]: I0929 11:16:37.984175 4991 scope.go:117] "RemoveContainer" containerID="0ad01e9bcde083736590b5f44c1ae8fc78b63bdb7b831a7801a45f61d18fd0ec" Sep 29 11:16:38 crc kubenswrapper[4991]: I0929 11:16:38.006966 4991 scope.go:117] "RemoveContainer" containerID="e31bf099af348728ad81b8bd412efa741ba6c29eeeff6d4f3c7e470edb7fb08c" Sep 29 11:16:38 crc kubenswrapper[4991]: I0929 11:16:38.050280 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e330529-38b1-4916-aceb-dd93d251eb36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e330529-38b1-4916-aceb-dd93d251eb36" (UID: "6e330529-38b1-4916-aceb-dd93d251eb36"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:16:38 crc kubenswrapper[4991]: I0929 11:16:38.083104 4991 scope.go:117] "RemoveContainer" containerID="3b06cca2656062209b5b5122ca2f7d5edd745adf6737a7085f70852819b190dc" Sep 29 11:16:38 crc kubenswrapper[4991]: E0929 11:16:38.083647 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b06cca2656062209b5b5122ca2f7d5edd745adf6737a7085f70852819b190dc\": container with ID starting with 3b06cca2656062209b5b5122ca2f7d5edd745adf6737a7085f70852819b190dc not found: ID does not exist" containerID="3b06cca2656062209b5b5122ca2f7d5edd745adf6737a7085f70852819b190dc" Sep 29 11:16:38 crc kubenswrapper[4991]: I0929 11:16:38.083765 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b06cca2656062209b5b5122ca2f7d5edd745adf6737a7085f70852819b190dc"} err="failed to get container status \"3b06cca2656062209b5b5122ca2f7d5edd745adf6737a7085f70852819b190dc\": rpc error: code = NotFound desc = could not find container \"3b06cca2656062209b5b5122ca2f7d5edd745adf6737a7085f70852819b190dc\": container with ID starting with 3b06cca2656062209b5b5122ca2f7d5edd745adf6737a7085f70852819b190dc not found: ID does not exist" Sep 29 11:16:38 crc kubenswrapper[4991]: I0929 11:16:38.083842 4991 scope.go:117] "RemoveContainer" containerID="0ad01e9bcde083736590b5f44c1ae8fc78b63bdb7b831a7801a45f61d18fd0ec" Sep 29 11:16:38 crc kubenswrapper[4991]: E0929 11:16:38.084362 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ad01e9bcde083736590b5f44c1ae8fc78b63bdb7b831a7801a45f61d18fd0ec\": container with ID starting with 0ad01e9bcde083736590b5f44c1ae8fc78b63bdb7b831a7801a45f61d18fd0ec not found: ID does not exist" containerID="0ad01e9bcde083736590b5f44c1ae8fc78b63bdb7b831a7801a45f61d18fd0ec" Sep 29 11:16:38 crc kubenswrapper[4991]: I0929 11:16:38.084445 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ad01e9bcde083736590b5f44c1ae8fc78b63bdb7b831a7801a45f61d18fd0ec"} err="failed to get container status \"0ad01e9bcde083736590b5f44c1ae8fc78b63bdb7b831a7801a45f61d18fd0ec\": rpc error: code = NotFound desc = could not find container \"0ad01e9bcde083736590b5f44c1ae8fc78b63bdb7b831a7801a45f61d18fd0ec\": container with ID starting with 0ad01e9bcde083736590b5f44c1ae8fc78b63bdb7b831a7801a45f61d18fd0ec not found: ID does not exist" Sep 29 11:16:38 crc kubenswrapper[4991]: I0929 11:16:38.084478 4991 scope.go:117] "RemoveContainer" containerID="e31bf099af348728ad81b8bd412efa741ba6c29eeeff6d4f3c7e470edb7fb08c" Sep 29 11:16:38 crc kubenswrapper[4991]: E0929 11:16:38.085058 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e31bf099af348728ad81b8bd412efa741ba6c29eeeff6d4f3c7e470edb7fb08c\": container with ID starting with e31bf099af348728ad81b8bd412efa741ba6c29eeeff6d4f3c7e470edb7fb08c not found: ID does not exist" containerID="e31bf099af348728ad81b8bd412efa741ba6c29eeeff6d4f3c7e470edb7fb08c" Sep 29 11:16:38 crc kubenswrapper[4991]: I0929 11:16:38.085090 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e31bf099af348728ad81b8bd412efa741ba6c29eeeff6d4f3c7e470edb7fb08c"} err="failed to get container status \"e31bf099af348728ad81b8bd412efa741ba6c29eeeff6d4f3c7e470edb7fb08c\": rpc error: code = NotFound desc = could not 
find container \"e31bf099af348728ad81b8bd412efa741ba6c29eeeff6d4f3c7e470edb7fb08c\": container with ID starting with e31bf099af348728ad81b8bd412efa741ba6c29eeeff6d4f3c7e470edb7fb08c not found: ID does not exist" Sep 29 11:16:38 crc kubenswrapper[4991]: I0929 11:16:38.138390 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e330529-38b1-4916-aceb-dd93d251eb36-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 11:16:38 crc kubenswrapper[4991]: I0929 11:16:38.324255 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cdghv"] Sep 29 11:16:38 crc kubenswrapper[4991]: I0929 11:16:38.334382 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cdghv"] Sep 29 11:16:38 crc kubenswrapper[4991]: I0929 11:16:38.942356 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e330529-38b1-4916-aceb-dd93d251eb36" path="/var/lib/kubelet/pods/6e330529-38b1-4916-aceb-dd93d251eb36/volumes" Sep 29 11:17:07 crc kubenswrapper[4991]: I0929 11:17:07.946472 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:17:07 crc kubenswrapper[4991]: I0929 11:17:07.947170 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:17:37 crc kubenswrapper[4991]: I0929 11:17:37.946813 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:17:37 crc kubenswrapper[4991]: I0929 11:17:37.947347 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:17:37 crc kubenswrapper[4991]: I0929 11:17:37.947394 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 11:17:37 crc kubenswrapper[4991]: I0929 11:17:37.948203 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 11:17:37 crc kubenswrapper[4991]: I0929 11:17:37.948256 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" 
containerID="cri-o://9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" gracePeriod=600 Sep 29 11:17:38 crc kubenswrapper[4991]: E0929 11:17:38.066580 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:17:38 crc kubenswrapper[4991]: I0929 11:17:38.662206 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" exitCode=0 Sep 29 11:17:38 crc kubenswrapper[4991]: I0929 11:17:38.662246 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830"} Sep 29 11:17:38 crc kubenswrapper[4991]: I0929 11:17:38.662274 4991 scope.go:117] "RemoveContainer" containerID="6899f1248f1ab543ca2eab35982dcdd38005a8b1036e82393572e638c1670d8e" Sep 29 11:17:38 crc kubenswrapper[4991]: I0929 11:17:38.662971 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:17:38 crc kubenswrapper[4991]: E0929 11:17:38.663404 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:17:50 crc kubenswrapper[4991]: I0929 11:17:50.927591 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:17:50 crc kubenswrapper[4991]: E0929 11:17:50.928577 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:18:05 crc kubenswrapper[4991]: I0929 11:18:05.926906 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:18:05 crc kubenswrapper[4991]: E0929 11:18:05.927593 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:18:20 crc kubenswrapper[4991]: I0929 11:18:20.926536 4991 scope.go:117] "RemoveContainer" 
containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:18:20 crc kubenswrapper[4991]: E0929 11:18:20.927498 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:18:31 crc kubenswrapper[4991]: I0929 11:18:31.926833 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:18:31 crc kubenswrapper[4991]: E0929 11:18:31.927753 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:18:45 crc kubenswrapper[4991]: I0929 11:18:45.926981 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:18:45 crc kubenswrapper[4991]: E0929 11:18:45.927740 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:19:00 crc kubenswrapper[4991]: I0929 11:19:00.927550 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:19:00 crc kubenswrapper[4991]: E0929 11:19:00.928321 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:19:11 crc kubenswrapper[4991]: I0929 11:19:11.926822 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:19:11 crc kubenswrapper[4991]: E0929 11:19:11.927587 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:19:22 crc kubenswrapper[4991]: I0929 11:19:22.928979 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:19:22 crc kubenswrapper[4991]: E0929 11:19:22.929812 4991 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:19:36 crc kubenswrapper[4991]: I0929 11:19:36.926833 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:19:36 crc kubenswrapper[4991]: E0929 11:19:36.927840 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:19:48 crc kubenswrapper[4991]: I0929 11:19:48.928163 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:19:48 crc kubenswrapper[4991]: E0929 11:19:48.928762 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:20:01 crc kubenswrapper[4991]: I0929 11:20:01.926676 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:20:01 crc kubenswrapper[4991]: E0929 11:20:01.927464 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:20:15 crc kubenswrapper[4991]: I0929 11:20:15.927165 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:20:15 crc kubenswrapper[4991]: E0929 11:20:15.928034 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:20:26 crc kubenswrapper[4991]: I0929 11:20:26.928024 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:20:26 crc kubenswrapper[4991]: E0929 11:20:26.928878 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:20:40 crc kubenswrapper[4991]: I0929 11:20:40.927484 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:20:40 crc kubenswrapper[4991]: E0929 11:20:40.928331 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:20:51 crc kubenswrapper[4991]: I0929 11:20:51.926035 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:20:51 crc kubenswrapper[4991]: E0929 11:20:51.926730 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:21:05 crc kubenswrapper[4991]: I0929 11:21:05.926642 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:21:05 crc kubenswrapper[4991]: E0929 11:21:05.928019 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:21:17 crc kubenswrapper[4991]: I0929 11:21:17.926691 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:21:17 crc kubenswrapper[4991]: E0929 11:21:17.927509 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:21:31 crc kubenswrapper[4991]: I0929 11:21:31.926243 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:21:31 crc kubenswrapper[4991]: E0929 11:21:31.927193 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" 
podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:21:46 crc kubenswrapper[4991]: I0929 11:21:46.927282 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:21:46 crc kubenswrapper[4991]: E0929 11:21:46.928089 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:21:59 crc kubenswrapper[4991]: I0929 11:21:59.927426 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:21:59 crc kubenswrapper[4991]: E0929 11:21:59.928262 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:22:11 crc kubenswrapper[4991]: I0929 11:22:11.926332 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:22:11 crc kubenswrapper[4991]: E0929 11:22:11.928080 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:22:26 crc kubenswrapper[4991]: I0929 11:22:26.926667 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:22:26 crc kubenswrapper[4991]: E0929 11:22:26.927439 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:22:40 crc kubenswrapper[4991]: I0929 11:22:40.927026 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:22:42 crc kubenswrapper[4991]: I0929 11:22:42.080736 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"7ab8ac3466dd7238b159a6a209f7b6c5b8422e3add8ac8e18b4c36ae64f8d300"} Sep 29 11:23:18 crc kubenswrapper[4991]: I0929 11:23:18.255264 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-phnzx"] Sep 29 11:23:18 crc kubenswrapper[4991]: E0929 11:23:18.256535 4991 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6e330529-38b1-4916-aceb-dd93d251eb36" containerName="registry-server" Sep 29 11:23:18 crc kubenswrapper[4991]: I0929 11:23:18.256556 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e330529-38b1-4916-aceb-dd93d251eb36" containerName="registry-server" Sep 29 11:23:18 crc kubenswrapper[4991]: E0929 11:23:18.256573 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e330529-38b1-4916-aceb-dd93d251eb36" containerName="extract-utilities" Sep 29 11:23:18 crc kubenswrapper[4991]: I0929 11:23:18.256581 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e330529-38b1-4916-aceb-dd93d251eb36" containerName="extract-utilities" Sep 29 11:23:18 crc kubenswrapper[4991]: E0929 11:23:18.256600 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e330529-38b1-4916-aceb-dd93d251eb36" containerName="extract-content" Sep 29 11:23:18 crc kubenswrapper[4991]: I0929 11:23:18.256622 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e330529-38b1-4916-aceb-dd93d251eb36" containerName="extract-content" Sep 29 11:23:18 crc kubenswrapper[4991]: I0929 11:23:18.256918 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e330529-38b1-4916-aceb-dd93d251eb36" containerName="registry-server" Sep 29 11:23:18 crc kubenswrapper[4991]: I0929 11:23:18.259099 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phnzx" Sep 29 11:23:18 crc kubenswrapper[4991]: I0929 11:23:18.268809 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-phnzx"] Sep 29 11:23:18 crc kubenswrapper[4991]: I0929 11:23:18.394282 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0c076be-89dc-4cf9-8c72-da940b015ab8-catalog-content\") pod \"redhat-marketplace-phnzx\" (UID: \"a0c076be-89dc-4cf9-8c72-da940b015ab8\") " pod="openshift-marketplace/redhat-marketplace-phnzx" Sep 29 11:23:18 crc kubenswrapper[4991]: I0929 11:23:18.394707 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0c076be-89dc-4cf9-8c72-da940b015ab8-utilities\") pod \"redhat-marketplace-phnzx\" (UID: \"a0c076be-89dc-4cf9-8c72-da940b015ab8\") " pod="openshift-marketplace/redhat-marketplace-phnzx" Sep 29 11:23:18 crc kubenswrapper[4991]: I0929 11:23:18.394815 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2ths\" (UniqueName: \"kubernetes.io/projected/a0c076be-89dc-4cf9-8c72-da940b015ab8-kube-api-access-s2ths\") pod \"redhat-marketplace-phnzx\" (UID: \"a0c076be-89dc-4cf9-8c72-da940b015ab8\") " pod="openshift-marketplace/redhat-marketplace-phnzx" Sep 29 11:23:18 crc kubenswrapper[4991]: I0929 11:23:18.497431 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0c076be-89dc-4cf9-8c72-da940b015ab8-utilities\") pod \"redhat-marketplace-phnzx\" (UID: \"a0c076be-89dc-4cf9-8c72-da940b015ab8\") " pod="openshift-marketplace/redhat-marketplace-phnzx" Sep 29 11:23:18 crc kubenswrapper[4991]: I0929 11:23:18.497490 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2ths\" (UniqueName: \"kubernetes.io/projected/a0c076be-89dc-4cf9-8c72-da940b015ab8-kube-api-access-s2ths\") pod 
\"redhat-marketplace-phnzx\" (UID: \"a0c076be-89dc-4cf9-8c72-da940b015ab8\") " pod="openshift-marketplace/redhat-marketplace-phnzx" Sep 29 11:23:18 crc kubenswrapper[4991]: I0929 11:23:18.497570 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0c076be-89dc-4cf9-8c72-da940b015ab8-catalog-content\") pod \"redhat-marketplace-phnzx\" (UID: \"a0c076be-89dc-4cf9-8c72-da940b015ab8\") " pod="openshift-marketplace/redhat-marketplace-phnzx" Sep 29 11:23:18 crc kubenswrapper[4991]: I0929 11:23:18.498049 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0c076be-89dc-4cf9-8c72-da940b015ab8-utilities\") pod \"redhat-marketplace-phnzx\" (UID: \"a0c076be-89dc-4cf9-8c72-da940b015ab8\") " pod="openshift-marketplace/redhat-marketplace-phnzx" Sep 29 11:23:18 crc kubenswrapper[4991]: I0929 11:23:18.498064 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0c076be-89dc-4cf9-8c72-da940b015ab8-catalog-content\") pod \"redhat-marketplace-phnzx\" (UID: \"a0c076be-89dc-4cf9-8c72-da940b015ab8\") " pod="openshift-marketplace/redhat-marketplace-phnzx" Sep 29 11:23:18 crc kubenswrapper[4991]: I0929 11:23:18.517097 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2ths\" (UniqueName: \"kubernetes.io/projected/a0c076be-89dc-4cf9-8c72-da940b015ab8-kube-api-access-s2ths\") pod \"redhat-marketplace-phnzx\" (UID: \"a0c076be-89dc-4cf9-8c72-da940b015ab8\") " pod="openshift-marketplace/redhat-marketplace-phnzx" Sep 29 11:23:18 crc kubenswrapper[4991]: I0929 11:23:18.581353 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phnzx" Sep 29 11:23:19 crc kubenswrapper[4991]: I0929 11:23:19.086104 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-phnzx"] Sep 29 11:23:19 crc kubenswrapper[4991]: I0929 11:23:19.492903 4991 generic.go:334] "Generic (PLEG): container finished" podID="a0c076be-89dc-4cf9-8c72-da940b015ab8" containerID="f3c12dbbc411390f1b50c793693e58290a7edc78e388404a3db3944964816dcf" exitCode=0 Sep 29 11:23:19 crc kubenswrapper[4991]: I0929 11:23:19.492998 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phnzx" event={"ID":"a0c076be-89dc-4cf9-8c72-da940b015ab8","Type":"ContainerDied","Data":"f3c12dbbc411390f1b50c793693e58290a7edc78e388404a3db3944964816dcf"} Sep 29 11:23:19 crc kubenswrapper[4991]: I0929 11:23:19.493189 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phnzx" event={"ID":"a0c076be-89dc-4cf9-8c72-da940b015ab8","Type":"ContainerStarted","Data":"3dbce7877d797ceb496385fd52cd925e13e6bf3f0082e6c300edf2679db940d9"} Sep 29 11:23:19 crc kubenswrapper[4991]: I0929 11:23:19.495067 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 29 11:23:21 crc kubenswrapper[4991]: I0929 11:23:21.514408 4991 generic.go:334] "Generic (PLEG): container finished" podID="a0c076be-89dc-4cf9-8c72-da940b015ab8" containerID="899b5f92c107850a45292903ecffb8c18171f95b3fc09b514ddf5a25bbdfa15f" exitCode=0 Sep 29 11:23:21 crc kubenswrapper[4991]: I0929 11:23:21.514498 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phnzx" event={"ID":"a0c076be-89dc-4cf9-8c72-da940b015ab8","Type":"ContainerDied","Data":"899b5f92c107850a45292903ecffb8c18171f95b3fc09b514ddf5a25bbdfa15f"} Sep 29 11:23:22 crc kubenswrapper[4991]: I0929 11:23:22.528255 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phnzx" event={"ID":"a0c076be-89dc-4cf9-8c72-da940b015ab8","Type":"ContainerStarted","Data":"03826e57a644167988bd9f8949ad03bbd9c1e34197744a1df2cb69d500bd6fd3"} Sep 29 11:23:22 crc kubenswrapper[4991]: I0929 11:23:22.559786 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-phnzx" podStartSLOduration=2.060021959 podStartE2EDuration="4.559762509s" podCreationTimestamp="2025-09-29 11:23:18 +0000 UTC" firstStartedPulling="2025-09-29 11:23:19.494838751 +0000 UTC m=+6335.350766769" lastFinishedPulling="2025-09-29 11:23:21.994579291 +0000 UTC m=+6337.850507319" observedRunningTime="2025-09-29 11:23:22.546822862 +0000 UTC m=+6338.402750890" watchObservedRunningTime="2025-09-29 11:23:22.559762509 +0000 UTC m=+6338.415690537" Sep 29 11:23:23 crc kubenswrapper[4991]: I0929 11:23:23.652843 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dlvmb"] Sep 29 11:23:23 crc kubenswrapper[4991]: I0929 11:23:23.655508 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dlvmb" Sep 29 11:23:23 crc kubenswrapper[4991]: I0929 11:23:23.671675 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dlvmb"] Sep 29 11:23:23 crc kubenswrapper[4991]: I0929 11:23:23.841174 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c42db07c-99b8-48ab-8034-df5a97d071a1-utilities\") pod \"certified-operators-dlvmb\" (UID: \"c42db07c-99b8-48ab-8034-df5a97d071a1\") " pod="openshift-marketplace/certified-operators-dlvmb" Sep 29 11:23:23 crc kubenswrapper[4991]: I0929 11:23:23.842324 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c42db07c-99b8-48ab-8034-df5a97d071a1-catalog-content\") pod \"certified-operators-dlvmb\" (UID: \"c42db07c-99b8-48ab-8034-df5a97d071a1\") " pod="openshift-marketplace/certified-operators-dlvmb" Sep 29 11:23:23 crc kubenswrapper[4991]: I0929 11:23:23.842467 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6dpd\" (UniqueName: \"kubernetes.io/projected/c42db07c-99b8-48ab-8034-df5a97d071a1-kube-api-access-p6dpd\") pod \"certified-operators-dlvmb\" (UID: \"c42db07c-99b8-48ab-8034-df5a97d071a1\") " pod="openshift-marketplace/certified-operators-dlvmb" Sep 29 11:23:23 crc kubenswrapper[4991]: I0929 11:23:23.945676 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c42db07c-99b8-48ab-8034-df5a97d071a1-catalog-content\") pod \"certified-operators-dlvmb\" (UID: \"c42db07c-99b8-48ab-8034-df5a97d071a1\") " pod="openshift-marketplace/certified-operators-dlvmb" Sep 29 11:23:23 crc kubenswrapper[4991]: I0929 11:23:23.945779 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dpd\" (UniqueName: \"kubernetes.io/projected/c42db07c-99b8-48ab-8034-df5a97d071a1-kube-api-access-p6dpd\") pod \"certified-operators-dlvmb\" (UID: \"c42db07c-99b8-48ab-8034-df5a97d071a1\") " pod="openshift-marketplace/certified-operators-dlvmb" Sep 29 11:23:23 crc kubenswrapper[4991]: I0929 11:23:23.946088 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c42db07c-99b8-48ab-8034-df5a97d071a1-utilities\") pod \"certified-operators-dlvmb\" (UID: \"c42db07c-99b8-48ab-8034-df5a97d071a1\") " pod="openshift-marketplace/certified-operators-dlvmb" Sep 29 11:23:23 crc kubenswrapper[4991]: I0929 11:23:23.946675 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c42db07c-99b8-48ab-8034-df5a97d071a1-catalog-content\") pod \"certified-operators-dlvmb\" (UID: \"c42db07c-99b8-48ab-8034-df5a97d071a1\") " pod="openshift-marketplace/certified-operators-dlvmb" Sep 29 11:23:23 crc kubenswrapper[4991]: I0929 11:23:23.946895 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c42db07c-99b8-48ab-8034-df5a97d071a1-utilities\") pod \"certified-operators-dlvmb\" (UID: \"c42db07c-99b8-48ab-8034-df5a97d071a1\") " pod="openshift-marketplace/certified-operators-dlvmb" Sep 29 11:23:23 crc kubenswrapper[4991]: I0929 11:23:23.976442 4991 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p6dpd\" (UniqueName: \"kubernetes.io/projected/c42db07c-99b8-48ab-8034-df5a97d071a1-kube-api-access-p6dpd\") pod \"certified-operators-dlvmb\" (UID: \"c42db07c-99b8-48ab-8034-df5a97d071a1\") " pod="openshift-marketplace/certified-operators-dlvmb" Sep 29 11:23:23 crc kubenswrapper[4991]: I0929 11:23:23.987125 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dlvmb" Sep 29 11:23:24 crc kubenswrapper[4991]: W0929 11:23:24.619464 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc42db07c_99b8_48ab_8034_df5a97d071a1.slice/crio-c071eac190be2724a6ae9975bba8d76b299afe5a594a0d885a73cfc9cbc76b5b WatchSource:0}: Error finding container c071eac190be2724a6ae9975bba8d76b299afe5a594a0d885a73cfc9cbc76b5b: Status 404 returned error can't find the container with id c071eac190be2724a6ae9975bba8d76b299afe5a594a0d885a73cfc9cbc76b5b Sep 29 11:23:24 crc kubenswrapper[4991]: I0929 11:23:24.619587 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dlvmb"] Sep 29 11:23:25 crc kubenswrapper[4991]: I0929 11:23:25.570763 4991 generic.go:334] "Generic (PLEG): container finished" podID="c42db07c-99b8-48ab-8034-df5a97d071a1" containerID="57cb2c22f732b118b6642c6d937157282ed8914b056a4f8103de3c3e82b6f0d5" exitCode=0 Sep 29 11:23:25 crc kubenswrapper[4991]: I0929 11:23:25.570869 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dlvmb" event={"ID":"c42db07c-99b8-48ab-8034-df5a97d071a1","Type":"ContainerDied","Data":"57cb2c22f732b118b6642c6d937157282ed8914b056a4f8103de3c3e82b6f0d5"} Sep 29 11:23:25 crc kubenswrapper[4991]: I0929 11:23:25.573388 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dlvmb" event={"ID":"c42db07c-99b8-48ab-8034-df5a97d071a1","Type":"ContainerStarted","Data":"c071eac190be2724a6ae9975bba8d76b299afe5a594a0d885a73cfc9cbc76b5b"} Sep 29 11:23:27 crc kubenswrapper[4991]: I0929 11:23:27.602653 4991 generic.go:334] "Generic (PLEG): container finished" podID="c42db07c-99b8-48ab-8034-df5a97d071a1" containerID="0e931b9e50ce673ac8e413456e94f54f2240b852299869de83354621e0505806" exitCode=0 Sep 29 11:23:27 crc kubenswrapper[4991]: I0929 11:23:27.603274 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dlvmb" event={"ID":"c42db07c-99b8-48ab-8034-df5a97d071a1","Type":"ContainerDied","Data":"0e931b9e50ce673ac8e413456e94f54f2240b852299869de83354621e0505806"} Sep 29 11:23:28 crc kubenswrapper[4991]: I0929 11:23:28.581878 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-phnzx" Sep 29 11:23:28 crc kubenswrapper[4991]: I0929 11:23:28.584409 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-phnzx" Sep 29 11:23:28 crc kubenswrapper[4991]: I0929 11:23:28.614595 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dlvmb" event={"ID":"c42db07c-99b8-48ab-8034-df5a97d071a1","Type":"ContainerStarted","Data":"fb3fe0de05875d61b59fbfc2b1a40e6beffbdf02a19a47774620f89f8bc3643d"} Sep 29 11:23:28 crc kubenswrapper[4991]: I0929 11:23:28.640156 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-phnzx" Sep 29 11:23:28 crc kubenswrapper[4991]: I0929 11:23:28.647217 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dlvmb" podStartSLOduration=2.9521093240000003 podStartE2EDuration="5.64719906s" podCreationTimestamp="2025-09-29 11:23:23 +0000 UTC" firstStartedPulling="2025-09-29 11:23:25.572735444 +0000 UTC m=+6341.428663472" lastFinishedPulling="2025-09-29 11:23:28.26782518 +0000 UTC m=+6344.123753208" observedRunningTime="2025-09-29 11:23:28.636309376 +0000 UTC m=+6344.492237414" watchObservedRunningTime="2025-09-29 11:23:28.64719906 +0000 UTC m=+6344.503127088" Sep 29 11:23:28 crc kubenswrapper[4991]: I0929 11:23:28.692486 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-phnzx" Sep 29 11:23:29 crc kubenswrapper[4991]: I0929 11:23:29.833482 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-phnzx"] Sep 29 11:23:30 crc kubenswrapper[4991]: I0929 11:23:30.633707 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-phnzx" podUID="a0c076be-89dc-4cf9-8c72-da940b015ab8" containerName="registry-server" containerID="cri-o://03826e57a644167988bd9f8949ad03bbd9c1e34197744a1df2cb69d500bd6fd3" gracePeriod=2 Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.154338 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phnzx" Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.237540 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0c076be-89dc-4cf9-8c72-da940b015ab8-catalog-content\") pod \"a0c076be-89dc-4cf9-8c72-da940b015ab8\" (UID: \"a0c076be-89dc-4cf9-8c72-da940b015ab8\") " Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.237980 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0c076be-89dc-4cf9-8c72-da940b015ab8-utilities\") pod \"a0c076be-89dc-4cf9-8c72-da940b015ab8\" (UID: \"a0c076be-89dc-4cf9-8c72-da940b015ab8\") " Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.238358 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2ths\" (UniqueName: \"kubernetes.io/projected/a0c076be-89dc-4cf9-8c72-da940b015ab8-kube-api-access-s2ths\") pod \"a0c076be-89dc-4cf9-8c72-da940b015ab8\" (UID: \"a0c076be-89dc-4cf9-8c72-da940b015ab8\") " Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.238629 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0c076be-89dc-4cf9-8c72-da940b015ab8-utilities" (OuterVolumeSpecName: "utilities") pod "a0c076be-89dc-4cf9-8c72-da940b015ab8" (UID: "a0c076be-89dc-4cf9-8c72-da940b015ab8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.240573 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0c076be-89dc-4cf9-8c72-da940b015ab8-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.243756 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0c076be-89dc-4cf9-8c72-da940b015ab8-kube-api-access-s2ths" (OuterVolumeSpecName: "kube-api-access-s2ths") pod "a0c076be-89dc-4cf9-8c72-da940b015ab8" (UID: "a0c076be-89dc-4cf9-8c72-da940b015ab8"). InnerVolumeSpecName "kube-api-access-s2ths". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.248616 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0c076be-89dc-4cf9-8c72-da940b015ab8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0c076be-89dc-4cf9-8c72-da940b015ab8" (UID: "a0c076be-89dc-4cf9-8c72-da940b015ab8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.342586 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0c076be-89dc-4cf9-8c72-da940b015ab8-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.342833 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2ths\" (UniqueName: \"kubernetes.io/projected/a0c076be-89dc-4cf9-8c72-da940b015ab8-kube-api-access-s2ths\") on node \"crc\" DevicePath \"\"" Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.647372 4991 generic.go:334] "Generic (PLEG): container finished" podID="a0c076be-89dc-4cf9-8c72-da940b015ab8" containerID="03826e57a644167988bd9f8949ad03bbd9c1e34197744a1df2cb69d500bd6fd3" exitCode=0 Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.647456 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phnzx" Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.647478 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phnzx" event={"ID":"a0c076be-89dc-4cf9-8c72-da940b015ab8","Type":"ContainerDied","Data":"03826e57a644167988bd9f8949ad03bbd9c1e34197744a1df2cb69d500bd6fd3"} Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.648438 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phnzx" event={"ID":"a0c076be-89dc-4cf9-8c72-da940b015ab8","Type":"ContainerDied","Data":"3dbce7877d797ceb496385fd52cd925e13e6bf3f0082e6c300edf2679db940d9"} Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.648477 4991 scope.go:117] "RemoveContainer" containerID="03826e57a644167988bd9f8949ad03bbd9c1e34197744a1df2cb69d500bd6fd3" Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.674426 4991 scope.go:117] "RemoveContainer" containerID="899b5f92c107850a45292903ecffb8c18171f95b3fc09b514ddf5a25bbdfa15f" Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.695875 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-phnzx"] Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.708664 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-phnzx"] Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.716243 4991 scope.go:117] "RemoveContainer" containerID="f3c12dbbc411390f1b50c793693e58290a7edc78e388404a3db3944964816dcf" Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.767993 4991 scope.go:117] "RemoveContainer" containerID="03826e57a644167988bd9f8949ad03bbd9c1e34197744a1df2cb69d500bd6fd3" Sep 29 11:23:31 crc kubenswrapper[4991]: E0929 11:23:31.768571 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03826e57a644167988bd9f8949ad03bbd9c1e34197744a1df2cb69d500bd6fd3\": container with ID starting with 03826e57a644167988bd9f8949ad03bbd9c1e34197744a1df2cb69d500bd6fd3 not found: ID does not exist" containerID="03826e57a644167988bd9f8949ad03bbd9c1e34197744a1df2cb69d500bd6fd3" Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.768678 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03826e57a644167988bd9f8949ad03bbd9c1e34197744a1df2cb69d500bd6fd3"} err="failed to get container status \"03826e57a644167988bd9f8949ad03bbd9c1e34197744a1df2cb69d500bd6fd3\": rpc error: code = NotFound desc = could not find container \"03826e57a644167988bd9f8949ad03bbd9c1e34197744a1df2cb69d500bd6fd3\": container with ID starting with 03826e57a644167988bd9f8949ad03bbd9c1e34197744a1df2cb69d500bd6fd3 not found: ID does not exist" Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.768705 4991 scope.go:117] "RemoveContainer" containerID="899b5f92c107850a45292903ecffb8c18171f95b3fc09b514ddf5a25bbdfa15f" Sep 29 11:23:31 crc kubenswrapper[4991]: E0929 11:23:31.769180 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"899b5f92c107850a45292903ecffb8c18171f95b3fc09b514ddf5a25bbdfa15f\": container with ID starting with 899b5f92c107850a45292903ecffb8c18171f95b3fc09b514ddf5a25bbdfa15f not found: ID does not exist" containerID="899b5f92c107850a45292903ecffb8c18171f95b3fc09b514ddf5a25bbdfa15f" Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.769203 4991 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"899b5f92c107850a45292903ecffb8c18171f95b3fc09b514ddf5a25bbdfa15f"} err="failed to get container status \"899b5f92c107850a45292903ecffb8c18171f95b3fc09b514ddf5a25bbdfa15f\": rpc error: code = NotFound desc = could not find container \"899b5f92c107850a45292903ecffb8c18171f95b3fc09b514ddf5a25bbdfa15f\": container with ID starting with 899b5f92c107850a45292903ecffb8c18171f95b3fc09b514ddf5a25bbdfa15f not found: ID does not exist" Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.769220 4991 scope.go:117] "RemoveContainer" containerID="f3c12dbbc411390f1b50c793693e58290a7edc78e388404a3db3944964816dcf" Sep 29 11:23:31 crc kubenswrapper[4991]: E0929 11:23:31.770599 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3c12dbbc411390f1b50c793693e58290a7edc78e388404a3db3944964816dcf\": container with ID starting with f3c12dbbc411390f1b50c793693e58290a7edc78e388404a3db3944964816dcf not found: ID does not exist" containerID="f3c12dbbc411390f1b50c793693e58290a7edc78e388404a3db3944964816dcf" Sep 29 11:23:31 crc kubenswrapper[4991]: I0929 11:23:31.770631 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c12dbbc411390f1b50c793693e58290a7edc78e388404a3db3944964816dcf"} err="failed to get container status \"f3c12dbbc411390f1b50c793693e58290a7edc78e388404a3db3944964816dcf\": rpc error: code = NotFound desc = could not find container \"f3c12dbbc411390f1b50c793693e58290a7edc78e388404a3db3944964816dcf\": container with ID starting with f3c12dbbc411390f1b50c793693e58290a7edc78e388404a3db3944964816dcf not found: ID does not exist" Sep 29 11:23:31 crc kubenswrapper[4991]: E0929 11:23:31.857103 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0c076be_89dc_4cf9_8c72_da940b015ab8.slice/crio-3dbce7877d797ceb496385fd52cd925e13e6bf3f0082e6c300edf2679db940d9\": RecentStats: unable to find data in memory cache]" Sep 29 11:23:32 crc kubenswrapper[4991]: I0929 11:23:32.941687 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0c076be-89dc-4cf9-8c72-da940b015ab8" path="/var/lib/kubelet/pods/a0c076be-89dc-4cf9-8c72-da940b015ab8/volumes" Sep 29 11:23:33 crc kubenswrapper[4991]: I0929 11:23:33.987737 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dlvmb" Sep 29 11:23:33 crc kubenswrapper[4991]: I0929 11:23:33.987814 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dlvmb" Sep 29 11:23:34 crc kubenswrapper[4991]: I0929 11:23:34.035754 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dlvmb" Sep 29 11:23:34 crc kubenswrapper[4991]: I0929 11:23:34.738579 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dlvmb" Sep 29 11:23:35 crc kubenswrapper[4991]: I0929 11:23:35.229397 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dlvmb"] Sep 29 11:23:36 crc kubenswrapper[4991]: I0929 11:23:36.704739 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dlvmb" 
podUID="c42db07c-99b8-48ab-8034-df5a97d071a1" containerName="registry-server" containerID="cri-o://fb3fe0de05875d61b59fbfc2b1a40e6beffbdf02a19a47774620f89f8bc3643d" gracePeriod=2 Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.303671 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dlvmb" Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.405490 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c42db07c-99b8-48ab-8034-df5a97d071a1-catalog-content\") pod \"c42db07c-99b8-48ab-8034-df5a97d071a1\" (UID: \"c42db07c-99b8-48ab-8034-df5a97d071a1\") " Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.405732 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c42db07c-99b8-48ab-8034-df5a97d071a1-utilities\") pod \"c42db07c-99b8-48ab-8034-df5a97d071a1\" (UID: \"c42db07c-99b8-48ab-8034-df5a97d071a1\") " Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.405818 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6dpd\" (UniqueName: \"kubernetes.io/projected/c42db07c-99b8-48ab-8034-df5a97d071a1-kube-api-access-p6dpd\") pod \"c42db07c-99b8-48ab-8034-df5a97d071a1\" (UID: \"c42db07c-99b8-48ab-8034-df5a97d071a1\") " Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.406549 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c42db07c-99b8-48ab-8034-df5a97d071a1-utilities" (OuterVolumeSpecName: "utilities") pod "c42db07c-99b8-48ab-8034-df5a97d071a1" (UID: "c42db07c-99b8-48ab-8034-df5a97d071a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.416120 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c42db07c-99b8-48ab-8034-df5a97d071a1-kube-api-access-p6dpd" (OuterVolumeSpecName: "kube-api-access-p6dpd") pod "c42db07c-99b8-48ab-8034-df5a97d071a1" (UID: "c42db07c-99b8-48ab-8034-df5a97d071a1"). InnerVolumeSpecName "kube-api-access-p6dpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.458393 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c42db07c-99b8-48ab-8034-df5a97d071a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c42db07c-99b8-48ab-8034-df5a97d071a1" (UID: "c42db07c-99b8-48ab-8034-df5a97d071a1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.508841 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6dpd\" (UniqueName: \"kubernetes.io/projected/c42db07c-99b8-48ab-8034-df5a97d071a1-kube-api-access-p6dpd\") on node \"crc\" DevicePath \"\"" Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.508881 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c42db07c-99b8-48ab-8034-df5a97d071a1-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.508894 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c42db07c-99b8-48ab-8034-df5a97d071a1-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.719623 4991 generic.go:334] "Generic (PLEG): container finished" podID="c42db07c-99b8-48ab-8034-df5a97d071a1" containerID="fb3fe0de05875d61b59fbfc2b1a40e6beffbdf02a19a47774620f89f8bc3643d" exitCode=0 Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.719717 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dlvmb" event={"ID":"c42db07c-99b8-48ab-8034-df5a97d071a1","Type":"ContainerDied","Data":"fb3fe0de05875d61b59fbfc2b1a40e6beffbdf02a19a47774620f89f8bc3643d"} Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.719841 4991 scope.go:117] "RemoveContainer" containerID="fb3fe0de05875d61b59fbfc2b1a40e6beffbdf02a19a47774620f89f8bc3643d" Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.720108 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dlvmb" Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.720191 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dlvmb" event={"ID":"c42db07c-99b8-48ab-8034-df5a97d071a1","Type":"ContainerDied","Data":"c071eac190be2724a6ae9975bba8d76b299afe5a594a0d885a73cfc9cbc76b5b"} Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.759546 4991 scope.go:117] "RemoveContainer" containerID="0e931b9e50ce673ac8e413456e94f54f2240b852299869de83354621e0505806" Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.764380 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dlvmb"] Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.784895 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dlvmb"] Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.795879 4991 scope.go:117] "RemoveContainer" containerID="57cb2c22f732b118b6642c6d937157282ed8914b056a4f8103de3c3e82b6f0d5" Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.858293 4991 scope.go:117] "RemoveContainer" containerID="fb3fe0de05875d61b59fbfc2b1a40e6beffbdf02a19a47774620f89f8bc3643d" Sep 29 11:23:37 crc kubenswrapper[4991]: E0929 11:23:37.858681 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb3fe0de05875d61b59fbfc2b1a40e6beffbdf02a19a47774620f89f8bc3643d\": container with ID starting with fb3fe0de05875d61b59fbfc2b1a40e6beffbdf02a19a47774620f89f8bc3643d not found: ID does not exist" containerID="fb3fe0de05875d61b59fbfc2b1a40e6beffbdf02a19a47774620f89f8bc3643d" Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.858711 
4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb3fe0de05875d61b59fbfc2b1a40e6beffbdf02a19a47774620f89f8bc3643d"} err="failed to get container status \"fb3fe0de05875d61b59fbfc2b1a40e6beffbdf02a19a47774620f89f8bc3643d\": rpc error: code = NotFound desc = could not find container \"fb3fe0de05875d61b59fbfc2b1a40e6beffbdf02a19a47774620f89f8bc3643d\": container with ID starting with fb3fe0de05875d61b59fbfc2b1a40e6beffbdf02a19a47774620f89f8bc3643d not found: ID does not exist" Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.858730 4991 scope.go:117] "RemoveContainer" containerID="0e931b9e50ce673ac8e413456e94f54f2240b852299869de83354621e0505806" Sep 29 11:23:37 crc kubenswrapper[4991]: E0929 11:23:37.859060 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e931b9e50ce673ac8e413456e94f54f2240b852299869de83354621e0505806\": container with ID starting with 0e931b9e50ce673ac8e413456e94f54f2240b852299869de83354621e0505806 not found: ID does not exist" containerID="0e931b9e50ce673ac8e413456e94f54f2240b852299869de83354621e0505806" Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.859080 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e931b9e50ce673ac8e413456e94f54f2240b852299869de83354621e0505806"} err="failed to get container status \"0e931b9e50ce673ac8e413456e94f54f2240b852299869de83354621e0505806\": rpc error: code = NotFound desc = could not find container \"0e931b9e50ce673ac8e413456e94f54f2240b852299869de83354621e0505806\": container with ID starting with 0e931b9e50ce673ac8e413456e94f54f2240b852299869de83354621e0505806 not found: ID does not exist" Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.859093 4991 scope.go:117] "RemoveContainer" containerID="57cb2c22f732b118b6642c6d937157282ed8914b056a4f8103de3c3e82b6f0d5" Sep 29 11:23:37 crc kubenswrapper[4991]: E0929 11:23:37.859341 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57cb2c22f732b118b6642c6d937157282ed8914b056a4f8103de3c3e82b6f0d5\": container with ID starting with 57cb2c22f732b118b6642c6d937157282ed8914b056a4f8103de3c3e82b6f0d5 not found: ID does not exist" containerID="57cb2c22f732b118b6642c6d937157282ed8914b056a4f8103de3c3e82b6f0d5" Sep 29 11:23:37 crc kubenswrapper[4991]: I0929 11:23:37.859367 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57cb2c22f732b118b6642c6d937157282ed8914b056a4f8103de3c3e82b6f0d5"} err="failed to get container status \"57cb2c22f732b118b6642c6d937157282ed8914b056a4f8103de3c3e82b6f0d5\": rpc error: code = NotFound desc = could not find container \"57cb2c22f732b118b6642c6d937157282ed8914b056a4f8103de3c3e82b6f0d5\": container with ID starting with 57cb2c22f732b118b6642c6d937157282ed8914b056a4f8103de3c3e82b6f0d5 not found: ID does not exist" Sep 29 11:23:38 crc kubenswrapper[4991]: I0929 11:23:38.941426 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c42db07c-99b8-48ab-8034-df5a97d071a1" path="/var/lib/kubelet/pods/c42db07c-99b8-48ab-8034-df5a97d071a1/volumes" Sep 29 11:23:51 crc kubenswrapper[4991]: I0929 11:23:51.666027 4991 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-67c9dd4f47-ndrxn" podUID="ce4f4bfc-55ef-4360-905e-df84e5d932b2" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with 
statuscode: 502" Sep 29 11:25:07 crc kubenswrapper[4991]: I0929 11:25:07.947296 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:25:07 crc kubenswrapper[4991]: I0929 11:25:07.947898 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:25:37 crc kubenswrapper[4991]: I0929 11:25:37.946457 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:25:37 crc kubenswrapper[4991]: I0929 11:25:37.947245 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:26:07 crc kubenswrapper[4991]: I0929 11:26:07.946995 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:26:07 crc kubenswrapper[4991]: I0929 11:26:07.947647 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:26:07 crc kubenswrapper[4991]: I0929 11:26:07.947702 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 11:26:07 crc kubenswrapper[4991]: I0929 11:26:07.949197 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ab8ac3466dd7238b159a6a209f7b6c5b8422e3add8ac8e18b4c36ae64f8d300"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 11:26:07 crc kubenswrapper[4991]: I0929 11:26:07.949278 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://7ab8ac3466dd7238b159a6a209f7b6c5b8422e3add8ac8e18b4c36ae64f8d300" gracePeriod=600 Sep 29 11:26:08 crc kubenswrapper[4991]: I0929 11:26:08.446796 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="7ab8ac3466dd7238b159a6a209f7b6c5b8422e3add8ac8e18b4c36ae64f8d300" exitCode=0 Sep 29 
11:26:08 crc kubenswrapper[4991]: I0929 11:26:08.447102 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"7ab8ac3466dd7238b159a6a209f7b6c5b8422e3add8ac8e18b4c36ae64f8d300"} Sep 29 11:26:08 crc kubenswrapper[4991]: I0929 11:26:08.447175 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a"} Sep 29 11:26:08 crc kubenswrapper[4991]: I0929 11:26:08.447199 4991 scope.go:117] "RemoveContainer" containerID="9f8656593607bfe7517cd0b74771723c424b16cf0ad679187504db003d556830" Sep 29 11:26:15 crc kubenswrapper[4991]: I0929 11:26:15.750485 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zg7lh"] Sep 29 11:26:15 crc kubenswrapper[4991]: E0929 11:26:15.754261 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c076be-89dc-4cf9-8c72-da940b015ab8" containerName="extract-utilities" Sep 29 11:26:15 crc kubenswrapper[4991]: I0929 11:26:15.754410 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c076be-89dc-4cf9-8c72-da940b015ab8" containerName="extract-utilities" Sep 29 11:26:15 crc kubenswrapper[4991]: E0929 11:26:15.754517 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c42db07c-99b8-48ab-8034-df5a97d071a1" containerName="extract-content" Sep 29 11:26:15 crc kubenswrapper[4991]: I0929 11:26:15.754599 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c42db07c-99b8-48ab-8034-df5a97d071a1" containerName="extract-content" Sep 29 11:26:15 crc kubenswrapper[4991]: E0929 11:26:15.754688 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c076be-89dc-4cf9-8c72-da940b015ab8" containerName="extract-content" Sep 29 11:26:15 crc kubenswrapper[4991]: I0929 11:26:15.754772 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c076be-89dc-4cf9-8c72-da940b015ab8" containerName="extract-content" Sep 29 11:26:15 crc kubenswrapper[4991]: E0929 11:26:15.754870 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c42db07c-99b8-48ab-8034-df5a97d071a1" containerName="registry-server" Sep 29 11:26:15 crc kubenswrapper[4991]: I0929 11:26:15.755010 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c42db07c-99b8-48ab-8034-df5a97d071a1" containerName="registry-server" Sep 29 11:26:15 crc kubenswrapper[4991]: E0929 11:26:15.755134 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c076be-89dc-4cf9-8c72-da940b015ab8" containerName="registry-server" Sep 29 11:26:15 crc kubenswrapper[4991]: I0929 11:26:15.755230 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c076be-89dc-4cf9-8c72-da940b015ab8" containerName="registry-server" Sep 29 11:26:15 crc kubenswrapper[4991]: E0929 11:26:15.755351 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c42db07c-99b8-48ab-8034-df5a97d071a1" containerName="extract-utilities" Sep 29 11:26:15 crc kubenswrapper[4991]: I0929 11:26:15.755463 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c42db07c-99b8-48ab-8034-df5a97d071a1" containerName="extract-utilities" Sep 29 11:26:15 crc kubenswrapper[4991]: I0929 11:26:15.755984 4991 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c42db07c-99b8-48ab-8034-df5a97d071a1" containerName="registry-server" Sep 29 11:26:15 crc kubenswrapper[4991]: I0929 11:26:15.756156 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0c076be-89dc-4cf9-8c72-da940b015ab8" containerName="registry-server" Sep 29 11:26:15 crc kubenswrapper[4991]: I0929 11:26:15.761104 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zg7lh" Sep 29 11:26:15 crc kubenswrapper[4991]: I0929 11:26:15.765118 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zg7lh"] Sep 29 11:26:15 crc kubenswrapper[4991]: I0929 11:26:15.858327 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a75af86-a242-48fa-9ad7-985b2b190679-catalog-content\") pod \"redhat-operators-zg7lh\" (UID: \"7a75af86-a242-48fa-9ad7-985b2b190679\") " pod="openshift-marketplace/redhat-operators-zg7lh" Sep 29 11:26:15 crc kubenswrapper[4991]: I0929 11:26:15.858818 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a75af86-a242-48fa-9ad7-985b2b190679-utilities\") pod \"redhat-operators-zg7lh\" (UID: \"7a75af86-a242-48fa-9ad7-985b2b190679\") " pod="openshift-marketplace/redhat-operators-zg7lh" Sep 29 11:26:15 crc kubenswrapper[4991]: I0929 11:26:15.859028 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcqx6\" (UniqueName: \"kubernetes.io/projected/7a75af86-a242-48fa-9ad7-985b2b190679-kube-api-access-kcqx6\") pod \"redhat-operators-zg7lh\" (UID: \"7a75af86-a242-48fa-9ad7-985b2b190679\") " pod="openshift-marketplace/redhat-operators-zg7lh" Sep 29 11:26:15 crc kubenswrapper[4991]: I0929 11:26:15.960834 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a75af86-a242-48fa-9ad7-985b2b190679-catalog-content\") pod \"redhat-operators-zg7lh\" (UID: \"7a75af86-a242-48fa-9ad7-985b2b190679\") " pod="openshift-marketplace/redhat-operators-zg7lh" Sep 29 11:26:15 crc kubenswrapper[4991]: I0929 11:26:15.960964 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a75af86-a242-48fa-9ad7-985b2b190679-utilities\") pod \"redhat-operators-zg7lh\" (UID: \"7a75af86-a242-48fa-9ad7-985b2b190679\") " pod="openshift-marketplace/redhat-operators-zg7lh" Sep 29 11:26:15 crc kubenswrapper[4991]: I0929 11:26:15.961048 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcqx6\" (UniqueName: \"kubernetes.io/projected/7a75af86-a242-48fa-9ad7-985b2b190679-kube-api-access-kcqx6\") pod \"redhat-operators-zg7lh\" (UID: \"7a75af86-a242-48fa-9ad7-985b2b190679\") " pod="openshift-marketplace/redhat-operators-zg7lh" Sep 29 11:26:15 crc kubenswrapper[4991]: I0929 11:26:15.961449 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a75af86-a242-48fa-9ad7-985b2b190679-utilities\") pod \"redhat-operators-zg7lh\" (UID: \"7a75af86-a242-48fa-9ad7-985b2b190679\") " pod="openshift-marketplace/redhat-operators-zg7lh" Sep 29 11:26:15 crc kubenswrapper[4991]: I0929 11:26:15.961466 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a75af86-a242-48fa-9ad7-985b2b190679-catalog-content\") pod \"redhat-operators-zg7lh\" (UID: \"7a75af86-a242-48fa-9ad7-985b2b190679\") " pod="openshift-marketplace/redhat-operators-zg7lh" Sep 29 11:26:15 crc kubenswrapper[4991]: I0929 11:26:15.982741 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcqx6\" (UniqueName: \"kubernetes.io/projected/7a75af86-a242-48fa-9ad7-985b2b190679-kube-api-access-kcqx6\") pod \"redhat-operators-zg7lh\" (UID: \"7a75af86-a242-48fa-9ad7-985b2b190679\") " pod="openshift-marketplace/redhat-operators-zg7lh" Sep 29 11:26:16 crc kubenswrapper[4991]: I0929 11:26:16.097384 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zg7lh" Sep 29 11:26:16 crc kubenswrapper[4991]: I0929 11:26:16.597014 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zg7lh"] Sep 29 11:26:17 crc kubenswrapper[4991]: I0929 11:26:17.559798 4991 generic.go:334] "Generic (PLEG): container finished" podID="7a75af86-a242-48fa-9ad7-985b2b190679" containerID="ebe9cfe4b217fec3d3bf00660bce506b6f2e2f47256b6785ccca0d8482074ada" exitCode=0 Sep 29 11:26:17 crc kubenswrapper[4991]: I0929 11:26:17.559856 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg7lh" event={"ID":"7a75af86-a242-48fa-9ad7-985b2b190679","Type":"ContainerDied","Data":"ebe9cfe4b217fec3d3bf00660bce506b6f2e2f47256b6785ccca0d8482074ada"} Sep 29 11:26:17 crc kubenswrapper[4991]: I0929 11:26:17.560344 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg7lh" event={"ID":"7a75af86-a242-48fa-9ad7-985b2b190679","Type":"ContainerStarted","Data":"45432632e6987d1522ebd4e9ed87ca71162c448789772f89138933d4ac4bce8a"} Sep 29 11:26:18 crc kubenswrapper[4991]: I0929 11:26:18.574235 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg7lh" event={"ID":"7a75af86-a242-48fa-9ad7-985b2b190679","Type":"ContainerStarted","Data":"3e8d4ec42a7389223aae87ea6d26aca57d68c8774205ce5fdcf471ef603ae21c"} Sep 29 11:26:20 crc kubenswrapper[4991]: I0929 11:26:20.601577 4991 generic.go:334] "Generic (PLEG): container finished" podID="7a75af86-a242-48fa-9ad7-985b2b190679" containerID="3e8d4ec42a7389223aae87ea6d26aca57d68c8774205ce5fdcf471ef603ae21c" exitCode=0 Sep 29 11:26:20 crc kubenswrapper[4991]: I0929 11:26:20.601689 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg7lh" event={"ID":"7a75af86-a242-48fa-9ad7-985b2b190679","Type":"ContainerDied","Data":"3e8d4ec42a7389223aae87ea6d26aca57d68c8774205ce5fdcf471ef603ae21c"} Sep 29 11:26:22 crc kubenswrapper[4991]: I0929 11:26:22.632637 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg7lh" event={"ID":"7a75af86-a242-48fa-9ad7-985b2b190679","Type":"ContainerStarted","Data":"d1850d2e954979b50d35dbdce304030e4716789cdc1989a45141c4f7eb1f4ed1"} Sep 29 11:26:22 crc kubenswrapper[4991]: I0929 11:26:22.663535 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zg7lh" podStartSLOduration=4.110382497 podStartE2EDuration="7.663509058s" podCreationTimestamp="2025-09-29 11:26:15 +0000 UTC" firstStartedPulling="2025-09-29 11:26:17.562511457 +0000 UTC m=+6513.418439485" lastFinishedPulling="2025-09-29 11:26:21.115638018 
+0000 UTC m=+6516.971566046" observedRunningTime="2025-09-29 11:26:22.663253552 +0000 UTC m=+6518.519181590" watchObservedRunningTime="2025-09-29 11:26:22.663509058 +0000 UTC m=+6518.519437086"
Sep 29 11:26:26 crc kubenswrapper[4991]: I0929 11:26:26.098382 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zg7lh"
Sep 29 11:26:26 crc kubenswrapper[4991]: I0929 11:26:26.098942 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zg7lh"
Sep 29 11:26:27 crc kubenswrapper[4991]: I0929 11:26:27.148312 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg7lh" podUID="7a75af86-a242-48fa-9ad7-985b2b190679" containerName="registry-server" probeResult="failure" output=<
Sep 29 11:26:27 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s
Sep 29 11:26:27 crc kubenswrapper[4991]: >
Sep 29 11:26:36 crc kubenswrapper[4991]: I0929 11:26:36.149902 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zg7lh"
Sep 29 11:26:36 crc kubenswrapper[4991]: I0929 11:26:36.214761 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zg7lh"
Sep 29 11:26:36 crc kubenswrapper[4991]: I0929 11:26:36.385414 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zg7lh"]
Sep 29 11:26:37 crc kubenswrapper[4991]: I0929 11:26:37.827277 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zg7lh" podUID="7a75af86-a242-48fa-9ad7-985b2b190679" containerName="registry-server" containerID="cri-o://d1850d2e954979b50d35dbdce304030e4716789cdc1989a45141c4f7eb1f4ed1" gracePeriod=2
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.390675 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zg7lh"
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.428798 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcqx6\" (UniqueName: \"kubernetes.io/projected/7a75af86-a242-48fa-9ad7-985b2b190679-kube-api-access-kcqx6\") pod \"7a75af86-a242-48fa-9ad7-985b2b190679\" (UID: \"7a75af86-a242-48fa-9ad7-985b2b190679\") "
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.428933 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a75af86-a242-48fa-9ad7-985b2b190679-catalog-content\") pod \"7a75af86-a242-48fa-9ad7-985b2b190679\" (UID: \"7a75af86-a242-48fa-9ad7-985b2b190679\") "
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.429790 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a75af86-a242-48fa-9ad7-985b2b190679-utilities\") pod \"7a75af86-a242-48fa-9ad7-985b2b190679\" (UID: \"7a75af86-a242-48fa-9ad7-985b2b190679\") "
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.431707 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a75af86-a242-48fa-9ad7-985b2b190679-utilities" (OuterVolumeSpecName: "utilities") pod "7a75af86-a242-48fa-9ad7-985b2b190679" (UID: "7a75af86-a242-48fa-9ad7-985b2b190679"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.435833 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a75af86-a242-48fa-9ad7-985b2b190679-kube-api-access-kcqx6" (OuterVolumeSpecName: "kube-api-access-kcqx6") pod "7a75af86-a242-48fa-9ad7-985b2b190679" (UID: "7a75af86-a242-48fa-9ad7-985b2b190679"). InnerVolumeSpecName "kube-api-access-kcqx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.514323 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a75af86-a242-48fa-9ad7-985b2b190679-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a75af86-a242-48fa-9ad7-985b2b190679" (UID: "7a75af86-a242-48fa-9ad7-985b2b190679"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.533762 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcqx6\" (UniqueName: \"kubernetes.io/projected/7a75af86-a242-48fa-9ad7-985b2b190679-kube-api-access-kcqx6\") on node \"crc\" DevicePath \"\""
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.533812 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a75af86-a242-48fa-9ad7-985b2b190679-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.533822 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a75af86-a242-48fa-9ad7-985b2b190679-utilities\") on node \"crc\" DevicePath \"\""
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.845412 4991 generic.go:334] "Generic (PLEG): container finished" podID="7a75af86-a242-48fa-9ad7-985b2b190679" containerID="d1850d2e954979b50d35dbdce304030e4716789cdc1989a45141c4f7eb1f4ed1" exitCode=0
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.845473 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg7lh" event={"ID":"7a75af86-a242-48fa-9ad7-985b2b190679","Type":"ContainerDied","Data":"d1850d2e954979b50d35dbdce304030e4716789cdc1989a45141c4f7eb1f4ed1"}
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.845481 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zg7lh"
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.845527 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg7lh" event={"ID":"7a75af86-a242-48fa-9ad7-985b2b190679","Type":"ContainerDied","Data":"45432632e6987d1522ebd4e9ed87ca71162c448789772f89138933d4ac4bce8a"}
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.845556 4991 scope.go:117] "RemoveContainer" containerID="d1850d2e954979b50d35dbdce304030e4716789cdc1989a45141c4f7eb1f4ed1"
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.888665 4991 scope.go:117] "RemoveContainer" containerID="3e8d4ec42a7389223aae87ea6d26aca57d68c8774205ce5fdcf471ef603ae21c"
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.889425 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zg7lh"]
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.906397 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zg7lh"]
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.932273 4991 scope.go:117] "RemoveContainer" containerID="ebe9cfe4b217fec3d3bf00660bce506b6f2e2f47256b6785ccca0d8482074ada"
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.950434 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a75af86-a242-48fa-9ad7-985b2b190679" path="/var/lib/kubelet/pods/7a75af86-a242-48fa-9ad7-985b2b190679/volumes"
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.998292 4991 scope.go:117] "RemoveContainer" containerID="d1850d2e954979b50d35dbdce304030e4716789cdc1989a45141c4f7eb1f4ed1"
Sep 29 11:26:38 crc kubenswrapper[4991]: E0929 11:26:38.998788 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1850d2e954979b50d35dbdce304030e4716789cdc1989a45141c4f7eb1f4ed1\": container with ID starting with d1850d2e954979b50d35dbdce304030e4716789cdc1989a45141c4f7eb1f4ed1 not found: ID does not exist" containerID="d1850d2e954979b50d35dbdce304030e4716789cdc1989a45141c4f7eb1f4ed1"
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.998829 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1850d2e954979b50d35dbdce304030e4716789cdc1989a45141c4f7eb1f4ed1"} err="failed to get container status \"d1850d2e954979b50d35dbdce304030e4716789cdc1989a45141c4f7eb1f4ed1\": rpc error: code = NotFound desc = could not find container \"d1850d2e954979b50d35dbdce304030e4716789cdc1989a45141c4f7eb1f4ed1\": container with ID starting with d1850d2e954979b50d35dbdce304030e4716789cdc1989a45141c4f7eb1f4ed1 not found: ID does not exist"
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.998861 4991 scope.go:117] "RemoveContainer" containerID="3e8d4ec42a7389223aae87ea6d26aca57d68c8774205ce5fdcf471ef603ae21c"
Sep 29 11:26:38 crc kubenswrapper[4991]: E0929 11:26:38.999132 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e8d4ec42a7389223aae87ea6d26aca57d68c8774205ce5fdcf471ef603ae21c\": container with ID starting with 3e8d4ec42a7389223aae87ea6d26aca57d68c8774205ce5fdcf471ef603ae21c not found: ID does not exist" containerID="3e8d4ec42a7389223aae87ea6d26aca57d68c8774205ce5fdcf471ef603ae21c"
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.999157 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e8d4ec42a7389223aae87ea6d26aca57d68c8774205ce5fdcf471ef603ae21c"} err="failed to get container status \"3e8d4ec42a7389223aae87ea6d26aca57d68c8774205ce5fdcf471ef603ae21c\": rpc error: code = NotFound desc = could not find container \"3e8d4ec42a7389223aae87ea6d26aca57d68c8774205ce5fdcf471ef603ae21c\": container with ID starting with 3e8d4ec42a7389223aae87ea6d26aca57d68c8774205ce5fdcf471ef603ae21c not found: ID does not exist"
Sep 29 11:26:38 crc kubenswrapper[4991]: I0929 11:26:38.999177 4991 scope.go:117] "RemoveContainer" containerID="ebe9cfe4b217fec3d3bf00660bce506b6f2e2f47256b6785ccca0d8482074ada"
Sep 29 11:26:39 crc kubenswrapper[4991]: E0929 11:26:39.000281 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebe9cfe4b217fec3d3bf00660bce506b6f2e2f47256b6785ccca0d8482074ada\": container with ID starting with ebe9cfe4b217fec3d3bf00660bce506b6f2e2f47256b6785ccca0d8482074ada not found: ID does not exist" containerID="ebe9cfe4b217fec3d3bf00660bce506b6f2e2f47256b6785ccca0d8482074ada"
Sep 29 11:26:39 crc kubenswrapper[4991]: I0929 11:26:39.000307 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe9cfe4b217fec3d3bf00660bce506b6f2e2f47256b6785ccca0d8482074ada"} err="failed to get container status \"ebe9cfe4b217fec3d3bf00660bce506b6f2e2f47256b6785ccca0d8482074ada\": rpc error: code = NotFound desc = could not find container \"ebe9cfe4b217fec3d3bf00660bce506b6f2e2f47256b6785ccca0d8482074ada\": container with ID starting with ebe9cfe4b217fec3d3bf00660bce506b6f2e2f47256b6785ccca0d8482074ada not found: ID does not exist"
Sep 29 11:27:50 crc kubenswrapper[4991]: I0929 11:27:50.719317 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v5c9d"]
Sep 29 11:27:50 crc kubenswrapper[4991]: E0929 11:27:50.720569 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a75af86-a242-48fa-9ad7-985b2b190679" containerName="registry-server"
Sep 29 11:27:50 crc kubenswrapper[4991]: I0929 11:27:50.720587 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a75af86-a242-48fa-9ad7-985b2b190679" containerName="registry-server"
Sep 29 11:27:50 crc kubenswrapper[4991]: E0929 11:27:50.720617 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a75af86-a242-48fa-9ad7-985b2b190679" containerName="extract-utilities"
Sep 29 11:27:50 crc kubenswrapper[4991]: I0929 11:27:50.720627 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a75af86-a242-48fa-9ad7-985b2b190679" containerName="extract-utilities"
Sep 29 11:27:50 crc kubenswrapper[4991]: E0929 11:27:50.720681 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a75af86-a242-48fa-9ad7-985b2b190679" containerName="extract-content"
Sep 29 11:27:50 crc kubenswrapper[4991]: I0929 11:27:50.720689 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a75af86-a242-48fa-9ad7-985b2b190679" containerName="extract-content"
Sep 29 11:27:50 crc kubenswrapper[4991]: I0929 11:27:50.721020 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a75af86-a242-48fa-9ad7-985b2b190679" containerName="registry-server"
Sep 29 11:27:50 crc kubenswrapper[4991]: I0929 11:27:50.723128 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5c9d"
Need to start a new one" pod="openshift-marketplace/community-operators-v5c9d" Sep 29 11:27:50 crc kubenswrapper[4991]: I0929 11:27:50.731126 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v5c9d"] Sep 29 11:27:50 crc kubenswrapper[4991]: I0929 11:27:50.822515 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/520cd9f5-7c9d-4298-91b5-8a84e7682cfd-catalog-content\") pod \"community-operators-v5c9d\" (UID: \"520cd9f5-7c9d-4298-91b5-8a84e7682cfd\") " pod="openshift-marketplace/community-operators-v5c9d" Sep 29 11:27:50 crc kubenswrapper[4991]: I0929 11:27:50.822893 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/520cd9f5-7c9d-4298-91b5-8a84e7682cfd-utilities\") pod \"community-operators-v5c9d\" (UID: \"520cd9f5-7c9d-4298-91b5-8a84e7682cfd\") " pod="openshift-marketplace/community-operators-v5c9d" Sep 29 11:27:50 crc kubenswrapper[4991]: I0929 11:27:50.823028 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs7v6\" (UniqueName: \"kubernetes.io/projected/520cd9f5-7c9d-4298-91b5-8a84e7682cfd-kube-api-access-vs7v6\") pod \"community-operators-v5c9d\" (UID: \"520cd9f5-7c9d-4298-91b5-8a84e7682cfd\") " pod="openshift-marketplace/community-operators-v5c9d" Sep 29 11:27:50 crc kubenswrapper[4991]: I0929 11:27:50.924928 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/520cd9f5-7c9d-4298-91b5-8a84e7682cfd-catalog-content\") pod \"community-operators-v5c9d\" (UID: \"520cd9f5-7c9d-4298-91b5-8a84e7682cfd\") " pod="openshift-marketplace/community-operators-v5c9d" Sep 29 11:27:50 crc kubenswrapper[4991]: I0929 11:27:50.925030 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/520cd9f5-7c9d-4298-91b5-8a84e7682cfd-utilities\") pod \"community-operators-v5c9d\" (UID: \"520cd9f5-7c9d-4298-91b5-8a84e7682cfd\") " pod="openshift-marketplace/community-operators-v5c9d" Sep 29 11:27:50 crc kubenswrapper[4991]: I0929 11:27:50.925084 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs7v6\" (UniqueName: \"kubernetes.io/projected/520cd9f5-7c9d-4298-91b5-8a84e7682cfd-kube-api-access-vs7v6\") pod \"community-operators-v5c9d\" (UID: \"520cd9f5-7c9d-4298-91b5-8a84e7682cfd\") " pod="openshift-marketplace/community-operators-v5c9d" Sep 29 11:27:50 crc kubenswrapper[4991]: I0929 11:27:50.925917 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/520cd9f5-7c9d-4298-91b5-8a84e7682cfd-catalog-content\") pod \"community-operators-v5c9d\" (UID: \"520cd9f5-7c9d-4298-91b5-8a84e7682cfd\") " pod="openshift-marketplace/community-operators-v5c9d" Sep 29 11:27:50 crc kubenswrapper[4991]: I0929 11:27:50.925935 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/520cd9f5-7c9d-4298-91b5-8a84e7682cfd-utilities\") pod \"community-operators-v5c9d\" (UID: \"520cd9f5-7c9d-4298-91b5-8a84e7682cfd\") " pod="openshift-marketplace/community-operators-v5c9d" Sep 29 11:27:50 crc kubenswrapper[4991]: I0929 11:27:50.948834 4991 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vs7v6\" (UniqueName: \"kubernetes.io/projected/520cd9f5-7c9d-4298-91b5-8a84e7682cfd-kube-api-access-vs7v6\") pod \"community-operators-v5c9d\" (UID: \"520cd9f5-7c9d-4298-91b5-8a84e7682cfd\") " pod="openshift-marketplace/community-operators-v5c9d" Sep 29 11:27:51 crc kubenswrapper[4991]: I0929 11:27:51.051136 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5c9d" Sep 29 11:27:51 crc kubenswrapper[4991]: I0929 11:27:51.623291 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v5c9d"] Sep 29 11:27:51 crc kubenswrapper[4991]: I0929 11:27:51.702166 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5c9d" event={"ID":"520cd9f5-7c9d-4298-91b5-8a84e7682cfd","Type":"ContainerStarted","Data":"22d5286ce3d2a8a89963578471ebcc907905c7176be5820cbfc675d02267cb2a"} Sep 29 11:27:52 crc kubenswrapper[4991]: I0929 11:27:52.715329 4991 generic.go:334] "Generic (PLEG): container finished" podID="520cd9f5-7c9d-4298-91b5-8a84e7682cfd" containerID="88362a3017ab34debba5439c5154f5baa8dd1d50dfff9561919224fe44fb2426" exitCode=0 Sep 29 11:27:52 crc kubenswrapper[4991]: I0929 11:27:52.715432 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5c9d" event={"ID":"520cd9f5-7c9d-4298-91b5-8a84e7682cfd","Type":"ContainerDied","Data":"88362a3017ab34debba5439c5154f5baa8dd1d50dfff9561919224fe44fb2426"} Sep 29 11:27:54 crc kubenswrapper[4991]: I0929 11:27:54.740314 4991 generic.go:334] "Generic (PLEG): container finished" podID="520cd9f5-7c9d-4298-91b5-8a84e7682cfd" containerID="e19dc1d288aa0be8717e341ed36d0535d22eb5fd55fac8a49537494a42d0409a" exitCode=0 Sep 29 11:27:54 crc kubenswrapper[4991]: I0929 11:27:54.740393 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5c9d" event={"ID":"520cd9f5-7c9d-4298-91b5-8a84e7682cfd","Type":"ContainerDied","Data":"e19dc1d288aa0be8717e341ed36d0535d22eb5fd55fac8a49537494a42d0409a"} Sep 29 11:27:55 crc kubenswrapper[4991]: I0929 11:27:55.753709 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5c9d" event={"ID":"520cd9f5-7c9d-4298-91b5-8a84e7682cfd","Type":"ContainerStarted","Data":"3cefc0920f0a9fb5f2286da36a0911c885a1cc7bb140c110572679155c79960b"} Sep 29 11:27:55 crc kubenswrapper[4991]: I0929 11:27:55.779646 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v5c9d" podStartSLOduration=3.306091636 podStartE2EDuration="5.779617812s" podCreationTimestamp="2025-09-29 11:27:50 +0000 UTC" firstStartedPulling="2025-09-29 11:27:52.717515748 +0000 UTC m=+6608.573443776" lastFinishedPulling="2025-09-29 11:27:55.191041924 +0000 UTC m=+6611.046969952" observedRunningTime="2025-09-29 11:27:55.769853868 +0000 UTC m=+6611.625781916" watchObservedRunningTime="2025-09-29 11:27:55.779617812 +0000 UTC m=+6611.635545850" Sep 29 11:28:01 crc kubenswrapper[4991]: I0929 11:28:01.052234 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v5c9d" Sep 29 11:28:01 crc kubenswrapper[4991]: I0929 11:28:01.052730 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v5c9d" Sep 29 11:28:01 crc kubenswrapper[4991]: I0929 11:28:01.102968 4991 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v5c9d" Sep 29 11:28:01 crc kubenswrapper[4991]: I0929 11:28:01.872004 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v5c9d" Sep 29 11:28:01 crc kubenswrapper[4991]: I0929 11:28:01.936102 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v5c9d"] Sep 29 11:28:03 crc kubenswrapper[4991]: I0929 11:28:03.842453 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v5c9d" podUID="520cd9f5-7c9d-4298-91b5-8a84e7682cfd" containerName="registry-server" containerID="cri-o://3cefc0920f0a9fb5f2286da36a0911c885a1cc7bb140c110572679155c79960b" gracePeriod=2 Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.345658 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5c9d" Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.544449 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs7v6\" (UniqueName: \"kubernetes.io/projected/520cd9f5-7c9d-4298-91b5-8a84e7682cfd-kube-api-access-vs7v6\") pod \"520cd9f5-7c9d-4298-91b5-8a84e7682cfd\" (UID: \"520cd9f5-7c9d-4298-91b5-8a84e7682cfd\") " Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.544526 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/520cd9f5-7c9d-4298-91b5-8a84e7682cfd-catalog-content\") pod \"520cd9f5-7c9d-4298-91b5-8a84e7682cfd\" (UID: \"520cd9f5-7c9d-4298-91b5-8a84e7682cfd\") " Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.544704 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/520cd9f5-7c9d-4298-91b5-8a84e7682cfd-utilities\") pod \"520cd9f5-7c9d-4298-91b5-8a84e7682cfd\" (UID: \"520cd9f5-7c9d-4298-91b5-8a84e7682cfd\") " Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.545848 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/520cd9f5-7c9d-4298-91b5-8a84e7682cfd-utilities" (OuterVolumeSpecName: "utilities") pod "520cd9f5-7c9d-4298-91b5-8a84e7682cfd" (UID: "520cd9f5-7c9d-4298-91b5-8a84e7682cfd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.554165 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/520cd9f5-7c9d-4298-91b5-8a84e7682cfd-kube-api-access-vs7v6" (OuterVolumeSpecName: "kube-api-access-vs7v6") pod "520cd9f5-7c9d-4298-91b5-8a84e7682cfd" (UID: "520cd9f5-7c9d-4298-91b5-8a84e7682cfd"). InnerVolumeSpecName "kube-api-access-vs7v6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.648800 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs7v6\" (UniqueName: \"kubernetes.io/projected/520cd9f5-7c9d-4298-91b5-8a84e7682cfd-kube-api-access-vs7v6\") on node \"crc\" DevicePath \"\"" Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.648870 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/520cd9f5-7c9d-4298-91b5-8a84e7682cfd-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.854087 4991 generic.go:334] "Generic (PLEG): container finished" podID="520cd9f5-7c9d-4298-91b5-8a84e7682cfd" containerID="3cefc0920f0a9fb5f2286da36a0911c885a1cc7bb140c110572679155c79960b" exitCode=0 Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.854136 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5c9d" event={"ID":"520cd9f5-7c9d-4298-91b5-8a84e7682cfd","Type":"ContainerDied","Data":"3cefc0920f0a9fb5f2286da36a0911c885a1cc7bb140c110572679155c79960b"} Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.854167 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5c9d" event={"ID":"520cd9f5-7c9d-4298-91b5-8a84e7682cfd","Type":"ContainerDied","Data":"22d5286ce3d2a8a89963578471ebcc907905c7176be5820cbfc675d02267cb2a"} Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.854191 4991 scope.go:117] "RemoveContainer" containerID="3cefc0920f0a9fb5f2286da36a0911c885a1cc7bb140c110572679155c79960b" Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.857221 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5c9d" Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.876693 4991 scope.go:117] "RemoveContainer" containerID="e19dc1d288aa0be8717e341ed36d0535d22eb5fd55fac8a49537494a42d0409a" Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.910035 4991 scope.go:117] "RemoveContainer" containerID="88362a3017ab34debba5439c5154f5baa8dd1d50dfff9561919224fe44fb2426" Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.941816 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/520cd9f5-7c9d-4298-91b5-8a84e7682cfd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "520cd9f5-7c9d-4298-91b5-8a84e7682cfd" (UID: "520cd9f5-7c9d-4298-91b5-8a84e7682cfd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.955110 4991 scope.go:117] "RemoveContainer" containerID="3cefc0920f0a9fb5f2286da36a0911c885a1cc7bb140c110572679155c79960b" Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.955850 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/520cd9f5-7c9d-4298-91b5-8a84e7682cfd-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 11:28:04 crc kubenswrapper[4991]: E0929 11:28:04.955942 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cefc0920f0a9fb5f2286da36a0911c885a1cc7bb140c110572679155c79960b\": container with ID starting with 3cefc0920f0a9fb5f2286da36a0911c885a1cc7bb140c110572679155c79960b not found: ID does not exist" containerID="3cefc0920f0a9fb5f2286da36a0911c885a1cc7bb140c110572679155c79960b" Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.956038 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cefc0920f0a9fb5f2286da36a0911c885a1cc7bb140c110572679155c79960b"} err="failed to get container status \"3cefc0920f0a9fb5f2286da36a0911c885a1cc7bb140c110572679155c79960b\": rpc error: code = NotFound desc = could not find container \"3cefc0920f0a9fb5f2286da36a0911c885a1cc7bb140c110572679155c79960b\": container with ID starting with 3cefc0920f0a9fb5f2286da36a0911c885a1cc7bb140c110572679155c79960b not found: ID does not exist" Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.956076 4991 scope.go:117] "RemoveContainer" containerID="e19dc1d288aa0be8717e341ed36d0535d22eb5fd55fac8a49537494a42d0409a" Sep 29 11:28:04 crc kubenswrapper[4991]: E0929 11:28:04.956562 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e19dc1d288aa0be8717e341ed36d0535d22eb5fd55fac8a49537494a42d0409a\": container with ID starting with e19dc1d288aa0be8717e341ed36d0535d22eb5fd55fac8a49537494a42d0409a not found: ID does not exist" containerID="e19dc1d288aa0be8717e341ed36d0535d22eb5fd55fac8a49537494a42d0409a" Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.956592 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e19dc1d288aa0be8717e341ed36d0535d22eb5fd55fac8a49537494a42d0409a"} err="failed to get container status \"e19dc1d288aa0be8717e341ed36d0535d22eb5fd55fac8a49537494a42d0409a\": rpc error: code = NotFound desc = could not find container \"e19dc1d288aa0be8717e341ed36d0535d22eb5fd55fac8a49537494a42d0409a\": container with ID starting with e19dc1d288aa0be8717e341ed36d0535d22eb5fd55fac8a49537494a42d0409a not found: ID does not exist" Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.956611 4991 scope.go:117] "RemoveContainer" containerID="88362a3017ab34debba5439c5154f5baa8dd1d50dfff9561919224fe44fb2426" Sep 29 11:28:04 crc kubenswrapper[4991]: E0929 11:28:04.957060 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88362a3017ab34debba5439c5154f5baa8dd1d50dfff9561919224fe44fb2426\": container with ID starting with 88362a3017ab34debba5439c5154f5baa8dd1d50dfff9561919224fe44fb2426 not found: ID does not exist" containerID="88362a3017ab34debba5439c5154f5baa8dd1d50dfff9561919224fe44fb2426" Sep 29 11:28:04 crc kubenswrapper[4991]: I0929 11:28:04.957083 4991 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88362a3017ab34debba5439c5154f5baa8dd1d50dfff9561919224fe44fb2426"} err="failed to get container status \"88362a3017ab34debba5439c5154f5baa8dd1d50dfff9561919224fe44fb2426\": rpc error: code = NotFound desc = could not find container \"88362a3017ab34debba5439c5154f5baa8dd1d50dfff9561919224fe44fb2426\": container with ID starting with 88362a3017ab34debba5439c5154f5baa8dd1d50dfff9561919224fe44fb2426 not found: ID does not exist" Sep 29 11:28:05 crc kubenswrapper[4991]: I0929 11:28:05.182131 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v5c9d"] Sep 29 11:28:05 crc kubenswrapper[4991]: I0929 11:28:05.192210 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v5c9d"] Sep 29 11:28:06 crc kubenswrapper[4991]: I0929 11:28:06.941030 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="520cd9f5-7c9d-4298-91b5-8a84e7682cfd" path="/var/lib/kubelet/pods/520cd9f5-7c9d-4298-91b5-8a84e7682cfd/volumes" Sep 29 11:28:37 crc kubenswrapper[4991]: I0929 11:28:37.946468 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:28:37 crc kubenswrapper[4991]: I0929 11:28:37.947110 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:29:07 crc kubenswrapper[4991]: I0929 11:29:07.947332 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:29:07 crc kubenswrapper[4991]: I0929 11:29:07.947837 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:29:37 crc kubenswrapper[4991]: I0929 11:29:37.946570 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:29:37 crc kubenswrapper[4991]: I0929 11:29:37.947194 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:29:37 crc kubenswrapper[4991]: I0929 11:29:37.947238 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 
Sep 29 11:29:37 crc kubenswrapper[4991]: I0929 11:29:37.948184 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Sep 29 11:29:37 crc kubenswrapper[4991]: I0929 11:29:37.948246 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" gracePeriod=600
Sep 29 11:29:38 crc kubenswrapper[4991]: E0929 11:29:38.087508 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:29:38 crc kubenswrapper[4991]: I0929 11:29:38.969081 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" exitCode=0
Sep 29 11:29:38 crc kubenswrapper[4991]: I0929 11:29:38.969184 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a"}
Sep 29 11:29:38 crc kubenswrapper[4991]: I0929 11:29:38.969457 4991 scope.go:117] "RemoveContainer" containerID="7ab8ac3466dd7238b159a6a209f7b6c5b8422e3add8ac8e18b4c36ae64f8d300"
Sep 29 11:29:38 crc kubenswrapper[4991]: I0929 11:29:38.970398 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a"
Sep 29 11:29:38 crc kubenswrapper[4991]: E0929 11:29:38.971113 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:29:52 crc kubenswrapper[4991]: I0929 11:29:52.926622 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a"
Sep 29 11:29:52 crc kubenswrapper[4991]: E0929 11:29:52.927506 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:30:00 crc kubenswrapper[4991]: I0929 11:30:00.186140 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr"]
Sep 29 11:30:00 crc kubenswrapper[4991]: E0929 11:30:00.187213 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="520cd9f5-7c9d-4298-91b5-8a84e7682cfd" containerName="extract-utilities"
Sep 29 11:30:00 crc kubenswrapper[4991]: I0929 11:30:00.187229 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="520cd9f5-7c9d-4298-91b5-8a84e7682cfd" containerName="extract-utilities"
Sep 29 11:30:00 crc kubenswrapper[4991]: E0929 11:30:00.187278 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="520cd9f5-7c9d-4298-91b5-8a84e7682cfd" containerName="extract-content"
Sep 29 11:30:00 crc kubenswrapper[4991]: I0929 11:30:00.187284 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="520cd9f5-7c9d-4298-91b5-8a84e7682cfd" containerName="extract-content"
Sep 29 11:30:00 crc kubenswrapper[4991]: E0929 11:30:00.187309 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="520cd9f5-7c9d-4298-91b5-8a84e7682cfd" containerName="registry-server"
Sep 29 11:30:00 crc kubenswrapper[4991]: I0929 11:30:00.187316 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="520cd9f5-7c9d-4298-91b5-8a84e7682cfd" containerName="registry-server"
Sep 29 11:30:00 crc kubenswrapper[4991]: I0929 11:30:00.187536 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="520cd9f5-7c9d-4298-91b5-8a84e7682cfd" containerName="registry-server"
Sep 29 11:30:00 crc kubenswrapper[4991]: I0929 11:30:00.188400 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr"
Sep 29 11:30:00 crc kubenswrapper[4991]: I0929 11:30:00.191244 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Sep 29 11:30:00 crc kubenswrapper[4991]: I0929 11:30:00.192825 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Sep 29 11:30:00 crc kubenswrapper[4991]: I0929 11:30:00.199732 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr"]
Sep 29 11:30:00 crc kubenswrapper[4991]: I0929 11:30:00.289014 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78r4p\" (UniqueName: \"kubernetes.io/projected/e5d003c7-668f-4b1a-8b82-4fd52ee26974-kube-api-access-78r4p\") pod \"collect-profiles-29319090-45cxr\" (UID: \"e5d003c7-668f-4b1a-8b82-4fd52ee26974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr"
Sep 29 11:30:00 crc kubenswrapper[4991]: I0929 11:30:00.289077 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5d003c7-668f-4b1a-8b82-4fd52ee26974-config-volume\") pod \"collect-profiles-29319090-45cxr\" (UID: \"e5d003c7-668f-4b1a-8b82-4fd52ee26974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr"
Sep 29 11:30:00 crc kubenswrapper[4991]: I0929 11:30:00.289215 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5d003c7-668f-4b1a-8b82-4fd52ee26974-secret-volume\") pod \"collect-profiles-29319090-45cxr\" (UID: \"e5d003c7-668f-4b1a-8b82-4fd52ee26974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr"
Sep 29 11:30:00 crc kubenswrapper[4991]: I0929 11:30:00.391838 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78r4p\" (UniqueName: \"kubernetes.io/projected/e5d003c7-668f-4b1a-8b82-4fd52ee26974-kube-api-access-78r4p\") pod \"collect-profiles-29319090-45cxr\" (UID: \"e5d003c7-668f-4b1a-8b82-4fd52ee26974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr"
Sep 29 11:30:00 crc kubenswrapper[4991]: I0929 11:30:00.391894 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5d003c7-668f-4b1a-8b82-4fd52ee26974-config-volume\") pod \"collect-profiles-29319090-45cxr\" (UID: \"e5d003c7-668f-4b1a-8b82-4fd52ee26974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr"
Sep 29 11:30:00 crc kubenswrapper[4991]: I0929 11:30:00.392001 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5d003c7-668f-4b1a-8b82-4fd52ee26974-secret-volume\") pod \"collect-profiles-29319090-45cxr\" (UID: \"e5d003c7-668f-4b1a-8b82-4fd52ee26974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr"
Sep 29 11:30:00 crc kubenswrapper[4991]: I0929 11:30:00.393189 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5d003c7-668f-4b1a-8b82-4fd52ee26974-config-volume\") pod \"collect-profiles-29319090-45cxr\" (UID: \"e5d003c7-668f-4b1a-8b82-4fd52ee26974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr"
Sep 29 11:30:00 crc kubenswrapper[4991]: I0929 11:30:00.401110 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5d003c7-668f-4b1a-8b82-4fd52ee26974-secret-volume\") pod \"collect-profiles-29319090-45cxr\" (UID: \"e5d003c7-668f-4b1a-8b82-4fd52ee26974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr"
Sep 29 11:30:00 crc kubenswrapper[4991]: I0929 11:30:00.409902 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78r4p\" (UniqueName: \"kubernetes.io/projected/e5d003c7-668f-4b1a-8b82-4fd52ee26974-kube-api-access-78r4p\") pod \"collect-profiles-29319090-45cxr\" (UID: \"e5d003c7-668f-4b1a-8b82-4fd52ee26974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr"
Sep 29 11:30:00 crc kubenswrapper[4991]: I0929 11:30:00.518766 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr" Sep 29 11:30:01 crc kubenswrapper[4991]: I0929 11:30:01.018065 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr"] Sep 29 11:30:01 crc kubenswrapper[4991]: I0929 11:30:01.226583 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr" event={"ID":"e5d003c7-668f-4b1a-8b82-4fd52ee26974","Type":"ContainerStarted","Data":"e9a793874cca47cab3e634d4fc10b6e1cb364b1905997706fab412233ab30cdc"} Sep 29 11:30:01 crc kubenswrapper[4991]: I0929 11:30:01.227795 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr" event={"ID":"e5d003c7-668f-4b1a-8b82-4fd52ee26974","Type":"ContainerStarted","Data":"9d9043c70d2bb31aed4edf5310d45dcf1073759d182d21b3ffb867d1aa709b20"} Sep 29 11:30:01 crc kubenswrapper[4991]: I0929 11:30:01.244938 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr" podStartSLOduration=1.24491312 podStartE2EDuration="1.24491312s" podCreationTimestamp="2025-09-29 11:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 11:30:01.241564083 +0000 UTC m=+6737.097492111" watchObservedRunningTime="2025-09-29 11:30:01.24491312 +0000 UTC m=+6737.100841148" Sep 29 11:30:02 crc kubenswrapper[4991]: I0929 11:30:02.238618 4991 generic.go:334] "Generic (PLEG): container finished" podID="e5d003c7-668f-4b1a-8b82-4fd52ee26974" containerID="e9a793874cca47cab3e634d4fc10b6e1cb364b1905997706fab412233ab30cdc" exitCode=0 Sep 29 11:30:02 crc kubenswrapper[4991]: I0929 11:30:02.239140 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr" event={"ID":"e5d003c7-668f-4b1a-8b82-4fd52ee26974","Type":"ContainerDied","Data":"e9a793874cca47cab3e634d4fc10b6e1cb364b1905997706fab412233ab30cdc"} Sep 29 11:30:03 crc kubenswrapper[4991]: I0929 11:30:03.721425 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr" Sep 29 11:30:03 crc kubenswrapper[4991]: I0929 11:30:03.773355 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5d003c7-668f-4b1a-8b82-4fd52ee26974-config-volume\") pod \"e5d003c7-668f-4b1a-8b82-4fd52ee26974\" (UID: \"e5d003c7-668f-4b1a-8b82-4fd52ee26974\") " Sep 29 11:30:03 crc kubenswrapper[4991]: I0929 11:30:03.773586 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5d003c7-668f-4b1a-8b82-4fd52ee26974-secret-volume\") pod \"e5d003c7-668f-4b1a-8b82-4fd52ee26974\" (UID: \"e5d003c7-668f-4b1a-8b82-4fd52ee26974\") " Sep 29 11:30:03 crc kubenswrapper[4991]: I0929 11:30:03.773660 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78r4p\" (UniqueName: \"kubernetes.io/projected/e5d003c7-668f-4b1a-8b82-4fd52ee26974-kube-api-access-78r4p\") pod \"e5d003c7-668f-4b1a-8b82-4fd52ee26974\" (UID: \"e5d003c7-668f-4b1a-8b82-4fd52ee26974\") " Sep 29 11:30:03 crc kubenswrapper[4991]: I0929 11:30:03.774214 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5d003c7-668f-4b1a-8b82-4fd52ee26974-config-volume" (OuterVolumeSpecName: "config-volume") pod "e5d003c7-668f-4b1a-8b82-4fd52ee26974" (UID: "e5d003c7-668f-4b1a-8b82-4fd52ee26974"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 11:30:03 crc kubenswrapper[4991]: I0929 11:30:03.785957 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5d003c7-668f-4b1a-8b82-4fd52ee26974-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e5d003c7-668f-4b1a-8b82-4fd52ee26974" (UID: "e5d003c7-668f-4b1a-8b82-4fd52ee26974"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 11:30:03 crc kubenswrapper[4991]: I0929 11:30:03.786190 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5d003c7-668f-4b1a-8b82-4fd52ee26974-kube-api-access-78r4p" (OuterVolumeSpecName: "kube-api-access-78r4p") pod "e5d003c7-668f-4b1a-8b82-4fd52ee26974" (UID: "e5d003c7-668f-4b1a-8b82-4fd52ee26974"). InnerVolumeSpecName "kube-api-access-78r4p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:30:03 crc kubenswrapper[4991]: I0929 11:30:03.876278 4991 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5d003c7-668f-4b1a-8b82-4fd52ee26974-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 29 11:30:03 crc kubenswrapper[4991]: I0929 11:30:03.876320 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78r4p\" (UniqueName: \"kubernetes.io/projected/e5d003c7-668f-4b1a-8b82-4fd52ee26974-kube-api-access-78r4p\") on node \"crc\" DevicePath \"\"" Sep 29 11:30:03 crc kubenswrapper[4991]: I0929 11:30:03.876333 4991 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5d003c7-668f-4b1a-8b82-4fd52ee26974-config-volume\") on node \"crc\" DevicePath \"\"" Sep 29 11:30:03 crc kubenswrapper[4991]: I0929 11:30:03.926891 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:30:03 crc kubenswrapper[4991]: E0929 11:30:03.927172 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:30:04 crc kubenswrapper[4991]: I0929 11:30:04.269558 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr" event={"ID":"e5d003c7-668f-4b1a-8b82-4fd52ee26974","Type":"ContainerDied","Data":"9d9043c70d2bb31aed4edf5310d45dcf1073759d182d21b3ffb867d1aa709b20"} Sep 29 11:30:04 crc kubenswrapper[4991]: I0929 11:30:04.269807 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d9043c70d2bb31aed4edf5310d45dcf1073759d182d21b3ffb867d1aa709b20" Sep 29 11:30:04 crc kubenswrapper[4991]: I0929 11:30:04.269906 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr" Sep 29 11:30:04 crc kubenswrapper[4991]: I0929 11:30:04.334474 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc"] Sep 29 11:30:04 crc kubenswrapper[4991]: I0929 11:30:04.350394 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319045-wh8bc"] Sep 29 11:30:04 crc kubenswrapper[4991]: I0929 11:30:04.941895 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccf30f60-fb14-43ee-b53f-29f9a37f45a3" path="/var/lib/kubelet/pods/ccf30f60-fb14-43ee-b53f-29f9a37f45a3/volumes" Sep 29 11:30:16 crc kubenswrapper[4991]: I0929 11:30:16.932001 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:30:16 crc kubenswrapper[4991]: E0929 11:30:16.932964 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:30:26 crc kubenswrapper[4991]: I0929 11:30:26.076492 4991 scope.go:117] "RemoveContainer" containerID="41b4b3bf31b5ce3ce5e742dffb5ea228a07c7d063e93dc46886ac7e565d78ea5" Sep 29 11:30:27 crc kubenswrapper[4991]: I0929 11:30:27.926460 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:30:27 crc kubenswrapper[4991]: E0929 11:30:27.927308 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:30:40 crc kubenswrapper[4991]: I0929 11:30:40.927290 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:30:40 crc kubenswrapper[4991]: E0929 11:30:40.928229 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:30:53 crc kubenswrapper[4991]: I0929 11:30:53.926592 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:30:53 crc kubenswrapper[4991]: E0929 11:30:53.927389 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:31:07 crc kubenswrapper[4991]: I0929 11:31:07.927460 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:31:07 crc kubenswrapper[4991]: E0929 11:31:07.928637 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:31:20 crc kubenswrapper[4991]: I0929 11:31:20.927269 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:31:20 crc kubenswrapper[4991]: E0929 11:31:20.928641 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:31:35 crc kubenswrapper[4991]: I0929 11:31:35.927013 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:31:35 crc kubenswrapper[4991]: E0929 11:31:35.927835 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:31:50 crc kubenswrapper[4991]: I0929 11:31:50.927105 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:31:50 crc kubenswrapper[4991]: E0929 11:31:50.928745 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:32:03 crc kubenswrapper[4991]: I0929 11:32:03.926118 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:32:03 crc kubenswrapper[4991]: E0929 11:32:03.927007 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:32:17 crc kubenswrapper[4991]: I0929 11:32:17.927653 4991 
scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:32:17 crc kubenswrapper[4991]: E0929 11:32:17.928817 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:32:29 crc kubenswrapper[4991]: I0929 11:32:29.926508 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:32:29 crc kubenswrapper[4991]: E0929 11:32:29.928518 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:32:40 crc kubenswrapper[4991]: I0929 11:32:40.926288 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:32:40 crc kubenswrapper[4991]: E0929 11:32:40.927176 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:32:51 crc kubenswrapper[4991]: I0929 11:32:51.926372 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:32:51 crc kubenswrapper[4991]: E0929 11:32:51.927150 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:33:06 crc kubenswrapper[4991]: I0929 11:33:06.928164 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:33:06 crc kubenswrapper[4991]: E0929 11:33:06.929325 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:33:17 crc kubenswrapper[4991]: I0929 11:33:17.926800 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:33:17 crc kubenswrapper[4991]: E0929 11:33:17.927793 4991 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:33:29 crc kubenswrapper[4991]: I0929 11:33:29.926418 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:33:29 crc kubenswrapper[4991]: E0929 11:33:29.927230 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:33:44 crc kubenswrapper[4991]: I0929 11:33:44.936152 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:33:44 crc kubenswrapper[4991]: E0929 11:33:44.937003 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:33:55 crc kubenswrapper[4991]: I0929 11:33:55.926567 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:33:55 crc kubenswrapper[4991]: E0929 11:33:55.927325 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:34:08 crc kubenswrapper[4991]: I0929 11:34:08.928023 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:34:08 crc kubenswrapper[4991]: E0929 11:34:08.929661 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:34:11 crc kubenswrapper[4991]: I0929 11:34:11.631504 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k99z6"] Sep 29 11:34:11 crc kubenswrapper[4991]: E0929 11:34:11.632469 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d003c7-668f-4b1a-8b82-4fd52ee26974" containerName="collect-profiles" Sep 29 11:34:11 crc kubenswrapper[4991]: I0929 11:34:11.632484 
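[Annotation] The "SyncLoop (PLEG)" entries throughout this log are kubelet consuming pod lifecycle events (ContainerStarted/ContainerDied, emitted for regular containers and sandboxes alike) and reconciling pod state in response. A simplified sketch of that event shape and dispatch, modeled loosely on the logged fields rather than kubelet's actual types:

// pleg_sketch.go - toy model of the lifecycle events seen in the log.
package main

import "fmt"

type PodLifecycleEvent struct {
	ID   string // pod UID
	Type string // e.g. "ContainerStarted", "ContainerDied"
	Data string // container or sandbox ID
}

func handle(ev PodLifecycleEvent) {
	switch ev.Type {
	case "ContainerStarted":
		fmt.Printf("pod %s: container %s started\n", ev.ID, ev.Data)
	case "ContainerDied":
		// in the log, kubelet follows this with "RemoveContainer" for dead containers
		fmt.Printf("pod %s: container %s died\n", ev.ID, ev.Data)
	}
}

func main() {
	// IDs taken from the certified-operators-k99z6 entries below.
	handle(PodLifecycleEvent{
		ID:   "2f71628c-9002-4d1d-b4fa-5a6869b75340",
		Type: "ContainerStarted",
		Data: "6afb221e409f3fb8d476b51caeb7aac0b7d8908729d60f345e383d3614c87f2f",
	})
}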
4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d003c7-668f-4b1a-8b82-4fd52ee26974" containerName="collect-profiles" Sep 29 11:34:11 crc kubenswrapper[4991]: I0929 11:34:11.632839 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5d003c7-668f-4b1a-8b82-4fd52ee26974" containerName="collect-profiles" Sep 29 11:34:11 crc kubenswrapper[4991]: I0929 11:34:11.634765 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k99z6" Sep 29 11:34:11 crc kubenswrapper[4991]: I0929 11:34:11.653746 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k99z6"] Sep 29 11:34:11 crc kubenswrapper[4991]: I0929 11:34:11.727336 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx8t7\" (UniqueName: \"kubernetes.io/projected/2f71628c-9002-4d1d-b4fa-5a6869b75340-kube-api-access-tx8t7\") pod \"certified-operators-k99z6\" (UID: \"2f71628c-9002-4d1d-b4fa-5a6869b75340\") " pod="openshift-marketplace/certified-operators-k99z6" Sep 29 11:34:11 crc kubenswrapper[4991]: I0929 11:34:11.727597 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f71628c-9002-4d1d-b4fa-5a6869b75340-catalog-content\") pod \"certified-operators-k99z6\" (UID: \"2f71628c-9002-4d1d-b4fa-5a6869b75340\") " pod="openshift-marketplace/certified-operators-k99z6" Sep 29 11:34:11 crc kubenswrapper[4991]: I0929 11:34:11.728709 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f71628c-9002-4d1d-b4fa-5a6869b75340-utilities\") pod \"certified-operators-k99z6\" (UID: \"2f71628c-9002-4d1d-b4fa-5a6869b75340\") " pod="openshift-marketplace/certified-operators-k99z6" Sep 29 11:34:11 crc kubenswrapper[4991]: I0929 11:34:11.830687 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx8t7\" (UniqueName: \"kubernetes.io/projected/2f71628c-9002-4d1d-b4fa-5a6869b75340-kube-api-access-tx8t7\") pod \"certified-operators-k99z6\" (UID: \"2f71628c-9002-4d1d-b4fa-5a6869b75340\") " pod="openshift-marketplace/certified-operators-k99z6" Sep 29 11:34:11 crc kubenswrapper[4991]: I0929 11:34:11.830847 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f71628c-9002-4d1d-b4fa-5a6869b75340-catalog-content\") pod \"certified-operators-k99z6\" (UID: \"2f71628c-9002-4d1d-b4fa-5a6869b75340\") " pod="openshift-marketplace/certified-operators-k99z6" Sep 29 11:34:11 crc kubenswrapper[4991]: I0929 11:34:11.831171 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f71628c-9002-4d1d-b4fa-5a6869b75340-utilities\") pod \"certified-operators-k99z6\" (UID: \"2f71628c-9002-4d1d-b4fa-5a6869b75340\") " pod="openshift-marketplace/certified-operators-k99z6" Sep 29 11:34:11 crc kubenswrapper[4991]: I0929 11:34:11.831446 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f71628c-9002-4d1d-b4fa-5a6869b75340-catalog-content\") pod \"certified-operators-k99z6\" (UID: \"2f71628c-9002-4d1d-b4fa-5a6869b75340\") " pod="openshift-marketplace/certified-operators-k99z6" Sep 29 11:34:11 crc 
kubenswrapper[4991]: I0929 11:34:11.831621 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f71628c-9002-4d1d-b4fa-5a6869b75340-utilities\") pod \"certified-operators-k99z6\" (UID: \"2f71628c-9002-4d1d-b4fa-5a6869b75340\") " pod="openshift-marketplace/certified-operators-k99z6" Sep 29 11:34:11 crc kubenswrapper[4991]: I0929 11:34:11.854259 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx8t7\" (UniqueName: \"kubernetes.io/projected/2f71628c-9002-4d1d-b4fa-5a6869b75340-kube-api-access-tx8t7\") pod \"certified-operators-k99z6\" (UID: \"2f71628c-9002-4d1d-b4fa-5a6869b75340\") " pod="openshift-marketplace/certified-operators-k99z6" Sep 29 11:34:11 crc kubenswrapper[4991]: I0929 11:34:11.960782 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k99z6" Sep 29 11:34:12 crc kubenswrapper[4991]: I0929 11:34:12.583555 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k99z6"] Sep 29 11:34:13 crc kubenswrapper[4991]: I0929 11:34:13.151563 4991 generic.go:334] "Generic (PLEG): container finished" podID="2f71628c-9002-4d1d-b4fa-5a6869b75340" containerID="f2d168a382664f158f68cee743e842ac9ec0900b4d67919b9aa889c4b719d683" exitCode=0 Sep 29 11:34:13 crc kubenswrapper[4991]: I0929 11:34:13.151643 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k99z6" event={"ID":"2f71628c-9002-4d1d-b4fa-5a6869b75340","Type":"ContainerDied","Data":"f2d168a382664f158f68cee743e842ac9ec0900b4d67919b9aa889c4b719d683"} Sep 29 11:34:13 crc kubenswrapper[4991]: I0929 11:34:13.153544 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k99z6" event={"ID":"2f71628c-9002-4d1d-b4fa-5a6869b75340","Type":"ContainerStarted","Data":"6afb221e409f3fb8d476b51caeb7aac0b7d8908729d60f345e383d3614c87f2f"} Sep 29 11:34:13 crc kubenswrapper[4991]: I0929 11:34:13.154622 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 29 11:34:15 crc kubenswrapper[4991]: I0929 11:34:15.184870 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k99z6" event={"ID":"2f71628c-9002-4d1d-b4fa-5a6869b75340","Type":"ContainerStarted","Data":"5ed9714785ed2b386acc0164f4e457d74c507f94421939a2bc2a7297f67e3b90"} Sep 29 11:34:16 crc kubenswrapper[4991]: I0929 11:34:16.202527 4991 generic.go:334] "Generic (PLEG): container finished" podID="2f71628c-9002-4d1d-b4fa-5a6869b75340" containerID="5ed9714785ed2b386acc0164f4e457d74c507f94421939a2bc2a7297f67e3b90" exitCode=0 Sep 29 11:34:16 crc kubenswrapper[4991]: I0929 11:34:16.202614 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k99z6" event={"ID":"2f71628c-9002-4d1d-b4fa-5a6869b75340","Type":"ContainerDied","Data":"5ed9714785ed2b386acc0164f4e457d74c507f94421939a2bc2a7297f67e3b90"} Sep 29 11:34:17 crc kubenswrapper[4991]: I0929 11:34:17.217874 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k99z6" event={"ID":"2f71628c-9002-4d1d-b4fa-5a6869b75340","Type":"ContainerStarted","Data":"1841e849f6655d7746e4503884a0de175fe522b82cf8fbd47d56036d8cf651e8"} Sep 29 11:34:17 crc kubenswrapper[4991]: I0929 11:34:17.246895 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-k99z6" podStartSLOduration=2.520998087 podStartE2EDuration="6.246870917s" podCreationTimestamp="2025-09-29 11:34:11 +0000 UTC" firstStartedPulling="2025-09-29 11:34:13.15435081 +0000 UTC m=+6989.010278838" lastFinishedPulling="2025-09-29 11:34:16.88022364 +0000 UTC m=+6992.736151668" observedRunningTime="2025-09-29 11:34:17.238107659 +0000 UTC m=+6993.094035697" watchObservedRunningTime="2025-09-29 11:34:17.246870917 +0000 UTC m=+6993.102798945" Sep 29 11:34:21 crc kubenswrapper[4991]: I0929 11:34:21.961284 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k99z6" Sep 29 11:34:21 crc kubenswrapper[4991]: I0929 11:34:21.961841 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k99z6" Sep 29 11:34:22 crc kubenswrapper[4991]: I0929 11:34:22.018802 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k99z6" Sep 29 11:34:22 crc kubenswrapper[4991]: I0929 11:34:22.324622 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k99z6" Sep 29 11:34:22 crc kubenswrapper[4991]: I0929 11:34:22.379454 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k99z6"] Sep 29 11:34:22 crc kubenswrapper[4991]: I0929 11:34:22.926649 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:34:22 crc kubenswrapper[4991]: E0929 11:34:22.927010 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:34:24 crc kubenswrapper[4991]: I0929 11:34:24.296212 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k99z6" podUID="2f71628c-9002-4d1d-b4fa-5a6869b75340" containerName="registry-server" containerID="cri-o://1841e849f6655d7746e4503884a0de175fe522b82cf8fbd47d56036d8cf651e8" gracePeriod=2 Sep 29 11:34:24 crc kubenswrapper[4991]: I0929 11:34:24.695055 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rjs7h"] Sep 29 11:34:24 crc kubenswrapper[4991]: I0929 11:34:24.699227 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjs7h" Sep 29 11:34:24 crc kubenswrapper[4991]: I0929 11:34:24.711264 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjs7h"] Sep 29 11:34:24 crc kubenswrapper[4991]: I0929 11:34:24.878734 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a02d3243-6049-4942-9127-32cfe23a6837-catalog-content\") pod \"redhat-marketplace-rjs7h\" (UID: \"a02d3243-6049-4942-9127-32cfe23a6837\") " pod="openshift-marketplace/redhat-marketplace-rjs7h" Sep 29 11:34:24 crc kubenswrapper[4991]: I0929 11:34:24.878817 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a02d3243-6049-4942-9127-32cfe23a6837-utilities\") pod \"redhat-marketplace-rjs7h\" (UID: \"a02d3243-6049-4942-9127-32cfe23a6837\") " pod="openshift-marketplace/redhat-marketplace-rjs7h" Sep 29 11:34:24 crc kubenswrapper[4991]: I0929 11:34:24.878868 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhlqg\" (UniqueName: \"kubernetes.io/projected/a02d3243-6049-4942-9127-32cfe23a6837-kube-api-access-xhlqg\") pod \"redhat-marketplace-rjs7h\" (UID: \"a02d3243-6049-4942-9127-32cfe23a6837\") " pod="openshift-marketplace/redhat-marketplace-rjs7h" Sep 29 11:34:24 crc kubenswrapper[4991]: I0929 11:34:24.982078 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a02d3243-6049-4942-9127-32cfe23a6837-catalog-content\") pod \"redhat-marketplace-rjs7h\" (UID: \"a02d3243-6049-4942-9127-32cfe23a6837\") " pod="openshift-marketplace/redhat-marketplace-rjs7h" Sep 29 11:34:24 crc kubenswrapper[4991]: I0929 11:34:24.982162 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a02d3243-6049-4942-9127-32cfe23a6837-utilities\") pod \"redhat-marketplace-rjs7h\" (UID: \"a02d3243-6049-4942-9127-32cfe23a6837\") " pod="openshift-marketplace/redhat-marketplace-rjs7h" Sep 29 11:34:24 crc kubenswrapper[4991]: I0929 11:34:24.982212 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhlqg\" (UniqueName: \"kubernetes.io/projected/a02d3243-6049-4942-9127-32cfe23a6837-kube-api-access-xhlqg\") pod \"redhat-marketplace-rjs7h\" (UID: \"a02d3243-6049-4942-9127-32cfe23a6837\") " pod="openshift-marketplace/redhat-marketplace-rjs7h" Sep 29 11:34:24 crc kubenswrapper[4991]: I0929 11:34:24.982711 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a02d3243-6049-4942-9127-32cfe23a6837-utilities\") pod \"redhat-marketplace-rjs7h\" (UID: \"a02d3243-6049-4942-9127-32cfe23a6837\") " pod="openshift-marketplace/redhat-marketplace-rjs7h" Sep 29 11:34:24 crc kubenswrapper[4991]: I0929 11:34:24.982726 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a02d3243-6049-4942-9127-32cfe23a6837-catalog-content\") pod \"redhat-marketplace-rjs7h\" (UID: \"a02d3243-6049-4942-9127-32cfe23a6837\") " pod="openshift-marketplace/redhat-marketplace-rjs7h" Sep 29 11:34:25 crc kubenswrapper[4991]: I0929 11:34:25.005819 4991 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xhlqg\" (UniqueName: \"kubernetes.io/projected/a02d3243-6049-4942-9127-32cfe23a6837-kube-api-access-xhlqg\") pod \"redhat-marketplace-rjs7h\" (UID: \"a02d3243-6049-4942-9127-32cfe23a6837\") " pod="openshift-marketplace/redhat-marketplace-rjs7h" Sep 29 11:34:25 crc kubenswrapper[4991]: I0929 11:34:25.029405 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjs7h" Sep 29 11:34:25 crc kubenswrapper[4991]: I0929 11:34:25.309860 4991 generic.go:334] "Generic (PLEG): container finished" podID="2f71628c-9002-4d1d-b4fa-5a6869b75340" containerID="1841e849f6655d7746e4503884a0de175fe522b82cf8fbd47d56036d8cf651e8" exitCode=0 Sep 29 11:34:25 crc kubenswrapper[4991]: I0929 11:34:25.310030 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k99z6" event={"ID":"2f71628c-9002-4d1d-b4fa-5a6869b75340","Type":"ContainerDied","Data":"1841e849f6655d7746e4503884a0de175fe522b82cf8fbd47d56036d8cf651e8"} Sep 29 11:34:25 crc kubenswrapper[4991]: I0929 11:34:25.404442 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k99z6" Sep 29 11:34:25 crc kubenswrapper[4991]: I0929 11:34:25.495827 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f71628c-9002-4d1d-b4fa-5a6869b75340-utilities\") pod \"2f71628c-9002-4d1d-b4fa-5a6869b75340\" (UID: \"2f71628c-9002-4d1d-b4fa-5a6869b75340\") " Sep 29 11:34:25 crc kubenswrapper[4991]: I0929 11:34:25.496128 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx8t7\" (UniqueName: \"kubernetes.io/projected/2f71628c-9002-4d1d-b4fa-5a6869b75340-kube-api-access-tx8t7\") pod \"2f71628c-9002-4d1d-b4fa-5a6869b75340\" (UID: \"2f71628c-9002-4d1d-b4fa-5a6869b75340\") " Sep 29 11:34:25 crc kubenswrapper[4991]: I0929 11:34:25.496195 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f71628c-9002-4d1d-b4fa-5a6869b75340-catalog-content\") pod \"2f71628c-9002-4d1d-b4fa-5a6869b75340\" (UID: \"2f71628c-9002-4d1d-b4fa-5a6869b75340\") " Sep 29 11:34:25 crc kubenswrapper[4991]: I0929 11:34:25.497697 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f71628c-9002-4d1d-b4fa-5a6869b75340-utilities" (OuterVolumeSpecName: "utilities") pod "2f71628c-9002-4d1d-b4fa-5a6869b75340" (UID: "2f71628c-9002-4d1d-b4fa-5a6869b75340"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:34:25 crc kubenswrapper[4991]: I0929 11:34:25.500871 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f71628c-9002-4d1d-b4fa-5a6869b75340-kube-api-access-tx8t7" (OuterVolumeSpecName: "kube-api-access-tx8t7") pod "2f71628c-9002-4d1d-b4fa-5a6869b75340" (UID: "2f71628c-9002-4d1d-b4fa-5a6869b75340"). InnerVolumeSpecName "kube-api-access-tx8t7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:34:25 crc kubenswrapper[4991]: I0929 11:34:25.551284 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f71628c-9002-4d1d-b4fa-5a6869b75340-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f71628c-9002-4d1d-b4fa-5a6869b75340" (UID: "2f71628c-9002-4d1d-b4fa-5a6869b75340"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:34:25 crc kubenswrapper[4991]: I0929 11:34:25.581255 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjs7h"] Sep 29 11:34:25 crc kubenswrapper[4991]: W0929 11:34:25.583215 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda02d3243_6049_4942_9127_32cfe23a6837.slice/crio-f011a225c1aedc46f7b36012d9eb9434b99d8f68810cf98c2ad0a290ec7b8c83 WatchSource:0}: Error finding container f011a225c1aedc46f7b36012d9eb9434b99d8f68810cf98c2ad0a290ec7b8c83: Status 404 returned error can't find the container with id f011a225c1aedc46f7b36012d9eb9434b99d8f68810cf98c2ad0a290ec7b8c83 Sep 29 11:34:25 crc kubenswrapper[4991]: I0929 11:34:25.598649 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f71628c-9002-4d1d-b4fa-5a6869b75340-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 11:34:25 crc kubenswrapper[4991]: I0929 11:34:25.598679 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f71628c-9002-4d1d-b4fa-5a6869b75340-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 11:34:25 crc kubenswrapper[4991]: I0929 11:34:25.598691 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx8t7\" (UniqueName: \"kubernetes.io/projected/2f71628c-9002-4d1d-b4fa-5a6869b75340-kube-api-access-tx8t7\") on node \"crc\" DevicePath \"\"" Sep 29 11:34:26 crc kubenswrapper[4991]: I0929 11:34:26.328232 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k99z6" event={"ID":"2f71628c-9002-4d1d-b4fa-5a6869b75340","Type":"ContainerDied","Data":"6afb221e409f3fb8d476b51caeb7aac0b7d8908729d60f345e383d3614c87f2f"} Sep 29 11:34:26 crc kubenswrapper[4991]: I0929 11:34:26.328272 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k99z6" Sep 29 11:34:26 crc kubenswrapper[4991]: I0929 11:34:26.328650 4991 scope.go:117] "RemoveContainer" containerID="1841e849f6655d7746e4503884a0de175fe522b82cf8fbd47d56036d8cf651e8" Sep 29 11:34:26 crc kubenswrapper[4991]: I0929 11:34:26.330674 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjs7h" event={"ID":"a02d3243-6049-4942-9127-32cfe23a6837","Type":"ContainerStarted","Data":"f011a225c1aedc46f7b36012d9eb9434b99d8f68810cf98c2ad0a290ec7b8c83"} Sep 29 11:34:26 crc kubenswrapper[4991]: I0929 11:34:26.380172 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k99z6"] Sep 29 11:34:26 crc kubenswrapper[4991]: I0929 11:34:26.393143 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k99z6"] Sep 29 11:34:26 crc kubenswrapper[4991]: I0929 11:34:26.423629 4991 scope.go:117] "RemoveContainer" containerID="5ed9714785ed2b386acc0164f4e457d74c507f94421939a2bc2a7297f67e3b90" Sep 29 11:34:26 crc kubenswrapper[4991]: I0929 11:34:26.450426 4991 scope.go:117] "RemoveContainer" containerID="f2d168a382664f158f68cee743e842ac9ec0900b4d67919b9aa889c4b719d683" Sep 29 11:34:26 crc kubenswrapper[4991]: I0929 11:34:26.948229 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f71628c-9002-4d1d-b4fa-5a6869b75340" path="/var/lib/kubelet/pods/2f71628c-9002-4d1d-b4fa-5a6869b75340/volumes" Sep 29 11:34:27 crc kubenswrapper[4991]: I0929 11:34:27.346389 4991 generic.go:334] "Generic (PLEG): container finished" podID="a02d3243-6049-4942-9127-32cfe23a6837" containerID="da003e1da51f6654d24b5e2146eec27acd275f5c62e0e62aee174349d79de73f" exitCode=0 Sep 29 11:34:27 crc kubenswrapper[4991]: I0929 11:34:27.346451 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjs7h" event={"ID":"a02d3243-6049-4942-9127-32cfe23a6837","Type":"ContainerDied","Data":"da003e1da51f6654d24b5e2146eec27acd275f5c62e0e62aee174349d79de73f"} Sep 29 11:34:30 crc kubenswrapper[4991]: I0929 11:34:30.388218 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjs7h" event={"ID":"a02d3243-6049-4942-9127-32cfe23a6837","Type":"ContainerStarted","Data":"c611c60a81c08e39fe275b4f48e59e4ca37546045b10f72c96b03680c1b815df"} Sep 29 11:34:31 crc kubenswrapper[4991]: I0929 11:34:31.404704 4991 generic.go:334] "Generic (PLEG): container finished" podID="a02d3243-6049-4942-9127-32cfe23a6837" containerID="c611c60a81c08e39fe275b4f48e59e4ca37546045b10f72c96b03680c1b815df" exitCode=0 Sep 29 11:34:31 crc kubenswrapper[4991]: I0929 11:34:31.404834 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjs7h" event={"ID":"a02d3243-6049-4942-9127-32cfe23a6837","Type":"ContainerDied","Data":"c611c60a81c08e39fe275b4f48e59e4ca37546045b10f72c96b03680c1b815df"} Sep 29 11:34:34 crc kubenswrapper[4991]: I0929 11:34:34.445246 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjs7h" event={"ID":"a02d3243-6049-4942-9127-32cfe23a6837","Type":"ContainerStarted","Data":"72d0660e4a2023cf422486634c75f27bda6430f484417342a01f092133d6bb0c"} Sep 29 11:34:34 crc kubenswrapper[4991]: I0929 11:34:34.473674 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rjs7h" podStartSLOduration=4.131401016 
podStartE2EDuration="10.473655022s" podCreationTimestamp="2025-09-29 11:34:24 +0000 UTC" firstStartedPulling="2025-09-29 11:34:27.349442892 +0000 UTC m=+7003.205370920" lastFinishedPulling="2025-09-29 11:34:33.691696898 +0000 UTC m=+7009.547624926" observedRunningTime="2025-09-29 11:34:34.465784588 +0000 UTC m=+7010.321712626" watchObservedRunningTime="2025-09-29 11:34:34.473655022 +0000 UTC m=+7010.329583050" Sep 29 11:34:34 crc kubenswrapper[4991]: I0929 11:34:34.937217 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:34:34 crc kubenswrapper[4991]: E0929 11:34:34.937819 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:34:35 crc kubenswrapper[4991]: I0929 11:34:35.029526 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rjs7h" Sep 29 11:34:35 crc kubenswrapper[4991]: I0929 11:34:35.029576 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rjs7h" Sep 29 11:34:35 crc kubenswrapper[4991]: I0929 11:34:35.085636 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rjs7h" Sep 29 11:34:45 crc kubenswrapper[4991]: I0929 11:34:45.085476 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rjs7h" Sep 29 11:34:45 crc kubenswrapper[4991]: I0929 11:34:45.141393 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjs7h"] Sep 29 11:34:45 crc kubenswrapper[4991]: I0929 11:34:45.602165 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rjs7h" podUID="a02d3243-6049-4942-9127-32cfe23a6837" containerName="registry-server" containerID="cri-o://72d0660e4a2023cf422486634c75f27bda6430f484417342a01f092133d6bb0c" gracePeriod=2 Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.184207 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjs7h" Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.246229 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a02d3243-6049-4942-9127-32cfe23a6837-catalog-content\") pod \"a02d3243-6049-4942-9127-32cfe23a6837\" (UID: \"a02d3243-6049-4942-9127-32cfe23a6837\") " Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.246401 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a02d3243-6049-4942-9127-32cfe23a6837-utilities\") pod \"a02d3243-6049-4942-9127-32cfe23a6837\" (UID: \"a02d3243-6049-4942-9127-32cfe23a6837\") " Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.246503 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhlqg\" (UniqueName: \"kubernetes.io/projected/a02d3243-6049-4942-9127-32cfe23a6837-kube-api-access-xhlqg\") pod \"a02d3243-6049-4942-9127-32cfe23a6837\" (UID: \"a02d3243-6049-4942-9127-32cfe23a6837\") " Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.248917 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a02d3243-6049-4942-9127-32cfe23a6837-utilities" (OuterVolumeSpecName: "utilities") pod "a02d3243-6049-4942-9127-32cfe23a6837" (UID: "a02d3243-6049-4942-9127-32cfe23a6837"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.252880 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a02d3243-6049-4942-9127-32cfe23a6837-kube-api-access-xhlqg" (OuterVolumeSpecName: "kube-api-access-xhlqg") pod "a02d3243-6049-4942-9127-32cfe23a6837" (UID: "a02d3243-6049-4942-9127-32cfe23a6837"). InnerVolumeSpecName "kube-api-access-xhlqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.263945 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a02d3243-6049-4942-9127-32cfe23a6837-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a02d3243-6049-4942-9127-32cfe23a6837" (UID: "a02d3243-6049-4942-9127-32cfe23a6837"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.349128 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhlqg\" (UniqueName: \"kubernetes.io/projected/a02d3243-6049-4942-9127-32cfe23a6837-kube-api-access-xhlqg\") on node \"crc\" DevicePath \"\"" Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.349155 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a02d3243-6049-4942-9127-32cfe23a6837-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.349165 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a02d3243-6049-4942-9127-32cfe23a6837-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.614997 4991 generic.go:334] "Generic (PLEG): container finished" podID="a02d3243-6049-4942-9127-32cfe23a6837" containerID="72d0660e4a2023cf422486634c75f27bda6430f484417342a01f092133d6bb0c" exitCode=0 Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.615047 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjs7h" event={"ID":"a02d3243-6049-4942-9127-32cfe23a6837","Type":"ContainerDied","Data":"72d0660e4a2023cf422486634c75f27bda6430f484417342a01f092133d6bb0c"} Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.615076 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjs7h" event={"ID":"a02d3243-6049-4942-9127-32cfe23a6837","Type":"ContainerDied","Data":"f011a225c1aedc46f7b36012d9eb9434b99d8f68810cf98c2ad0a290ec7b8c83"} Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.615098 4991 scope.go:117] "RemoveContainer" containerID="72d0660e4a2023cf422486634c75f27bda6430f484417342a01f092133d6bb0c" Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.615111 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjs7h" Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.651757 4991 scope.go:117] "RemoveContainer" containerID="c611c60a81c08e39fe275b4f48e59e4ca37546045b10f72c96b03680c1b815df" Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.653907 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjs7h"] Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.666031 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjs7h"] Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.678165 4991 scope.go:117] "RemoveContainer" containerID="da003e1da51f6654d24b5e2146eec27acd275f5c62e0e62aee174349d79de73f" Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.738881 4991 scope.go:117] "RemoveContainer" containerID="72d0660e4a2023cf422486634c75f27bda6430f484417342a01f092133d6bb0c" Sep 29 11:34:46 crc kubenswrapper[4991]: E0929 11:34:46.739429 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72d0660e4a2023cf422486634c75f27bda6430f484417342a01f092133d6bb0c\": container with ID starting with 72d0660e4a2023cf422486634c75f27bda6430f484417342a01f092133d6bb0c not found: ID does not exist" containerID="72d0660e4a2023cf422486634c75f27bda6430f484417342a01f092133d6bb0c" Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.739475 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72d0660e4a2023cf422486634c75f27bda6430f484417342a01f092133d6bb0c"} err="failed to get container status \"72d0660e4a2023cf422486634c75f27bda6430f484417342a01f092133d6bb0c\": rpc error: code = NotFound desc = could not find container \"72d0660e4a2023cf422486634c75f27bda6430f484417342a01f092133d6bb0c\": container with ID starting with 72d0660e4a2023cf422486634c75f27bda6430f484417342a01f092133d6bb0c not found: ID does not exist" Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.739503 4991 scope.go:117] "RemoveContainer" containerID="c611c60a81c08e39fe275b4f48e59e4ca37546045b10f72c96b03680c1b815df" Sep 29 11:34:46 crc kubenswrapper[4991]: E0929 11:34:46.739747 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c611c60a81c08e39fe275b4f48e59e4ca37546045b10f72c96b03680c1b815df\": container with ID starting with c611c60a81c08e39fe275b4f48e59e4ca37546045b10f72c96b03680c1b815df not found: ID does not exist" containerID="c611c60a81c08e39fe275b4f48e59e4ca37546045b10f72c96b03680c1b815df" Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.739783 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c611c60a81c08e39fe275b4f48e59e4ca37546045b10f72c96b03680c1b815df"} err="failed to get container status \"c611c60a81c08e39fe275b4f48e59e4ca37546045b10f72c96b03680c1b815df\": rpc error: code = NotFound desc = could not find container \"c611c60a81c08e39fe275b4f48e59e4ca37546045b10f72c96b03680c1b815df\": container with ID starting with c611c60a81c08e39fe275b4f48e59e4ca37546045b10f72c96b03680c1b815df not found: ID does not exist" Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.739802 4991 scope.go:117] "RemoveContainer" containerID="da003e1da51f6654d24b5e2146eec27acd275f5c62e0e62aee174349d79de73f" Sep 29 11:34:46 crc kubenswrapper[4991]: E0929 11:34:46.740074 4991 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"da003e1da51f6654d24b5e2146eec27acd275f5c62e0e62aee174349d79de73f\": container with ID starting with da003e1da51f6654d24b5e2146eec27acd275f5c62e0e62aee174349d79de73f not found: ID does not exist" containerID="da003e1da51f6654d24b5e2146eec27acd275f5c62e0e62aee174349d79de73f" Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.740106 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da003e1da51f6654d24b5e2146eec27acd275f5c62e0e62aee174349d79de73f"} err="failed to get container status \"da003e1da51f6654d24b5e2146eec27acd275f5c62e0e62aee174349d79de73f\": rpc error: code = NotFound desc = could not find container \"da003e1da51f6654d24b5e2146eec27acd275f5c62e0e62aee174349d79de73f\": container with ID starting with da003e1da51f6654d24b5e2146eec27acd275f5c62e0e62aee174349d79de73f not found: ID does not exist" Sep 29 11:34:46 crc kubenswrapper[4991]: E0929 11:34:46.858601 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda02d3243_6049_4942_9127_32cfe23a6837.slice/crio-f011a225c1aedc46f7b36012d9eb9434b99d8f68810cf98c2ad0a290ec7b8c83\": RecentStats: unable to find data in memory cache]" Sep 29 11:34:46 crc kubenswrapper[4991]: I0929 11:34:46.939365 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a02d3243-6049-4942-9127-32cfe23a6837" path="/var/lib/kubelet/pods/a02d3243-6049-4942-9127-32cfe23a6837/volumes" Sep 29 11:34:47 crc kubenswrapper[4991]: I0929 11:34:47.926822 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:34:48 crc kubenswrapper[4991]: I0929 11:34:48.642248 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"a3028cbc84a412a42c2b204f6b1f7d967251888df90e614f215821556de47352"} Sep 29 11:36:55 crc kubenswrapper[4991]: I0929 11:36:55.121929 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kmcdm"] Sep 29 11:36:55 crc kubenswrapper[4991]: E0929 11:36:55.123094 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a02d3243-6049-4942-9127-32cfe23a6837" containerName="registry-server" Sep 29 11:36:55 crc kubenswrapper[4991]: I0929 11:36:55.123124 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="a02d3243-6049-4942-9127-32cfe23a6837" containerName="registry-server" Sep 29 11:36:55 crc kubenswrapper[4991]: E0929 11:36:55.123135 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f71628c-9002-4d1d-b4fa-5a6869b75340" containerName="extract-content" Sep 29 11:36:55 crc kubenswrapper[4991]: I0929 11:36:55.123143 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f71628c-9002-4d1d-b4fa-5a6869b75340" containerName="extract-content" Sep 29 11:36:55 crc kubenswrapper[4991]: E0929 11:36:55.123156 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f71628c-9002-4d1d-b4fa-5a6869b75340" containerName="extract-utilities" Sep 29 11:36:55 crc kubenswrapper[4991]: I0929 11:36:55.123166 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f71628c-9002-4d1d-b4fa-5a6869b75340" containerName="extract-utilities" Sep 29 11:36:55 crc kubenswrapper[4991]: E0929 11:36:55.123197 4991 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f71628c-9002-4d1d-b4fa-5a6869b75340" containerName="registry-server" Sep 29 11:36:55 crc kubenswrapper[4991]: I0929 11:36:55.123207 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f71628c-9002-4d1d-b4fa-5a6869b75340" containerName="registry-server" Sep 29 11:36:55 crc kubenswrapper[4991]: E0929 11:36:55.123219 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a02d3243-6049-4942-9127-32cfe23a6837" containerName="extract-content" Sep 29 11:36:55 crc kubenswrapper[4991]: I0929 11:36:55.123225 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="a02d3243-6049-4942-9127-32cfe23a6837" containerName="extract-content" Sep 29 11:36:55 crc kubenswrapper[4991]: E0929 11:36:55.123260 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a02d3243-6049-4942-9127-32cfe23a6837" containerName="extract-utilities" Sep 29 11:36:55 crc kubenswrapper[4991]: I0929 11:36:55.123269 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="a02d3243-6049-4942-9127-32cfe23a6837" containerName="extract-utilities" Sep 29 11:36:55 crc kubenswrapper[4991]: I0929 11:36:55.124171 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f71628c-9002-4d1d-b4fa-5a6869b75340" containerName="registry-server" Sep 29 11:36:55 crc kubenswrapper[4991]: I0929 11:36:55.124207 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="a02d3243-6049-4942-9127-32cfe23a6837" containerName="registry-server" Sep 29 11:36:55 crc kubenswrapper[4991]: I0929 11:36:55.126651 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kmcdm" Sep 29 11:36:55 crc kubenswrapper[4991]: I0929 11:36:55.151100 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kmcdm"] Sep 29 11:36:55 crc kubenswrapper[4991]: I0929 11:36:55.181311 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346ab2e9-4163-42fe-b95d-7546db1fc305-catalog-content\") pod \"redhat-operators-kmcdm\" (UID: \"346ab2e9-4163-42fe-b95d-7546db1fc305\") " pod="openshift-marketplace/redhat-operators-kmcdm" Sep 29 11:36:55 crc kubenswrapper[4991]: I0929 11:36:55.181386 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346ab2e9-4163-42fe-b95d-7546db1fc305-utilities\") pod \"redhat-operators-kmcdm\" (UID: \"346ab2e9-4163-42fe-b95d-7546db1fc305\") " pod="openshift-marketplace/redhat-operators-kmcdm" Sep 29 11:36:55 crc kubenswrapper[4991]: I0929 11:36:55.181423 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpqmp\" (UniqueName: \"kubernetes.io/projected/346ab2e9-4163-42fe-b95d-7546db1fc305-kube-api-access-jpqmp\") pod \"redhat-operators-kmcdm\" (UID: \"346ab2e9-4163-42fe-b95d-7546db1fc305\") " pod="openshift-marketplace/redhat-operators-kmcdm" Sep 29 11:36:55 crc kubenswrapper[4991]: I0929 11:36:55.284106 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346ab2e9-4163-42fe-b95d-7546db1fc305-catalog-content\") pod \"redhat-operators-kmcdm\" (UID: \"346ab2e9-4163-42fe-b95d-7546db1fc305\") " pod="openshift-marketplace/redhat-operators-kmcdm" Sep 29 11:36:55 crc kubenswrapper[4991]: 
I0929 11:36:55.284218 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346ab2e9-4163-42fe-b95d-7546db1fc305-utilities\") pod \"redhat-operators-kmcdm\" (UID: \"346ab2e9-4163-42fe-b95d-7546db1fc305\") " pod="openshift-marketplace/redhat-operators-kmcdm" Sep 29 11:36:55 crc kubenswrapper[4991]: I0929 11:36:55.284269 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpqmp\" (UniqueName: \"kubernetes.io/projected/346ab2e9-4163-42fe-b95d-7546db1fc305-kube-api-access-jpqmp\") pod \"redhat-operators-kmcdm\" (UID: \"346ab2e9-4163-42fe-b95d-7546db1fc305\") " pod="openshift-marketplace/redhat-operators-kmcdm" Sep 29 11:36:55 crc kubenswrapper[4991]: I0929 11:36:55.284664 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346ab2e9-4163-42fe-b95d-7546db1fc305-catalog-content\") pod \"redhat-operators-kmcdm\" (UID: \"346ab2e9-4163-42fe-b95d-7546db1fc305\") " pod="openshift-marketplace/redhat-operators-kmcdm" Sep 29 11:36:55 crc kubenswrapper[4991]: I0929 11:36:55.284698 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346ab2e9-4163-42fe-b95d-7546db1fc305-utilities\") pod \"redhat-operators-kmcdm\" (UID: \"346ab2e9-4163-42fe-b95d-7546db1fc305\") " pod="openshift-marketplace/redhat-operators-kmcdm" Sep 29 11:36:55 crc kubenswrapper[4991]: I0929 11:36:55.316275 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpqmp\" (UniqueName: \"kubernetes.io/projected/346ab2e9-4163-42fe-b95d-7546db1fc305-kube-api-access-jpqmp\") pod \"redhat-operators-kmcdm\" (UID: \"346ab2e9-4163-42fe-b95d-7546db1fc305\") " pod="openshift-marketplace/redhat-operators-kmcdm" Sep 29 11:36:55 crc kubenswrapper[4991]: I0929 11:36:55.451655 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kmcdm" Sep 29 11:36:55 crc kubenswrapper[4991]: I0929 11:36:55.988415 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kmcdm"] Sep 29 11:36:56 crc kubenswrapper[4991]: I0929 11:36:56.111002 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmcdm" event={"ID":"346ab2e9-4163-42fe-b95d-7546db1fc305","Type":"ContainerStarted","Data":"e71c3bbaac95f06573a551ad7fff308f3d4fa588126ab1af3f3d7b2ebf5c8879"} Sep 29 11:36:57 crc kubenswrapper[4991]: I0929 11:36:57.131011 4991 generic.go:334] "Generic (PLEG): container finished" podID="346ab2e9-4163-42fe-b95d-7546db1fc305" containerID="37b2196dd1d8467f9835743338160781c63906bb73e9b9a50d3fe8990092a143" exitCode=0 Sep 29 11:36:57 crc kubenswrapper[4991]: I0929 11:36:57.131112 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmcdm" event={"ID":"346ab2e9-4163-42fe-b95d-7546db1fc305","Type":"ContainerDied","Data":"37b2196dd1d8467f9835743338160781c63906bb73e9b9a50d3fe8990092a143"} Sep 29 11:36:58 crc kubenswrapper[4991]: I0929 11:36:58.145169 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmcdm" event={"ID":"346ab2e9-4163-42fe-b95d-7546db1fc305","Type":"ContainerStarted","Data":"9b9f9d9134a55adddfe247d1e320fdd2d013573a91e1413b44ed6657f55b35d6"} Sep 29 11:37:00 crc kubenswrapper[4991]: I0929 11:37:00.172199 4991 generic.go:334] "Generic (PLEG): container finished" podID="346ab2e9-4163-42fe-b95d-7546db1fc305" containerID="9b9f9d9134a55adddfe247d1e320fdd2d013573a91e1413b44ed6657f55b35d6" exitCode=0 Sep 29 11:37:00 crc kubenswrapper[4991]: I0929 11:37:00.172310 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmcdm" event={"ID":"346ab2e9-4163-42fe-b95d-7546db1fc305","Type":"ContainerDied","Data":"9b9f9d9134a55adddfe247d1e320fdd2d013573a91e1413b44ed6657f55b35d6"} Sep 29 11:37:05 crc kubenswrapper[4991]: I0929 11:37:05.225634 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmcdm" event={"ID":"346ab2e9-4163-42fe-b95d-7546db1fc305","Type":"ContainerStarted","Data":"e1f6b6d2b0e055f7c87dcfbfb1d4b1f0d0256839d0c0fa8487ee2d6459f045fa"} Sep 29 11:37:05 crc kubenswrapper[4991]: I0929 11:37:05.248095 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kmcdm" podStartSLOduration=3.136936448 podStartE2EDuration="10.248073657s" podCreationTimestamp="2025-09-29 11:36:55 +0000 UTC" firstStartedPulling="2025-09-29 11:36:57.135104159 +0000 UTC m=+7152.991032187" lastFinishedPulling="2025-09-29 11:37:04.246241368 +0000 UTC m=+7160.102169396" observedRunningTime="2025-09-29 11:37:05.242497152 +0000 UTC m=+7161.098425180" watchObservedRunningTime="2025-09-29 11:37:05.248073657 +0000 UTC m=+7161.104001685" Sep 29 11:37:05 crc kubenswrapper[4991]: I0929 11:37:05.452440 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kmcdm" Sep 29 11:37:05 crc kubenswrapper[4991]: I0929 11:37:05.452480 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kmcdm" Sep 29 11:37:06 crc kubenswrapper[4991]: I0929 11:37:06.520109 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kmcdm" 
podUID="346ab2e9-4163-42fe-b95d-7546db1fc305" containerName="registry-server" probeResult="failure" output=< Sep 29 11:37:06 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s Sep 29 11:37:06 crc kubenswrapper[4991]: > Sep 29 11:37:07 crc kubenswrapper[4991]: I0929 11:37:07.946415 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:37:07 crc kubenswrapper[4991]: I0929 11:37:07.946783 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:37:16 crc kubenswrapper[4991]: I0929 11:37:16.509362 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kmcdm" podUID="346ab2e9-4163-42fe-b95d-7546db1fc305" containerName="registry-server" probeResult="failure" output=< Sep 29 11:37:16 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s Sep 29 11:37:16 crc kubenswrapper[4991]: > Sep 29 11:37:26 crc kubenswrapper[4991]: I0929 11:37:26.516989 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kmcdm" podUID="346ab2e9-4163-42fe-b95d-7546db1fc305" containerName="registry-server" probeResult="failure" output=< Sep 29 11:37:26 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s Sep 29 11:37:26 crc kubenswrapper[4991]: > Sep 29 11:37:35 crc kubenswrapper[4991]: I0929 11:37:35.503841 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kmcdm" Sep 29 11:37:35 crc kubenswrapper[4991]: I0929 11:37:35.649446 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kmcdm" Sep 29 11:37:35 crc kubenswrapper[4991]: I0929 11:37:35.743779 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kmcdm"] Sep 29 11:37:36 crc kubenswrapper[4991]: I0929 11:37:36.670769 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kmcdm" podUID="346ab2e9-4163-42fe-b95d-7546db1fc305" containerName="registry-server" containerID="cri-o://e1f6b6d2b0e055f7c87dcfbfb1d4b1f0d0256839d0c0fa8487ee2d6459f045fa" gracePeriod=2 Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.328701 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kmcdm" Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.371325 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346ab2e9-4163-42fe-b95d-7546db1fc305-catalog-content\") pod \"346ab2e9-4163-42fe-b95d-7546db1fc305\" (UID: \"346ab2e9-4163-42fe-b95d-7546db1fc305\") " Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.371421 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346ab2e9-4163-42fe-b95d-7546db1fc305-utilities\") pod \"346ab2e9-4163-42fe-b95d-7546db1fc305\" (UID: \"346ab2e9-4163-42fe-b95d-7546db1fc305\") " Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.371745 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpqmp\" (UniqueName: \"kubernetes.io/projected/346ab2e9-4163-42fe-b95d-7546db1fc305-kube-api-access-jpqmp\") pod \"346ab2e9-4163-42fe-b95d-7546db1fc305\" (UID: \"346ab2e9-4163-42fe-b95d-7546db1fc305\") " Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.380358 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/346ab2e9-4163-42fe-b95d-7546db1fc305-kube-api-access-jpqmp" (OuterVolumeSpecName: "kube-api-access-jpqmp") pod "346ab2e9-4163-42fe-b95d-7546db1fc305" (UID: "346ab2e9-4163-42fe-b95d-7546db1fc305"). InnerVolumeSpecName "kube-api-access-jpqmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.381422 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/346ab2e9-4163-42fe-b95d-7546db1fc305-utilities" (OuterVolumeSpecName: "utilities") pod "346ab2e9-4163-42fe-b95d-7546db1fc305" (UID: "346ab2e9-4163-42fe-b95d-7546db1fc305"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.475118 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpqmp\" (UniqueName: \"kubernetes.io/projected/346ab2e9-4163-42fe-b95d-7546db1fc305-kube-api-access-jpqmp\") on node \"crc\" DevicePath \"\"" Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.475154 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346ab2e9-4163-42fe-b95d-7546db1fc305-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.480165 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/346ab2e9-4163-42fe-b95d-7546db1fc305-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "346ab2e9-4163-42fe-b95d-7546db1fc305" (UID: "346ab2e9-4163-42fe-b95d-7546db1fc305"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.578756 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346ab2e9-4163-42fe-b95d-7546db1fc305-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.683742 4991 generic.go:334] "Generic (PLEG): container finished" podID="346ab2e9-4163-42fe-b95d-7546db1fc305" containerID="e1f6b6d2b0e055f7c87dcfbfb1d4b1f0d0256839d0c0fa8487ee2d6459f045fa" exitCode=0 Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.683782 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmcdm" event={"ID":"346ab2e9-4163-42fe-b95d-7546db1fc305","Type":"ContainerDied","Data":"e1f6b6d2b0e055f7c87dcfbfb1d4b1f0d0256839d0c0fa8487ee2d6459f045fa"} Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.683807 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmcdm" event={"ID":"346ab2e9-4163-42fe-b95d-7546db1fc305","Type":"ContainerDied","Data":"e71c3bbaac95f06573a551ad7fff308f3d4fa588126ab1af3f3d7b2ebf5c8879"} Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.683824 4991 scope.go:117] "RemoveContainer" containerID="e1f6b6d2b0e055f7c87dcfbfb1d4b1f0d0256839d0c0fa8487ee2d6459f045fa" Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.683814 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kmcdm" Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.718131 4991 scope.go:117] "RemoveContainer" containerID="9b9f9d9134a55adddfe247d1e320fdd2d013573a91e1413b44ed6657f55b35d6" Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.756663 4991 scope.go:117] "RemoveContainer" containerID="37b2196dd1d8467f9835743338160781c63906bb73e9b9a50d3fe8990092a143" Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.764017 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kmcdm"] Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.775034 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kmcdm"] Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.806511 4991 scope.go:117] "RemoveContainer" containerID="e1f6b6d2b0e055f7c87dcfbfb1d4b1f0d0256839d0c0fa8487ee2d6459f045fa" Sep 29 11:37:37 crc kubenswrapper[4991]: E0929 11:37:37.807174 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1f6b6d2b0e055f7c87dcfbfb1d4b1f0d0256839d0c0fa8487ee2d6459f045fa\": container with ID starting with e1f6b6d2b0e055f7c87dcfbfb1d4b1f0d0256839d0c0fa8487ee2d6459f045fa not found: ID does not exist" containerID="e1f6b6d2b0e055f7c87dcfbfb1d4b1f0d0256839d0c0fa8487ee2d6459f045fa" Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.807232 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1f6b6d2b0e055f7c87dcfbfb1d4b1f0d0256839d0c0fa8487ee2d6459f045fa"} err="failed to get container status \"e1f6b6d2b0e055f7c87dcfbfb1d4b1f0d0256839d0c0fa8487ee2d6459f045fa\": rpc error: code = NotFound desc = could not find container \"e1f6b6d2b0e055f7c87dcfbfb1d4b1f0d0256839d0c0fa8487ee2d6459f045fa\": container with ID starting with e1f6b6d2b0e055f7c87dcfbfb1d4b1f0d0256839d0c0fa8487ee2d6459f045fa not found: ID does not exist" Sep 29 11:37:37 crc 
kubenswrapper[4991]: I0929 11:37:37.807259 4991 scope.go:117] "RemoveContainer" containerID="9b9f9d9134a55adddfe247d1e320fdd2d013573a91e1413b44ed6657f55b35d6" Sep 29 11:37:37 crc kubenswrapper[4991]: E0929 11:37:37.807638 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b9f9d9134a55adddfe247d1e320fdd2d013573a91e1413b44ed6657f55b35d6\": container with ID starting with 9b9f9d9134a55adddfe247d1e320fdd2d013573a91e1413b44ed6657f55b35d6 not found: ID does not exist" containerID="9b9f9d9134a55adddfe247d1e320fdd2d013573a91e1413b44ed6657f55b35d6" Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.807668 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b9f9d9134a55adddfe247d1e320fdd2d013573a91e1413b44ed6657f55b35d6"} err="failed to get container status \"9b9f9d9134a55adddfe247d1e320fdd2d013573a91e1413b44ed6657f55b35d6\": rpc error: code = NotFound desc = could not find container \"9b9f9d9134a55adddfe247d1e320fdd2d013573a91e1413b44ed6657f55b35d6\": container with ID starting with 9b9f9d9134a55adddfe247d1e320fdd2d013573a91e1413b44ed6657f55b35d6 not found: ID does not exist" Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.807689 4991 scope.go:117] "RemoveContainer" containerID="37b2196dd1d8467f9835743338160781c63906bb73e9b9a50d3fe8990092a143" Sep 29 11:37:37 crc kubenswrapper[4991]: E0929 11:37:37.808070 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37b2196dd1d8467f9835743338160781c63906bb73e9b9a50d3fe8990092a143\": container with ID starting with 37b2196dd1d8467f9835743338160781c63906bb73e9b9a50d3fe8990092a143 not found: ID does not exist" containerID="37b2196dd1d8467f9835743338160781c63906bb73e9b9a50d3fe8990092a143" Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.808146 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37b2196dd1d8467f9835743338160781c63906bb73e9b9a50d3fe8990092a143"} err="failed to get container status \"37b2196dd1d8467f9835743338160781c63906bb73e9b9a50d3fe8990092a143\": rpc error: code = NotFound desc = could not find container \"37b2196dd1d8467f9835743338160781c63906bb73e9b9a50d3fe8990092a143\": container with ID starting with 37b2196dd1d8467f9835743338160781c63906bb73e9b9a50d3fe8990092a143 not found: ID does not exist" Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.947413 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:37:37 crc kubenswrapper[4991]: I0929 11:37:37.947487 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:37:38 crc kubenswrapper[4991]: I0929 11:37:38.944603 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="346ab2e9-4163-42fe-b95d-7546db1fc305" path="/var/lib/kubelet/pods/346ab2e9-4163-42fe-b95d-7546db1fc305/volumes" Sep 29 11:38:07 crc kubenswrapper[4991]: I0929 11:38:07.947447 4991 patch_prober.go:28] interesting 
pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:38:07 crc kubenswrapper[4991]: I0929 11:38:07.947989 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:38:07 crc kubenswrapper[4991]: I0929 11:38:07.948050 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 11:38:07 crc kubenswrapper[4991]: I0929 11:38:07.949124 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a3028cbc84a412a42c2b204f6b1f7d967251888df90e614f215821556de47352"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 11:38:07 crc kubenswrapper[4991]: I0929 11:38:07.949193 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://a3028cbc84a412a42c2b204f6b1f7d967251888df90e614f215821556de47352" gracePeriod=600 Sep 29 11:38:09 crc kubenswrapper[4991]: I0929 11:38:09.062303 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="a3028cbc84a412a42c2b204f6b1f7d967251888df90e614f215821556de47352" exitCode=0 Sep 29 11:38:09 crc kubenswrapper[4991]: I0929 11:38:09.062381 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"a3028cbc84a412a42c2b204f6b1f7d967251888df90e614f215821556de47352"} Sep 29 11:38:09 crc kubenswrapper[4991]: I0929 11:38:09.063142 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614"} Sep 29 11:38:09 crc kubenswrapper[4991]: I0929 11:38:09.063167 4991 scope.go:117] "RemoveContainer" containerID="dba766e90e826544e9a7f151dfe0d9e3dcc4033c8c172b03af076adc74d04b8a" Sep 29 11:40:37 crc kubenswrapper[4991]: I0929 11:40:37.946791 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:40:37 crc kubenswrapper[4991]: I0929 11:40:37.947382 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
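The run of entries above is the kubelet's HTTP liveness probe failing against the machine-config-daemon health endpoint: every 30 seconds patch_prober.go dials http://127.0.0.1:8798/health, the connection is refused because nothing is listening on the port, and once the failure threshold is reached (the "SyncLoop (probe)" status="unhealthy" entry at 11:38:07) the container is killed with its 600-second grace period and restarted. Below is a minimal sketch of the check the prober performs; the 1-second timeout and the "any 2xx/3xx status is healthy" rule are the documented kubelet defaults, assumed here rather than read from this pod's spec.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeHTTP mirrors what the kubelet's HTTP liveness prober does for the
// endpoint seen in the log: GET the URL and treat any 2xx/3xx status as
// healthy. This is an illustrative sketch, not the kubelet's prober code.
func probeHTTP(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil
	}
	return fmt.Errorf("unhealthy status %d", resp.StatusCode)
}

func main() {
	if err := probeHTTP("http://127.0.0.1:8798/health", 1*time.Second); err != nil {
		fmt.Println("probe failed:", err)
	}
}
```

Because the failure mode is connection-refused rather than a slow response or a 5xx, the process inside the container is most likely not binding the port at all, which is consistent with the CrashLoopBackOff entries that follow.

Sep 29 11:41:07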
crc kubenswrapper[4991]: I0929 11:41:07.947487 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:41:07 crc kubenswrapper[4991]: I0929 11:41:07.948517 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:41:37 crc kubenswrapper[4991]: I0929 11:41:37.946391 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:41:37 crc kubenswrapper[4991]: I0929 11:41:37.946931 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:41:37 crc kubenswrapper[4991]: I0929 11:41:37.947000 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 11:41:37 crc kubenswrapper[4991]: I0929 11:41:37.948109 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 11:41:37 crc kubenswrapper[4991]: I0929 11:41:37.948165 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" gracePeriod=600 Sep 29 11:41:38 crc kubenswrapper[4991]: I0929 11:41:38.495292 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" exitCode=0 Sep 29 11:41:38 crc kubenswrapper[4991]: I0929 11:41:38.495302 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614"} Sep 29 11:41:38 crc kubenswrapper[4991]: I0929 11:41:38.495537 4991 scope.go:117] "RemoveContainer" containerID="a3028cbc84a412a42c2b204f6b1f7d967251888df90e614f215821556de47352" Sep 29 11:41:38 crc kubenswrapper[4991]: E0929 11:41:38.622681 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:41:39 crc kubenswrapper[4991]: I0929 11:41:39.516151 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:41:39 crc kubenswrapper[4991]: E0929 11:41:39.516894 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:41:51 crc kubenswrapper[4991]: I0929 11:41:51.926506 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:41:51 crc kubenswrapper[4991]: E0929 11:41:51.927528 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:42:05 crc kubenswrapper[4991]: I0929 11:42:05.926463 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:42:05 crc kubenswrapper[4991]: E0929 11:42:05.927346 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:42:16 crc kubenswrapper[4991]: I0929 11:42:16.927261 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:42:16 crc kubenswrapper[4991]: E0929 11:42:16.928389 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:42:28 crc kubenswrapper[4991]: I0929 11:42:28.926848 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:42:28 crc kubenswrapper[4991]: E0929 11:42:28.927782 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" 
podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:42:39 crc kubenswrapper[4991]: I0929 11:42:39.926285 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:42:39 crc kubenswrapper[4991]: E0929 11:42:39.927053 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:42:50 crc kubenswrapper[4991]: I0929 11:42:50.926868 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:42:50 crc kubenswrapper[4991]: E0929 11:42:50.928402 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:43:02 crc kubenswrapper[4991]: I0929 11:43:02.926574 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:43:02 crc kubenswrapper[4991]: E0929 11:43:02.927355 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:43:14 crc kubenswrapper[4991]: I0929 11:43:14.937672 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:43:14 crc kubenswrapper[4991]: E0929 11:43:14.938775 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:43:26 crc kubenswrapper[4991]: I0929 11:43:26.925838 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:43:26 crc kubenswrapper[4991]: E0929 11:43:26.927050 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:43:40 crc kubenswrapper[4991]: I0929 11:43:40.927624 4991 scope.go:117] "RemoveContainer" 
containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:43:40 crc kubenswrapper[4991]: E0929 11:43:40.928556 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:43:51 crc kubenswrapper[4991]: I0929 11:43:51.927478 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:43:51 crc kubenswrapper[4991]: E0929 11:43:51.928847 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:43:53 crc kubenswrapper[4991]: I0929 11:43:53.520915 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d2x2x"] Sep 29 11:43:53 crc kubenswrapper[4991]: E0929 11:43:53.522035 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346ab2e9-4163-42fe-b95d-7546db1fc305" containerName="extract-utilities" Sep 29 11:43:53 crc kubenswrapper[4991]: I0929 11:43:53.522051 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="346ab2e9-4163-42fe-b95d-7546db1fc305" containerName="extract-utilities" Sep 29 11:43:53 crc kubenswrapper[4991]: E0929 11:43:53.522085 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346ab2e9-4163-42fe-b95d-7546db1fc305" containerName="registry-server" Sep 29 11:43:53 crc kubenswrapper[4991]: I0929 11:43:53.522093 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="346ab2e9-4163-42fe-b95d-7546db1fc305" containerName="registry-server" Sep 29 11:43:53 crc kubenswrapper[4991]: E0929 11:43:53.522133 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346ab2e9-4163-42fe-b95d-7546db1fc305" containerName="extract-content" Sep 29 11:43:53 crc kubenswrapper[4991]: I0929 11:43:53.522141 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="346ab2e9-4163-42fe-b95d-7546db1fc305" containerName="extract-content" Sep 29 11:43:53 crc kubenswrapper[4991]: I0929 11:43:53.522367 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="346ab2e9-4163-42fe-b95d-7546db1fc305" containerName="registry-server" Sep 29 11:43:53 crc kubenswrapper[4991]: I0929 11:43:53.524244 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d2x2x" Sep 29 11:43:53 crc kubenswrapper[4991]: I0929 11:43:53.541361 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d2x2x"] Sep 29 11:43:53 crc kubenswrapper[4991]: I0929 11:43:53.726205 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc9kz\" (UniqueName: \"kubernetes.io/projected/3a418a0b-0f71-45c4-899c-cec5309c32b2-kube-api-access-zc9kz\") pod \"community-operators-d2x2x\" (UID: \"3a418a0b-0f71-45c4-899c-cec5309c32b2\") " pod="openshift-marketplace/community-operators-d2x2x" Sep 29 11:43:53 crc kubenswrapper[4991]: I0929 11:43:53.726370 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a418a0b-0f71-45c4-899c-cec5309c32b2-catalog-content\") pod \"community-operators-d2x2x\" (UID: \"3a418a0b-0f71-45c4-899c-cec5309c32b2\") " pod="openshift-marketplace/community-operators-d2x2x" Sep 29 11:43:53 crc kubenswrapper[4991]: I0929 11:43:53.726412 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a418a0b-0f71-45c4-899c-cec5309c32b2-utilities\") pod \"community-operators-d2x2x\" (UID: \"3a418a0b-0f71-45c4-899c-cec5309c32b2\") " pod="openshift-marketplace/community-operators-d2x2x" Sep 29 11:43:53 crc kubenswrapper[4991]: I0929 11:43:53.828788 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a418a0b-0f71-45c4-899c-cec5309c32b2-catalog-content\") pod \"community-operators-d2x2x\" (UID: \"3a418a0b-0f71-45c4-899c-cec5309c32b2\") " pod="openshift-marketplace/community-operators-d2x2x" Sep 29 11:43:53 crc kubenswrapper[4991]: I0929 11:43:53.829213 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a418a0b-0f71-45c4-899c-cec5309c32b2-utilities\") pod \"community-operators-d2x2x\" (UID: \"3a418a0b-0f71-45c4-899c-cec5309c32b2\") " pod="openshift-marketplace/community-operators-d2x2x" Sep 29 11:43:53 crc kubenswrapper[4991]: I0929 11:43:53.829315 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a418a0b-0f71-45c4-899c-cec5309c32b2-catalog-content\") pod \"community-operators-d2x2x\" (UID: \"3a418a0b-0f71-45c4-899c-cec5309c32b2\") " pod="openshift-marketplace/community-operators-d2x2x" Sep 29 11:43:53 crc kubenswrapper[4991]: I0929 11:43:53.829430 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc9kz\" (UniqueName: \"kubernetes.io/projected/3a418a0b-0f71-45c4-899c-cec5309c32b2-kube-api-access-zc9kz\") pod \"community-operators-d2x2x\" (UID: \"3a418a0b-0f71-45c4-899c-cec5309c32b2\") " pod="openshift-marketplace/community-operators-d2x2x" Sep 29 11:43:53 crc kubenswrapper[4991]: I0929 11:43:53.829681 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a418a0b-0f71-45c4-899c-cec5309c32b2-utilities\") pod \"community-operators-d2x2x\" (UID: \"3a418a0b-0f71-45c4-899c-cec5309c32b2\") " pod="openshift-marketplace/community-operators-d2x2x" Sep 29 11:43:53 crc kubenswrapper[4991]: I0929 11:43:53.855851 4991 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zc9kz\" (UniqueName: \"kubernetes.io/projected/3a418a0b-0f71-45c4-899c-cec5309c32b2-kube-api-access-zc9kz\") pod \"community-operators-d2x2x\" (UID: \"3a418a0b-0f71-45c4-899c-cec5309c32b2\") " pod="openshift-marketplace/community-operators-d2x2x" Sep 29 11:43:54 crc kubenswrapper[4991]: I0929 11:43:54.146863 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d2x2x" Sep 29 11:43:54 crc kubenswrapper[4991]: I0929 11:43:54.622396 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d2x2x"] Sep 29 11:43:55 crc kubenswrapper[4991]: I0929 11:43:55.052974 4991 generic.go:334] "Generic (PLEG): container finished" podID="3a418a0b-0f71-45c4-899c-cec5309c32b2" containerID="d1b6f5c7bfdd59715c413d8822432016862c59248ac49bd75697ef851fbd79d9" exitCode=0 Sep 29 11:43:55 crc kubenswrapper[4991]: I0929 11:43:55.053086 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2x2x" event={"ID":"3a418a0b-0f71-45c4-899c-cec5309c32b2","Type":"ContainerDied","Data":"d1b6f5c7bfdd59715c413d8822432016862c59248ac49bd75697ef851fbd79d9"} Sep 29 11:43:55 crc kubenswrapper[4991]: I0929 11:43:55.053307 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2x2x" event={"ID":"3a418a0b-0f71-45c4-899c-cec5309c32b2","Type":"ContainerStarted","Data":"d5815b6dd551cd619e36482edb9014f9f69cbfddfbb6ab45b9410548884d0602"} Sep 29 11:43:55 crc kubenswrapper[4991]: I0929 11:43:55.055924 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 29 11:43:56 crc kubenswrapper[4991]: I0929 11:43:56.068511 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2x2x" event={"ID":"3a418a0b-0f71-45c4-899c-cec5309c32b2","Type":"ContainerStarted","Data":"fe50012a2a20dfcd6cd36e55e0bdf064204717b30d93b9beb7ac21f783b16cae"} Sep 29 11:43:58 crc kubenswrapper[4991]: I0929 11:43:58.093328 4991 generic.go:334] "Generic (PLEG): container finished" podID="3a418a0b-0f71-45c4-899c-cec5309c32b2" containerID="fe50012a2a20dfcd6cd36e55e0bdf064204717b30d93b9beb7ac21f783b16cae" exitCode=0 Sep 29 11:43:58 crc kubenswrapper[4991]: I0929 11:43:58.093408 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2x2x" event={"ID":"3a418a0b-0f71-45c4-899c-cec5309c32b2","Type":"ContainerDied","Data":"fe50012a2a20dfcd6cd36e55e0bdf064204717b30d93b9beb7ac21f783b16cae"} Sep 29 11:44:00 crc kubenswrapper[4991]: I0929 11:44:00.130821 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2x2x" event={"ID":"3a418a0b-0f71-45c4-899c-cec5309c32b2","Type":"ContainerStarted","Data":"c04b2248457e6e6567888f17d854a6ccb689607c5a217545b90de2332fd50dd1"} Sep 29 11:44:00 crc kubenswrapper[4991]: I0929 11:44:00.168630 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d2x2x" podStartSLOduration=3.032278213 podStartE2EDuration="7.168607986s" podCreationTimestamp="2025-09-29 11:43:53 +0000 UTC" firstStartedPulling="2025-09-29 11:43:55.055696523 +0000 UTC m=+7570.911624551" lastFinishedPulling="2025-09-29 11:43:59.192026286 +0000 UTC m=+7575.047954324" observedRunningTime="2025-09-29 11:44:00.159465188 +0000 UTC m=+7576.015393236" watchObservedRunningTime="2025-09-29 
11:44:00.168607986 +0000 UTC m=+7576.024536034" Sep 29 11:44:04 crc kubenswrapper[4991]: I0929 11:44:04.147942 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d2x2x" Sep 29 11:44:04 crc kubenswrapper[4991]: I0929 11:44:04.148548 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d2x2x" Sep 29 11:44:04 crc kubenswrapper[4991]: I0929 11:44:04.201757 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d2x2x" Sep 29 11:44:04 crc kubenswrapper[4991]: I0929 11:44:04.256172 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d2x2x" Sep 29 11:44:04 crc kubenswrapper[4991]: I0929 11:44:04.438401 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d2x2x"] Sep 29 11:44:06 crc kubenswrapper[4991]: I0929 11:44:06.196315 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d2x2x" podUID="3a418a0b-0f71-45c4-899c-cec5309c32b2" containerName="registry-server" containerID="cri-o://c04b2248457e6e6567888f17d854a6ccb689607c5a217545b90de2332fd50dd1" gracePeriod=2 Sep 29 11:44:06 crc kubenswrapper[4991]: I0929 11:44:06.708241 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d2x2x" Sep 29 11:44:06 crc kubenswrapper[4991]: I0929 11:44:06.851812 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a418a0b-0f71-45c4-899c-cec5309c32b2-utilities\") pod \"3a418a0b-0f71-45c4-899c-cec5309c32b2\" (UID: \"3a418a0b-0f71-45c4-899c-cec5309c32b2\") " Sep 29 11:44:06 crc kubenswrapper[4991]: I0929 11:44:06.851891 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a418a0b-0f71-45c4-899c-cec5309c32b2-catalog-content\") pod \"3a418a0b-0f71-45c4-899c-cec5309c32b2\" (UID: \"3a418a0b-0f71-45c4-899c-cec5309c32b2\") " Sep 29 11:44:06 crc kubenswrapper[4991]: I0929 11:44:06.852164 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc9kz\" (UniqueName: \"kubernetes.io/projected/3a418a0b-0f71-45c4-899c-cec5309c32b2-kube-api-access-zc9kz\") pod \"3a418a0b-0f71-45c4-899c-cec5309c32b2\" (UID: \"3a418a0b-0f71-45c4-899c-cec5309c32b2\") " Sep 29 11:44:06 crc kubenswrapper[4991]: I0929 11:44:06.852727 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a418a0b-0f71-45c4-899c-cec5309c32b2-utilities" (OuterVolumeSpecName: "utilities") pod "3a418a0b-0f71-45c4-899c-cec5309c32b2" (UID: "3a418a0b-0f71-45c4-899c-cec5309c32b2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:44:06 crc kubenswrapper[4991]: I0929 11:44:06.853042 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a418a0b-0f71-45c4-899c-cec5309c32b2-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 11:44:06 crc kubenswrapper[4991]: I0929 11:44:06.858287 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a418a0b-0f71-45c4-899c-cec5309c32b2-kube-api-access-zc9kz" (OuterVolumeSpecName: "kube-api-access-zc9kz") pod "3a418a0b-0f71-45c4-899c-cec5309c32b2" (UID: "3a418a0b-0f71-45c4-899c-cec5309c32b2"). InnerVolumeSpecName "kube-api-access-zc9kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:44:06 crc kubenswrapper[4991]: I0929 11:44:06.909333 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a418a0b-0f71-45c4-899c-cec5309c32b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a418a0b-0f71-45c4-899c-cec5309c32b2" (UID: "3a418a0b-0f71-45c4-899c-cec5309c32b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:44:06 crc kubenswrapper[4991]: I0929 11:44:06.927336 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:44:06 crc kubenswrapper[4991]: E0929 11:44:06.927817 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:44:06 crc kubenswrapper[4991]: I0929 11:44:06.955616 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a418a0b-0f71-45c4-899c-cec5309c32b2-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 11:44:06 crc kubenswrapper[4991]: I0929 11:44:06.955659 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc9kz\" (UniqueName: \"kubernetes.io/projected/3a418a0b-0f71-45c4-899c-cec5309c32b2-kube-api-access-zc9kz\") on node \"crc\" DevicePath \"\"" Sep 29 11:44:07 crc kubenswrapper[4991]: I0929 11:44:07.210058 4991 generic.go:334] "Generic (PLEG): container finished" podID="3a418a0b-0f71-45c4-899c-cec5309c32b2" containerID="c04b2248457e6e6567888f17d854a6ccb689607c5a217545b90de2332fd50dd1" exitCode=0 Sep 29 11:44:07 crc kubenswrapper[4991]: I0929 11:44:07.210137 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d2x2x" Sep 29 11:44:07 crc kubenswrapper[4991]: I0929 11:44:07.210147 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2x2x" event={"ID":"3a418a0b-0f71-45c4-899c-cec5309c32b2","Type":"ContainerDied","Data":"c04b2248457e6e6567888f17d854a6ccb689607c5a217545b90de2332fd50dd1"} Sep 29 11:44:07 crc kubenswrapper[4991]: I0929 11:44:07.211309 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2x2x" event={"ID":"3a418a0b-0f71-45c4-899c-cec5309c32b2","Type":"ContainerDied","Data":"d5815b6dd551cd619e36482edb9014f9f69cbfddfbb6ab45b9410548884d0602"} Sep 29 11:44:07 crc kubenswrapper[4991]: I0929 11:44:07.211332 4991 scope.go:117] "RemoveContainer" containerID="c04b2248457e6e6567888f17d854a6ccb689607c5a217545b90de2332fd50dd1" Sep 29 11:44:07 crc kubenswrapper[4991]: I0929 11:44:07.241104 4991 scope.go:117] "RemoveContainer" containerID="fe50012a2a20dfcd6cd36e55e0bdf064204717b30d93b9beb7ac21f783b16cae" Sep 29 11:44:07 crc kubenswrapper[4991]: I0929 11:44:07.249487 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d2x2x"] Sep 29 11:44:07 crc kubenswrapper[4991]: I0929 11:44:07.262267 4991 scope.go:117] "RemoveContainer" containerID="d1b6f5c7bfdd59715c413d8822432016862c59248ac49bd75697ef851fbd79d9" Sep 29 11:44:07 crc kubenswrapper[4991]: I0929 11:44:07.266311 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d2x2x"] Sep 29 11:44:07 crc kubenswrapper[4991]: I0929 11:44:07.319084 4991 scope.go:117] "RemoveContainer" containerID="c04b2248457e6e6567888f17d854a6ccb689607c5a217545b90de2332fd50dd1" Sep 29 11:44:07 crc kubenswrapper[4991]: E0929 11:44:07.319510 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c04b2248457e6e6567888f17d854a6ccb689607c5a217545b90de2332fd50dd1\": container with ID starting with c04b2248457e6e6567888f17d854a6ccb689607c5a217545b90de2332fd50dd1 not found: ID does not exist" containerID="c04b2248457e6e6567888f17d854a6ccb689607c5a217545b90de2332fd50dd1" Sep 29 11:44:07 crc kubenswrapper[4991]: I0929 11:44:07.319577 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c04b2248457e6e6567888f17d854a6ccb689607c5a217545b90de2332fd50dd1"} err="failed to get container status \"c04b2248457e6e6567888f17d854a6ccb689607c5a217545b90de2332fd50dd1\": rpc error: code = NotFound desc = could not find container \"c04b2248457e6e6567888f17d854a6ccb689607c5a217545b90de2332fd50dd1\": container with ID starting with c04b2248457e6e6567888f17d854a6ccb689607c5a217545b90de2332fd50dd1 not found: ID does not exist" Sep 29 11:44:07 crc kubenswrapper[4991]: I0929 11:44:07.319608 4991 scope.go:117] "RemoveContainer" containerID="fe50012a2a20dfcd6cd36e55e0bdf064204717b30d93b9beb7ac21f783b16cae" Sep 29 11:44:07 crc kubenswrapper[4991]: E0929 11:44:07.320312 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe50012a2a20dfcd6cd36e55e0bdf064204717b30d93b9beb7ac21f783b16cae\": container with ID starting with fe50012a2a20dfcd6cd36e55e0bdf064204717b30d93b9beb7ac21f783b16cae not found: ID does not exist" containerID="fe50012a2a20dfcd6cd36e55e0bdf064204717b30d93b9beb7ac21f783b16cae" Sep 29 11:44:07 crc kubenswrapper[4991]: I0929 11:44:07.320344 4991 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe50012a2a20dfcd6cd36e55e0bdf064204717b30d93b9beb7ac21f783b16cae"} err="failed to get container status \"fe50012a2a20dfcd6cd36e55e0bdf064204717b30d93b9beb7ac21f783b16cae\": rpc error: code = NotFound desc = could not find container \"fe50012a2a20dfcd6cd36e55e0bdf064204717b30d93b9beb7ac21f783b16cae\": container with ID starting with fe50012a2a20dfcd6cd36e55e0bdf064204717b30d93b9beb7ac21f783b16cae not found: ID does not exist" Sep 29 11:44:07 crc kubenswrapper[4991]: I0929 11:44:07.320362 4991 scope.go:117] "RemoveContainer" containerID="d1b6f5c7bfdd59715c413d8822432016862c59248ac49bd75697ef851fbd79d9" Sep 29 11:44:07 crc kubenswrapper[4991]: E0929 11:44:07.320638 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1b6f5c7bfdd59715c413d8822432016862c59248ac49bd75697ef851fbd79d9\": container with ID starting with d1b6f5c7bfdd59715c413d8822432016862c59248ac49bd75697ef851fbd79d9 not found: ID does not exist" containerID="d1b6f5c7bfdd59715c413d8822432016862c59248ac49bd75697ef851fbd79d9" Sep 29 11:44:07 crc kubenswrapper[4991]: I0929 11:44:07.320662 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1b6f5c7bfdd59715c413d8822432016862c59248ac49bd75697ef851fbd79d9"} err="failed to get container status \"d1b6f5c7bfdd59715c413d8822432016862c59248ac49bd75697ef851fbd79d9\": rpc error: code = NotFound desc = could not find container \"d1b6f5c7bfdd59715c413d8822432016862c59248ac49bd75697ef851fbd79d9\": container with ID starting with d1b6f5c7bfdd59715c413d8822432016862c59248ac49bd75697ef851fbd79d9 not found: ID does not exist" Sep 29 11:44:08 crc kubenswrapper[4991]: I0929 11:44:08.938079 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a418a0b-0f71-45c4-899c-cec5309c32b2" path="/var/lib/kubelet/pods/3a418a0b-0f71-45c4-899c-cec5309c32b2/volumes" Sep 29 11:44:19 crc kubenswrapper[4991]: I0929 11:44:19.926245 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:44:19 crc kubenswrapper[4991]: E0929 11:44:19.926859 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:44:33 crc kubenswrapper[4991]: I0929 11:44:33.926173 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:44:33 crc kubenswrapper[4991]: E0929 11:44:33.927039 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:44:48 crc kubenswrapper[4991]: I0929 11:44:48.927440 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:44:48 crc 
kubenswrapper[4991]: E0929 11:44:48.928328 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:45:00 crc kubenswrapper[4991]: I0929 11:45:00.154441 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319105-qw4sq"] Sep 29 11:45:00 crc kubenswrapper[4991]: E0929 11:45:00.155779 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a418a0b-0f71-45c4-899c-cec5309c32b2" containerName="extract-content" Sep 29 11:45:00 crc kubenswrapper[4991]: I0929 11:45:00.155800 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a418a0b-0f71-45c4-899c-cec5309c32b2" containerName="extract-content" Sep 29 11:45:00 crc kubenswrapper[4991]: E0929 11:45:00.155812 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a418a0b-0f71-45c4-899c-cec5309c32b2" containerName="registry-server" Sep 29 11:45:00 crc kubenswrapper[4991]: I0929 11:45:00.155818 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a418a0b-0f71-45c4-899c-cec5309c32b2" containerName="registry-server" Sep 29 11:45:00 crc kubenswrapper[4991]: E0929 11:45:00.155835 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a418a0b-0f71-45c4-899c-cec5309c32b2" containerName="extract-utilities" Sep 29 11:45:00 crc kubenswrapper[4991]: I0929 11:45:00.155843 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a418a0b-0f71-45c4-899c-cec5309c32b2" containerName="extract-utilities" Sep 29 11:45:00 crc kubenswrapper[4991]: I0929 11:45:00.156122 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a418a0b-0f71-45c4-899c-cec5309c32b2" containerName="registry-server"
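The cpu_manager / state_mem / memory_manager burst above is housekeeping triggered by the "SyncLoop ADD" for collect-profiles-29319105-qw4sq: at pod admission the resource managers sweep their checkpoints and discard state for containers of pods that no longer exist, here the deleted community-operators pod 3a418a0b. The E-level "RemoveStaleState" lines pair one-for-one with I-level "Deleted CPUSet assignment" lines and do not indicate a failure. The swept state lives in the kubelet's on-disk checkpoint; a sketch that dumps it is below, with the field names assumed from the current checkpoint format rather than taken from this log.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Dump the kubelet CPU-manager checkpoint that RemoveStaleState prunes.
// Field names follow the checkpoint format of recent kubelets and are an
// assumption here; with the default "none" policy, Entries is usually empty.
type cpuManagerState struct {
	PolicyName    string                       `json:"policyName"`
	DefaultCPUSet string                       `json:"defaultCpuSet"`
	Entries       map[string]map[string]string `json:"entries,omitempty"`
}

func main() {
	raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var st cpuManagerState
	if err := json.Unmarshal(raw, &st); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("policy=%s default=%q\n", st.PolicyName, st.DefaultCPUSet)
	for pod, ctrs := range st.Entries { // stale pods appear here until swept
		for name, cpus := range ctrs {
			fmt.Printf("pod %s container %s -> cpus %s\n", pod, name, cpus)
		}
	}
}
```

Sep 29 11:45:00 crc kubenswrapper[4991]: I0929 11:45:00.157119 4991 util.go:30] "No sandbox for pod can be found.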
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319105-qw4sq" Sep 29 11:45:00 crc kubenswrapper[4991]: I0929 11:45:00.162379 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 29 11:45:00 crc kubenswrapper[4991]: I0929 11:45:00.162670 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 29 11:45:00 crc kubenswrapper[4991]: I0929 11:45:00.196860 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319105-qw4sq"] Sep 29 11:45:00 crc kubenswrapper[4991]: I0929 11:45:00.270630 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d3da2ced-b1b2-46eb-813d-d6edcab246d8-secret-volume\") pod \"collect-profiles-29319105-qw4sq\" (UID: \"d3da2ced-b1b2-46eb-813d-d6edcab246d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319105-qw4sq" Sep 29 11:45:00 crc kubenswrapper[4991]: I0929 11:45:00.270806 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74vpl\" (UniqueName: \"kubernetes.io/projected/d3da2ced-b1b2-46eb-813d-d6edcab246d8-kube-api-access-74vpl\") pod \"collect-profiles-29319105-qw4sq\" (UID: \"d3da2ced-b1b2-46eb-813d-d6edcab246d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319105-qw4sq" Sep 29 11:45:00 crc kubenswrapper[4991]: I0929 11:45:00.270995 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3da2ced-b1b2-46eb-813d-d6edcab246d8-config-volume\") pod \"collect-profiles-29319105-qw4sq\" (UID: \"d3da2ced-b1b2-46eb-813d-d6edcab246d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319105-qw4sq" Sep 29 11:45:00 crc kubenswrapper[4991]: I0929 11:45:00.373434 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3da2ced-b1b2-46eb-813d-d6edcab246d8-config-volume\") pod \"collect-profiles-29319105-qw4sq\" (UID: \"d3da2ced-b1b2-46eb-813d-d6edcab246d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319105-qw4sq" Sep 29 11:45:00 crc kubenswrapper[4991]: I0929 11:45:00.373670 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d3da2ced-b1b2-46eb-813d-d6edcab246d8-secret-volume\") pod \"collect-profiles-29319105-qw4sq\" (UID: \"d3da2ced-b1b2-46eb-813d-d6edcab246d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319105-qw4sq" Sep 29 11:45:00 crc kubenswrapper[4991]: I0929 11:45:00.373736 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74vpl\" (UniqueName: \"kubernetes.io/projected/d3da2ced-b1b2-46eb-813d-d6edcab246d8-kube-api-access-74vpl\") pod \"collect-profiles-29319105-qw4sq\" (UID: \"d3da2ced-b1b2-46eb-813d-d6edcab246d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319105-qw4sq" Sep 29 11:45:00 crc kubenswrapper[4991]: I0929 11:45:00.374537 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3da2ced-b1b2-46eb-813d-d6edcab246d8-config-volume\") pod 
\"collect-profiles-29319105-qw4sq\" (UID: \"d3da2ced-b1b2-46eb-813d-d6edcab246d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319105-qw4sq" Sep 29 11:45:00 crc kubenswrapper[4991]: I0929 11:45:00.380850 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d3da2ced-b1b2-46eb-813d-d6edcab246d8-secret-volume\") pod \"collect-profiles-29319105-qw4sq\" (UID: \"d3da2ced-b1b2-46eb-813d-d6edcab246d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319105-qw4sq" Sep 29 11:45:00 crc kubenswrapper[4991]: I0929 11:45:00.393450 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74vpl\" (UniqueName: \"kubernetes.io/projected/d3da2ced-b1b2-46eb-813d-d6edcab246d8-kube-api-access-74vpl\") pod \"collect-profiles-29319105-qw4sq\" (UID: \"d3da2ced-b1b2-46eb-813d-d6edcab246d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319105-qw4sq" Sep 29 11:45:00 crc kubenswrapper[4991]: I0929 11:45:00.492147 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319105-qw4sq" Sep 29 11:45:00 crc kubenswrapper[4991]: I0929 11:45:00.999009 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319105-qw4sq"] Sep 29 11:45:01 crc kubenswrapper[4991]: I0929 11:45:01.821173 4991 generic.go:334] "Generic (PLEG): container finished" podID="d3da2ced-b1b2-46eb-813d-d6edcab246d8" containerID="45d07fe9ad5a6c220243536a0851bbe2214d0d1a19880e205204380afc4067e5" exitCode=0 Sep 29 11:45:01 crc kubenswrapper[4991]: I0929 11:45:01.821456 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319105-qw4sq" event={"ID":"d3da2ced-b1b2-46eb-813d-d6edcab246d8","Type":"ContainerDied","Data":"45d07fe9ad5a6c220243536a0851bbe2214d0d1a19880e205204380afc4067e5"} Sep 29 11:45:01 crc kubenswrapper[4991]: I0929 11:45:01.821704 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319105-qw4sq" event={"ID":"d3da2ced-b1b2-46eb-813d-d6edcab246d8","Type":"ContainerStarted","Data":"1d246a4e3032327deb5bc845a2337867d31f2fef2ae89d82b5c5c70d5f7c70f3"} Sep 29 11:45:03 crc kubenswrapper[4991]: I0929 11:45:03.311259 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319105-qw4sq" Sep 29 11:45:03 crc kubenswrapper[4991]: I0929 11:45:03.362285 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d3da2ced-b1b2-46eb-813d-d6edcab246d8-secret-volume\") pod \"d3da2ced-b1b2-46eb-813d-d6edcab246d8\" (UID: \"d3da2ced-b1b2-46eb-813d-d6edcab246d8\") " Sep 29 11:45:03 crc kubenswrapper[4991]: I0929 11:45:03.362429 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74vpl\" (UniqueName: \"kubernetes.io/projected/d3da2ced-b1b2-46eb-813d-d6edcab246d8-kube-api-access-74vpl\") pod \"d3da2ced-b1b2-46eb-813d-d6edcab246d8\" (UID: \"d3da2ced-b1b2-46eb-813d-d6edcab246d8\") " Sep 29 11:45:03 crc kubenswrapper[4991]: I0929 11:45:03.362498 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3da2ced-b1b2-46eb-813d-d6edcab246d8-config-volume\") pod \"d3da2ced-b1b2-46eb-813d-d6edcab246d8\" (UID: \"d3da2ced-b1b2-46eb-813d-d6edcab246d8\") " Sep 29 11:45:03 crc kubenswrapper[4991]: I0929 11:45:03.363584 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3da2ced-b1b2-46eb-813d-d6edcab246d8-config-volume" (OuterVolumeSpecName: "config-volume") pod "d3da2ced-b1b2-46eb-813d-d6edcab246d8" (UID: "d3da2ced-b1b2-46eb-813d-d6edcab246d8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 11:45:03 crc kubenswrapper[4991]: I0929 11:45:03.368150 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3da2ced-b1b2-46eb-813d-d6edcab246d8-kube-api-access-74vpl" (OuterVolumeSpecName: "kube-api-access-74vpl") pod "d3da2ced-b1b2-46eb-813d-d6edcab246d8" (UID: "d3da2ced-b1b2-46eb-813d-d6edcab246d8"). InnerVolumeSpecName "kube-api-access-74vpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:45:03 crc kubenswrapper[4991]: I0929 11:45:03.369473 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3da2ced-b1b2-46eb-813d-d6edcab246d8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d3da2ced-b1b2-46eb-813d-d6edcab246d8" (UID: "d3da2ced-b1b2-46eb-813d-d6edcab246d8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 11:45:03 crc kubenswrapper[4991]: I0929 11:45:03.468503 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74vpl\" (UniqueName: \"kubernetes.io/projected/d3da2ced-b1b2-46eb-813d-d6edcab246d8-kube-api-access-74vpl\") on node \"crc\" DevicePath \"\"" Sep 29 11:45:03 crc kubenswrapper[4991]: I0929 11:45:03.468545 4991 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3da2ced-b1b2-46eb-813d-d6edcab246d8-config-volume\") on node \"crc\" DevicePath \"\"" Sep 29 11:45:03 crc kubenswrapper[4991]: I0929 11:45:03.468650 4991 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d3da2ced-b1b2-46eb-813d-d6edcab246d8-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 29 11:45:03 crc kubenswrapper[4991]: I0929 11:45:03.854712 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319105-qw4sq" event={"ID":"d3da2ced-b1b2-46eb-813d-d6edcab246d8","Type":"ContainerDied","Data":"1d246a4e3032327deb5bc845a2337867d31f2fef2ae89d82b5c5c70d5f7c70f3"} Sep 29 11:45:03 crc kubenswrapper[4991]: I0929 11:45:03.855233 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d246a4e3032327deb5bc845a2337867d31f2fef2ae89d82b5c5c70d5f7c70f3" Sep 29 11:45:03 crc kubenswrapper[4991]: I0929 11:45:03.854842 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319105-qw4sq" Sep 29 11:45:03 crc kubenswrapper[4991]: I0929 11:45:03.926025 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:45:03 crc kubenswrapper[4991]: E0929 11:45:03.926418 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:45:04 crc kubenswrapper[4991]: I0929 11:45:04.389220 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7"] Sep 29 11:45:04 crc kubenswrapper[4991]: I0929 11:45:04.400476 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319060-hxxx7"] Sep 29 11:45:04 crc kubenswrapper[4991]: I0929 11:45:04.947458 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59ea0ca7-8b75-4d88-bfce-dbc64c637e0d" path="/var/lib/kubelet/pods/59ea0ca7-8b75-4d88-bfce-dbc64c637e0d/volumes" Sep 29 11:45:18 crc kubenswrapper[4991]: I0929 11:45:18.926432 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:45:18 crc kubenswrapper[4991]: E0929 11:45:18.927440 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:45:20 crc kubenswrapper[4991]: I0929 11:45:20.145920 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sdnn7"] Sep 29 11:45:20 crc kubenswrapper[4991]: E0929 11:45:20.146536 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3da2ced-b1b2-46eb-813d-d6edcab246d8" containerName="collect-profiles" Sep 29 11:45:20 crc kubenswrapper[4991]: I0929 11:45:20.146551 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3da2ced-b1b2-46eb-813d-d6edcab246d8" containerName="collect-profiles" Sep 29 11:45:20 crc kubenswrapper[4991]: I0929 11:45:20.146850 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3da2ced-b1b2-46eb-813d-d6edcab246d8" containerName="collect-profiles" Sep 29 11:45:20 crc kubenswrapper[4991]: I0929 11:45:20.148641 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sdnn7" Sep 29 11:45:20 crc kubenswrapper[4991]: I0929 11:45:20.169261 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sdnn7"] Sep 29 11:45:20 crc kubenswrapper[4991]: I0929 11:45:20.321836 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b5xt\" (UniqueName: \"kubernetes.io/projected/8eafbf59-8f48-4038-9f00-6e2872f8fc68-kube-api-access-9b5xt\") pod \"certified-operators-sdnn7\" (UID: \"8eafbf59-8f48-4038-9f00-6e2872f8fc68\") " pod="openshift-marketplace/certified-operators-sdnn7" Sep 29 11:45:20 crc kubenswrapper[4991]: I0929 11:45:20.322438 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eafbf59-8f48-4038-9f00-6e2872f8fc68-catalog-content\") pod \"certified-operators-sdnn7\" (UID: \"8eafbf59-8f48-4038-9f00-6e2872f8fc68\") " pod="openshift-marketplace/certified-operators-sdnn7" Sep 29 11:45:20 crc kubenswrapper[4991]: I0929 11:45:20.322540 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eafbf59-8f48-4038-9f00-6e2872f8fc68-utilities\") pod \"certified-operators-sdnn7\" (UID: \"8eafbf59-8f48-4038-9f00-6e2872f8fc68\") " pod="openshift-marketplace/certified-operators-sdnn7" Sep 29 11:45:20 crc kubenswrapper[4991]: I0929 11:45:20.424932 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b5xt\" (UniqueName: \"kubernetes.io/projected/8eafbf59-8f48-4038-9f00-6e2872f8fc68-kube-api-access-9b5xt\") pod \"certified-operators-sdnn7\" (UID: \"8eafbf59-8f48-4038-9f00-6e2872f8fc68\") " pod="openshift-marketplace/certified-operators-sdnn7" Sep 29 11:45:20 crc kubenswrapper[4991]: I0929 11:45:20.425021 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eafbf59-8f48-4038-9f00-6e2872f8fc68-catalog-content\") pod \"certified-operators-sdnn7\" (UID: \"8eafbf59-8f48-4038-9f00-6e2872f8fc68\") " pod="openshift-marketplace/certified-operators-sdnn7" Sep 29 11:45:20 crc kubenswrapper[4991]: I0929 11:45:20.425123 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eafbf59-8f48-4038-9f00-6e2872f8fc68-utilities\") 
pod \"certified-operators-sdnn7\" (UID: \"8eafbf59-8f48-4038-9f00-6e2872f8fc68\") " pod="openshift-marketplace/certified-operators-sdnn7" Sep 29 11:45:20 crc kubenswrapper[4991]: I0929 11:45:20.425570 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eafbf59-8f48-4038-9f00-6e2872f8fc68-catalog-content\") pod \"certified-operators-sdnn7\" (UID: \"8eafbf59-8f48-4038-9f00-6e2872f8fc68\") " pod="openshift-marketplace/certified-operators-sdnn7" Sep 29 11:45:20 crc kubenswrapper[4991]: I0929 11:45:20.425592 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eafbf59-8f48-4038-9f00-6e2872f8fc68-utilities\") pod \"certified-operators-sdnn7\" (UID: \"8eafbf59-8f48-4038-9f00-6e2872f8fc68\") " pod="openshift-marketplace/certified-operators-sdnn7" Sep 29 11:45:20 crc kubenswrapper[4991]: I0929 11:45:20.452464 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b5xt\" (UniqueName: \"kubernetes.io/projected/8eafbf59-8f48-4038-9f00-6e2872f8fc68-kube-api-access-9b5xt\") pod \"certified-operators-sdnn7\" (UID: \"8eafbf59-8f48-4038-9f00-6e2872f8fc68\") " pod="openshift-marketplace/certified-operators-sdnn7" Sep 29 11:45:20 crc kubenswrapper[4991]: I0929 11:45:20.484808 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sdnn7" Sep 29 11:45:21 crc kubenswrapper[4991]: I0929 11:45:21.036321 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sdnn7"] Sep 29 11:45:21 crc kubenswrapper[4991]: I0929 11:45:21.081284 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sdnn7" event={"ID":"8eafbf59-8f48-4038-9f00-6e2872f8fc68","Type":"ContainerStarted","Data":"62115a703784a765fdba24577c2df901c56a9bad811e273b96692bc7889a74d2"} Sep 29 11:45:22 crc kubenswrapper[4991]: I0929 11:45:22.094832 4991 generic.go:334] "Generic (PLEG): container finished" podID="8eafbf59-8f48-4038-9f00-6e2872f8fc68" containerID="94060e9321be3476c26178d906229e4be4bec987c6955813272f4380bc45faaa" exitCode=0 Sep 29 11:45:22 crc kubenswrapper[4991]: I0929 11:45:22.095004 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sdnn7" event={"ID":"8eafbf59-8f48-4038-9f00-6e2872f8fc68","Type":"ContainerDied","Data":"94060e9321be3476c26178d906229e4be4bec987c6955813272f4380bc45faaa"} Sep 29 11:45:24 crc kubenswrapper[4991]: I0929 11:45:24.119360 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sdnn7" event={"ID":"8eafbf59-8f48-4038-9f00-6e2872f8fc68","Type":"ContainerStarted","Data":"e9947b0944ac7a2f488227562f2e396296167bf1768461c3ed95dba08961e4ad"} Sep 29 11:45:26 crc kubenswrapper[4991]: I0929 11:45:26.146998 4991 generic.go:334] "Generic (PLEG): container finished" podID="8eafbf59-8f48-4038-9f00-6e2872f8fc68" containerID="e9947b0944ac7a2f488227562f2e396296167bf1768461c3ed95dba08961e4ad" exitCode=0 Sep 29 11:45:26 crc kubenswrapper[4991]: I0929 11:45:26.147094 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sdnn7" event={"ID":"8eafbf59-8f48-4038-9f00-6e2872f8fc68","Type":"ContainerDied","Data":"e9947b0944ac7a2f488227562f2e396296167bf1768461c3ed95dba08961e4ad"} Sep 29 11:45:26 crc kubenswrapper[4991]: I0929 11:45:26.562903 4991 scope.go:117] 
"RemoveContainer" containerID="16b69310fa1a06ff68020ad8d4cfa9a539f774f30fb2bfa29e5f653a9f760ed1" Sep 29 11:45:27 crc kubenswrapper[4991]: I0929 11:45:27.158894 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sdnn7" event={"ID":"8eafbf59-8f48-4038-9f00-6e2872f8fc68","Type":"ContainerStarted","Data":"c07707263eccbf719d7bf564d4dde874e2f8f2a931ab05e9c4df1bec00fc6412"} Sep 29 11:45:27 crc kubenswrapper[4991]: I0929 11:45:27.193314 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sdnn7" podStartSLOduration=2.727977971 podStartE2EDuration="7.19329852s" podCreationTimestamp="2025-09-29 11:45:20 +0000 UTC" firstStartedPulling="2025-09-29 11:45:22.097759401 +0000 UTC m=+7657.953687429" lastFinishedPulling="2025-09-29 11:45:26.56307994 +0000 UTC m=+7662.419007978" observedRunningTime="2025-09-29 11:45:27.185589039 +0000 UTC m=+7663.041517067" watchObservedRunningTime="2025-09-29 11:45:27.19329852 +0000 UTC m=+7663.049226548" Sep 29 11:45:30 crc kubenswrapper[4991]: I0929 11:45:30.484991 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sdnn7" Sep 29 11:45:30 crc kubenswrapper[4991]: I0929 11:45:30.485254 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sdnn7" Sep 29 11:45:30 crc kubenswrapper[4991]: I0929 11:45:30.532865 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sdnn7" Sep 29 11:45:30 crc kubenswrapper[4991]: I0929 11:45:30.926866 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:45:30 crc kubenswrapper[4991]: E0929 11:45:30.927171 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:45:31 crc kubenswrapper[4991]: I0929 11:45:31.299845 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sdnn7" Sep 29 11:45:31 crc kubenswrapper[4991]: I0929 11:45:31.360773 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sdnn7"] Sep 29 11:45:33 crc kubenswrapper[4991]: I0929 11:45:33.217861 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sdnn7" podUID="8eafbf59-8f48-4038-9f00-6e2872f8fc68" containerName="registry-server" containerID="cri-o://c07707263eccbf719d7bf564d4dde874e2f8f2a931ab05e9c4df1bec00fc6412" gracePeriod=2 Sep 29 11:45:33 crc kubenswrapper[4991]: I0929 11:45:33.793345 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sdnn7" Sep 29 11:45:33 crc kubenswrapper[4991]: I0929 11:45:33.880783 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eafbf59-8f48-4038-9f00-6e2872f8fc68-catalog-content\") pod \"8eafbf59-8f48-4038-9f00-6e2872f8fc68\" (UID: \"8eafbf59-8f48-4038-9f00-6e2872f8fc68\") " Sep 29 11:45:33 crc kubenswrapper[4991]: I0929 11:45:33.881029 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eafbf59-8f48-4038-9f00-6e2872f8fc68-utilities\") pod \"8eafbf59-8f48-4038-9f00-6e2872f8fc68\" (UID: \"8eafbf59-8f48-4038-9f00-6e2872f8fc68\") " Sep 29 11:45:33 crc kubenswrapper[4991]: I0929 11:45:33.881301 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b5xt\" (UniqueName: \"kubernetes.io/projected/8eafbf59-8f48-4038-9f00-6e2872f8fc68-kube-api-access-9b5xt\") pod \"8eafbf59-8f48-4038-9f00-6e2872f8fc68\" (UID: \"8eafbf59-8f48-4038-9f00-6e2872f8fc68\") " Sep 29 11:45:33 crc kubenswrapper[4991]: I0929 11:45:33.882159 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eafbf59-8f48-4038-9f00-6e2872f8fc68-utilities" (OuterVolumeSpecName: "utilities") pod "8eafbf59-8f48-4038-9f00-6e2872f8fc68" (UID: "8eafbf59-8f48-4038-9f00-6e2872f8fc68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:45:33 crc kubenswrapper[4991]: I0929 11:45:33.888239 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eafbf59-8f48-4038-9f00-6e2872f8fc68-kube-api-access-9b5xt" (OuterVolumeSpecName: "kube-api-access-9b5xt") pod "8eafbf59-8f48-4038-9f00-6e2872f8fc68" (UID: "8eafbf59-8f48-4038-9f00-6e2872f8fc68"). InnerVolumeSpecName "kube-api-access-9b5xt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:45:33 crc kubenswrapper[4991]: I0929 11:45:33.936800 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eafbf59-8f48-4038-9f00-6e2872f8fc68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8eafbf59-8f48-4038-9f00-6e2872f8fc68" (UID: "8eafbf59-8f48-4038-9f00-6e2872f8fc68"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:45:33 crc kubenswrapper[4991]: I0929 11:45:33.985061 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eafbf59-8f48-4038-9f00-6e2872f8fc68-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 11:45:33 crc kubenswrapper[4991]: I0929 11:45:33.985111 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b5xt\" (UniqueName: \"kubernetes.io/projected/8eafbf59-8f48-4038-9f00-6e2872f8fc68-kube-api-access-9b5xt\") on node \"crc\" DevicePath \"\"" Sep 29 11:45:33 crc kubenswrapper[4991]: I0929 11:45:33.985129 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eafbf59-8f48-4038-9f00-6e2872f8fc68-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 11:45:34 crc kubenswrapper[4991]: I0929 11:45:34.234156 4991 generic.go:334] "Generic (PLEG): container finished" podID="8eafbf59-8f48-4038-9f00-6e2872f8fc68" containerID="c07707263eccbf719d7bf564d4dde874e2f8f2a931ab05e9c4df1bec00fc6412" exitCode=0 Sep 29 11:45:34 crc kubenswrapper[4991]: I0929 11:45:34.234480 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sdnn7" event={"ID":"8eafbf59-8f48-4038-9f00-6e2872f8fc68","Type":"ContainerDied","Data":"c07707263eccbf719d7bf564d4dde874e2f8f2a931ab05e9c4df1bec00fc6412"} Sep 29 11:45:34 crc kubenswrapper[4991]: I0929 11:45:34.234693 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sdnn7" event={"ID":"8eafbf59-8f48-4038-9f00-6e2872f8fc68","Type":"ContainerDied","Data":"62115a703784a765fdba24577c2df901c56a9bad811e273b96692bc7889a74d2"} Sep 29 11:45:34 crc kubenswrapper[4991]: I0929 11:45:34.234718 4991 scope.go:117] "RemoveContainer" containerID="c07707263eccbf719d7bf564d4dde874e2f8f2a931ab05e9c4df1bec00fc6412" Sep 29 11:45:34 crc kubenswrapper[4991]: I0929 11:45:34.234883 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sdnn7" Sep 29 11:45:34 crc kubenswrapper[4991]: I0929 11:45:34.279306 4991 scope.go:117] "RemoveContainer" containerID="e9947b0944ac7a2f488227562f2e396296167bf1768461c3ed95dba08961e4ad" Sep 29 11:45:34 crc kubenswrapper[4991]: I0929 11:45:34.288072 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sdnn7"] Sep 29 11:45:34 crc kubenswrapper[4991]: I0929 11:45:34.309062 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sdnn7"] Sep 29 11:45:34 crc kubenswrapper[4991]: I0929 11:45:34.338863 4991 scope.go:117] "RemoveContainer" containerID="94060e9321be3476c26178d906229e4be4bec987c6955813272f4380bc45faaa" Sep 29 11:45:34 crc kubenswrapper[4991]: I0929 11:45:34.399571 4991 scope.go:117] "RemoveContainer" containerID="c07707263eccbf719d7bf564d4dde874e2f8f2a931ab05e9c4df1bec00fc6412" Sep 29 11:45:34 crc kubenswrapper[4991]: E0929 11:45:34.400473 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c07707263eccbf719d7bf564d4dde874e2f8f2a931ab05e9c4df1bec00fc6412\": container with ID starting with c07707263eccbf719d7bf564d4dde874e2f8f2a931ab05e9c4df1bec00fc6412 not found: ID does not exist" containerID="c07707263eccbf719d7bf564d4dde874e2f8f2a931ab05e9c4df1bec00fc6412" Sep 29 11:45:34 crc kubenswrapper[4991]: I0929 11:45:34.400632 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c07707263eccbf719d7bf564d4dde874e2f8f2a931ab05e9c4df1bec00fc6412"} err="failed to get container status \"c07707263eccbf719d7bf564d4dde874e2f8f2a931ab05e9c4df1bec00fc6412\": rpc error: code = NotFound desc = could not find container \"c07707263eccbf719d7bf564d4dde874e2f8f2a931ab05e9c4df1bec00fc6412\": container with ID starting with c07707263eccbf719d7bf564d4dde874e2f8f2a931ab05e9c4df1bec00fc6412 not found: ID does not exist" Sep 29 11:45:34 crc kubenswrapper[4991]: I0929 11:45:34.400664 4991 scope.go:117] "RemoveContainer" containerID="e9947b0944ac7a2f488227562f2e396296167bf1768461c3ed95dba08961e4ad" Sep 29 11:45:34 crc kubenswrapper[4991]: E0929 11:45:34.401165 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9947b0944ac7a2f488227562f2e396296167bf1768461c3ed95dba08961e4ad\": container with ID starting with e9947b0944ac7a2f488227562f2e396296167bf1768461c3ed95dba08961e4ad not found: ID does not exist" containerID="e9947b0944ac7a2f488227562f2e396296167bf1768461c3ed95dba08961e4ad" Sep 29 11:45:34 crc kubenswrapper[4991]: I0929 11:45:34.401196 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9947b0944ac7a2f488227562f2e396296167bf1768461c3ed95dba08961e4ad"} err="failed to get container status \"e9947b0944ac7a2f488227562f2e396296167bf1768461c3ed95dba08961e4ad\": rpc error: code = NotFound desc = could not find container \"e9947b0944ac7a2f488227562f2e396296167bf1768461c3ed95dba08961e4ad\": container with ID starting with e9947b0944ac7a2f488227562f2e396296167bf1768461c3ed95dba08961e4ad not found: ID does not exist" Sep 29 11:45:34 crc kubenswrapper[4991]: I0929 11:45:34.401217 4991 scope.go:117] "RemoveContainer" containerID="94060e9321be3476c26178d906229e4be4bec987c6955813272f4380bc45faaa" Sep 29 11:45:34 crc kubenswrapper[4991]: E0929 11:45:34.401469 4991 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"94060e9321be3476c26178d906229e4be4bec987c6955813272f4380bc45faaa\": container with ID starting with 94060e9321be3476c26178d906229e4be4bec987c6955813272f4380bc45faaa not found: ID does not exist" containerID="94060e9321be3476c26178d906229e4be4bec987c6955813272f4380bc45faaa" Sep 29 11:45:34 crc kubenswrapper[4991]: I0929 11:45:34.401494 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94060e9321be3476c26178d906229e4be4bec987c6955813272f4380bc45faaa"} err="failed to get container status \"94060e9321be3476c26178d906229e4be4bec987c6955813272f4380bc45faaa\": rpc error: code = NotFound desc = could not find container \"94060e9321be3476c26178d906229e4be4bec987c6955813272f4380bc45faaa\": container with ID starting with 94060e9321be3476c26178d906229e4be4bec987c6955813272f4380bc45faaa not found: ID does not exist" Sep 29 11:45:34 crc kubenswrapper[4991]: I0929 11:45:34.941545 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eafbf59-8f48-4038-9f00-6e2872f8fc68" path="/var/lib/kubelet/pods/8eafbf59-8f48-4038-9f00-6e2872f8fc68/volumes" Sep 29 11:45:41 crc kubenswrapper[4991]: I0929 11:45:41.042740 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rjx9l"] Sep 29 11:45:41 crc kubenswrapper[4991]: E0929 11:45:41.043747 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eafbf59-8f48-4038-9f00-6e2872f8fc68" containerName="extract-utilities" Sep 29 11:45:41 crc kubenswrapper[4991]: I0929 11:45:41.043763 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eafbf59-8f48-4038-9f00-6e2872f8fc68" containerName="extract-utilities" Sep 29 11:45:41 crc kubenswrapper[4991]: E0929 11:45:41.043794 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eafbf59-8f48-4038-9f00-6e2872f8fc68" containerName="registry-server" Sep 29 11:45:41 crc kubenswrapper[4991]: I0929 11:45:41.043802 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eafbf59-8f48-4038-9f00-6e2872f8fc68" containerName="registry-server" Sep 29 11:45:41 crc kubenswrapper[4991]: E0929 11:45:41.043818 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eafbf59-8f48-4038-9f00-6e2872f8fc68" containerName="extract-content" Sep 29 11:45:41 crc kubenswrapper[4991]: I0929 11:45:41.043823 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eafbf59-8f48-4038-9f00-6e2872f8fc68" containerName="extract-content" Sep 29 11:45:41 crc kubenswrapper[4991]: I0929 11:45:41.044087 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eafbf59-8f48-4038-9f00-6e2872f8fc68" containerName="registry-server" Sep 29 11:45:41 crc kubenswrapper[4991]: I0929 11:45:41.045843 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjx9l" Sep 29 11:45:41 crc kubenswrapper[4991]: I0929 11:45:41.053883 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjx9l"] Sep 29 11:45:41 crc kubenswrapper[4991]: I0929 11:45:41.083743 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdb5c314-da0b-454c-a792-f00e7bd0682a-utilities\") pod \"redhat-marketplace-rjx9l\" (UID: \"bdb5c314-da0b-454c-a792-f00e7bd0682a\") " pod="openshift-marketplace/redhat-marketplace-rjx9l" Sep 29 11:45:41 crc kubenswrapper[4991]: I0929 11:45:41.083797 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9mx9\" (UniqueName: \"kubernetes.io/projected/bdb5c314-da0b-454c-a792-f00e7bd0682a-kube-api-access-n9mx9\") pod \"redhat-marketplace-rjx9l\" (UID: \"bdb5c314-da0b-454c-a792-f00e7bd0682a\") " pod="openshift-marketplace/redhat-marketplace-rjx9l" Sep 29 11:45:41 crc kubenswrapper[4991]: I0929 11:45:41.083906 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdb5c314-da0b-454c-a792-f00e7bd0682a-catalog-content\") pod \"redhat-marketplace-rjx9l\" (UID: \"bdb5c314-da0b-454c-a792-f00e7bd0682a\") " pod="openshift-marketplace/redhat-marketplace-rjx9l" Sep 29 11:45:41 crc kubenswrapper[4991]: I0929 11:45:41.186637 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdb5c314-da0b-454c-a792-f00e7bd0682a-catalog-content\") pod \"redhat-marketplace-rjx9l\" (UID: \"bdb5c314-da0b-454c-a792-f00e7bd0682a\") " pod="openshift-marketplace/redhat-marketplace-rjx9l" Sep 29 11:45:41 crc kubenswrapper[4991]: I0929 11:45:41.186869 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdb5c314-da0b-454c-a792-f00e7bd0682a-utilities\") pod \"redhat-marketplace-rjx9l\" (UID: \"bdb5c314-da0b-454c-a792-f00e7bd0682a\") " pod="openshift-marketplace/redhat-marketplace-rjx9l" Sep 29 11:45:41 crc kubenswrapper[4991]: I0929 11:45:41.186915 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9mx9\" (UniqueName: \"kubernetes.io/projected/bdb5c314-da0b-454c-a792-f00e7bd0682a-kube-api-access-n9mx9\") pod \"redhat-marketplace-rjx9l\" (UID: \"bdb5c314-da0b-454c-a792-f00e7bd0682a\") " pod="openshift-marketplace/redhat-marketplace-rjx9l" Sep 29 11:45:41 crc kubenswrapper[4991]: I0929 11:45:41.187203 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdb5c314-da0b-454c-a792-f00e7bd0682a-catalog-content\") pod \"redhat-marketplace-rjx9l\" (UID: \"bdb5c314-da0b-454c-a792-f00e7bd0682a\") " pod="openshift-marketplace/redhat-marketplace-rjx9l" Sep 29 11:45:41 crc kubenswrapper[4991]: I0929 11:45:41.187306 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdb5c314-da0b-454c-a792-f00e7bd0682a-utilities\") pod \"redhat-marketplace-rjx9l\" (UID: \"bdb5c314-da0b-454c-a792-f00e7bd0682a\") " pod="openshift-marketplace/redhat-marketplace-rjx9l" Sep 29 11:45:41 crc kubenswrapper[4991]: I0929 11:45:41.210659 4991 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-n9mx9\" (UniqueName: \"kubernetes.io/projected/bdb5c314-da0b-454c-a792-f00e7bd0682a-kube-api-access-n9mx9\") pod \"redhat-marketplace-rjx9l\" (UID: \"bdb5c314-da0b-454c-a792-f00e7bd0682a\") " pod="openshift-marketplace/redhat-marketplace-rjx9l" Sep 29 11:45:41 crc kubenswrapper[4991]: I0929 11:45:41.377502 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjx9l" Sep 29 11:45:41 crc kubenswrapper[4991]: I0929 11:45:41.859146 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjx9l"] Sep 29 11:45:42 crc kubenswrapper[4991]: I0929 11:45:42.347007 4991 generic.go:334] "Generic (PLEG): container finished" podID="bdb5c314-da0b-454c-a792-f00e7bd0682a" containerID="2dd120d6702690243832c57042ad5185dad7a42dfab2f0922994ece3eef9fa75" exitCode=0 Sep 29 11:45:42 crc kubenswrapper[4991]: I0929 11:45:42.347055 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjx9l" event={"ID":"bdb5c314-da0b-454c-a792-f00e7bd0682a","Type":"ContainerDied","Data":"2dd120d6702690243832c57042ad5185dad7a42dfab2f0922994ece3eef9fa75"} Sep 29 11:45:42 crc kubenswrapper[4991]: I0929 11:45:42.347366 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjx9l" event={"ID":"bdb5c314-da0b-454c-a792-f00e7bd0682a","Type":"ContainerStarted","Data":"15c2ffdb8f447dea2e2db47210a4e3729f0f1107414f2538f19de665e8749670"} Sep 29 11:45:43 crc kubenswrapper[4991]: I0929 11:45:43.926124 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:45:43 crc kubenswrapper[4991]: E0929 11:45:43.926920 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:45:44 crc kubenswrapper[4991]: I0929 11:45:44.379275 4991 generic.go:334] "Generic (PLEG): container finished" podID="bdb5c314-da0b-454c-a792-f00e7bd0682a" containerID="a7479f61b38bf17d7b5050dd272e10d62c26cc4950cfe6ada7b01dd9332f1eab" exitCode=0 Sep 29 11:45:44 crc kubenswrapper[4991]: I0929 11:45:44.379327 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjx9l" event={"ID":"bdb5c314-da0b-454c-a792-f00e7bd0682a","Type":"ContainerDied","Data":"a7479f61b38bf17d7b5050dd272e10d62c26cc4950cfe6ada7b01dd9332f1eab"} Sep 29 11:45:45 crc kubenswrapper[4991]: I0929 11:45:45.394511 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjx9l" event={"ID":"bdb5c314-da0b-454c-a792-f00e7bd0682a","Type":"ContainerStarted","Data":"6dcbd3d0d2753391e422b21bf298b04f60589f9789d5a38234ff1b51437e7bbf"} Sep 29 11:45:45 crc kubenswrapper[4991]: I0929 11:45:45.415061 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rjx9l" podStartSLOduration=2.765425301 podStartE2EDuration="5.415041658s" podCreationTimestamp="2025-09-29 11:45:40 +0000 UTC" firstStartedPulling="2025-09-29 11:45:42.349815903 +0000 UTC m=+7678.205743931" lastFinishedPulling="2025-09-29 
Sep 29 11:45:51 crc kubenswrapper[4991]: I0929 11:45:51.378060 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rjx9l"
Sep 29 11:45:51 crc kubenswrapper[4991]: I0929 11:45:51.378721 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rjx9l"
Sep 29 11:45:51 crc kubenswrapper[4991]: I0929 11:45:51.457986 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rjx9l"
Sep 29 11:45:51 crc kubenswrapper[4991]: I0929 11:45:51.547520 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rjx9l"
Sep 29 11:45:51 crc kubenswrapper[4991]: I0929 11:45:51.702785 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjx9l"]
Sep 29 11:45:53 crc kubenswrapper[4991]: I0929 11:45:53.486416 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rjx9l" podUID="bdb5c314-da0b-454c-a792-f00e7bd0682a" containerName="registry-server" containerID="cri-o://6dcbd3d0d2753391e422b21bf298b04f60589f9789d5a38234ff1b51437e7bbf" gracePeriod=2
Sep 29 11:45:53 crc kubenswrapper[4991]: I0929 11:45:53.993931 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjx9l"
Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.038602 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdb5c314-da0b-454c-a792-f00e7bd0682a-catalog-content\") pod \"bdb5c314-da0b-454c-a792-f00e7bd0682a\" (UID: \"bdb5c314-da0b-454c-a792-f00e7bd0682a\") "
Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.038961 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdb5c314-da0b-454c-a792-f00e7bd0682a-utilities\") pod \"bdb5c314-da0b-454c-a792-f00e7bd0682a\" (UID: \"bdb5c314-da0b-454c-a792-f00e7bd0682a\") "
Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.039172 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9mx9\" (UniqueName: \"kubernetes.io/projected/bdb5c314-da0b-454c-a792-f00e7bd0682a-kube-api-access-n9mx9\") pod \"bdb5c314-da0b-454c-a792-f00e7bd0682a\" (UID: \"bdb5c314-da0b-454c-a792-f00e7bd0682a\") "
Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.040115 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdb5c314-da0b-454c-a792-f00e7bd0682a-utilities" (OuterVolumeSpecName: "utilities") pod "bdb5c314-da0b-454c-a792-f00e7bd0682a" (UID: "bdb5c314-da0b-454c-a792-f00e7bd0682a"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.044947 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdb5c314-da0b-454c-a792-f00e7bd0682a-kube-api-access-n9mx9" (OuterVolumeSpecName: "kube-api-access-n9mx9") pod "bdb5c314-da0b-454c-a792-f00e7bd0682a" (UID: "bdb5c314-da0b-454c-a792-f00e7bd0682a"). InnerVolumeSpecName "kube-api-access-n9mx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.053358 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdb5c314-da0b-454c-a792-f00e7bd0682a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdb5c314-da0b-454c-a792-f00e7bd0682a" (UID: "bdb5c314-da0b-454c-a792-f00e7bd0682a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.142458 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdb5c314-da0b-454c-a792-f00e7bd0682a-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.142500 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdb5c314-da0b-454c-a792-f00e7bd0682a-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.142518 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9mx9\" (UniqueName: \"kubernetes.io/projected/bdb5c314-da0b-454c-a792-f00e7bd0682a-kube-api-access-n9mx9\") on node \"crc\" DevicePath \"\"" Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.497833 4991 generic.go:334] "Generic (PLEG): container finished" podID="bdb5c314-da0b-454c-a792-f00e7bd0682a" containerID="6dcbd3d0d2753391e422b21bf298b04f60589f9789d5a38234ff1b51437e7bbf" exitCode=0 Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.497891 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjx9l" event={"ID":"bdb5c314-da0b-454c-a792-f00e7bd0682a","Type":"ContainerDied","Data":"6dcbd3d0d2753391e422b21bf298b04f60589f9789d5a38234ff1b51437e7bbf"} Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.497923 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjx9l" Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.498039 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjx9l" event={"ID":"bdb5c314-da0b-454c-a792-f00e7bd0682a","Type":"ContainerDied","Data":"15c2ffdb8f447dea2e2db47210a4e3729f0f1107414f2538f19de665e8749670"} Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.498084 4991 scope.go:117] "RemoveContainer" containerID="6dcbd3d0d2753391e422b21bf298b04f60589f9789d5a38234ff1b51437e7bbf" Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.518702 4991 scope.go:117] "RemoveContainer" containerID="a7479f61b38bf17d7b5050dd272e10d62c26cc4950cfe6ada7b01dd9332f1eab" Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.545208 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjx9l"] Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.558899 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjx9l"] Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.566460 4991 scope.go:117] "RemoveContainer" containerID="2dd120d6702690243832c57042ad5185dad7a42dfab2f0922994ece3eef9fa75" Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.628096 4991 scope.go:117] "RemoveContainer" containerID="6dcbd3d0d2753391e422b21bf298b04f60589f9789d5a38234ff1b51437e7bbf" Sep 29 11:45:54 crc kubenswrapper[4991]: E0929 11:45:54.629240 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dcbd3d0d2753391e422b21bf298b04f60589f9789d5a38234ff1b51437e7bbf\": container with ID starting with 6dcbd3d0d2753391e422b21bf298b04f60589f9789d5a38234ff1b51437e7bbf not found: ID does not exist" containerID="6dcbd3d0d2753391e422b21bf298b04f60589f9789d5a38234ff1b51437e7bbf" Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.629283 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dcbd3d0d2753391e422b21bf298b04f60589f9789d5a38234ff1b51437e7bbf"} err="failed to get container status \"6dcbd3d0d2753391e422b21bf298b04f60589f9789d5a38234ff1b51437e7bbf\": rpc error: code = NotFound desc = could not find container \"6dcbd3d0d2753391e422b21bf298b04f60589f9789d5a38234ff1b51437e7bbf\": container with ID starting with 6dcbd3d0d2753391e422b21bf298b04f60589f9789d5a38234ff1b51437e7bbf not found: ID does not exist" Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.629308 4991 scope.go:117] "RemoveContainer" containerID="a7479f61b38bf17d7b5050dd272e10d62c26cc4950cfe6ada7b01dd9332f1eab" Sep 29 11:45:54 crc kubenswrapper[4991]: E0929 11:45:54.629639 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7479f61b38bf17d7b5050dd272e10d62c26cc4950cfe6ada7b01dd9332f1eab\": container with ID starting with a7479f61b38bf17d7b5050dd272e10d62c26cc4950cfe6ada7b01dd9332f1eab not found: ID does not exist" containerID="a7479f61b38bf17d7b5050dd272e10d62c26cc4950cfe6ada7b01dd9332f1eab" Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.629659 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7479f61b38bf17d7b5050dd272e10d62c26cc4950cfe6ada7b01dd9332f1eab"} err="failed to get container status \"a7479f61b38bf17d7b5050dd272e10d62c26cc4950cfe6ada7b01dd9332f1eab\": rpc error: code = NotFound desc = could not find 
container \"a7479f61b38bf17d7b5050dd272e10d62c26cc4950cfe6ada7b01dd9332f1eab\": container with ID starting with a7479f61b38bf17d7b5050dd272e10d62c26cc4950cfe6ada7b01dd9332f1eab not found: ID does not exist" Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.629671 4991 scope.go:117] "RemoveContainer" containerID="2dd120d6702690243832c57042ad5185dad7a42dfab2f0922994ece3eef9fa75" Sep 29 11:45:54 crc kubenswrapper[4991]: E0929 11:45:54.629874 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dd120d6702690243832c57042ad5185dad7a42dfab2f0922994ece3eef9fa75\": container with ID starting with 2dd120d6702690243832c57042ad5185dad7a42dfab2f0922994ece3eef9fa75 not found: ID does not exist" containerID="2dd120d6702690243832c57042ad5185dad7a42dfab2f0922994ece3eef9fa75" Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.629897 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dd120d6702690243832c57042ad5185dad7a42dfab2f0922994ece3eef9fa75"} err="failed to get container status \"2dd120d6702690243832c57042ad5185dad7a42dfab2f0922994ece3eef9fa75\": rpc error: code = NotFound desc = could not find container \"2dd120d6702690243832c57042ad5185dad7a42dfab2f0922994ece3eef9fa75\": container with ID starting with 2dd120d6702690243832c57042ad5185dad7a42dfab2f0922994ece3eef9fa75 not found: ID does not exist" Sep 29 11:45:54 crc kubenswrapper[4991]: I0929 11:45:54.939274 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdb5c314-da0b-454c-a792-f00e7bd0682a" path="/var/lib/kubelet/pods/bdb5c314-da0b-454c-a792-f00e7bd0682a/volumes" Sep 29 11:45:56 crc kubenswrapper[4991]: I0929 11:45:56.926513 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:45:56 crc kubenswrapper[4991]: E0929 11:45:56.927303 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:46:08 crc kubenswrapper[4991]: I0929 11:46:08.926673 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:46:08 crc kubenswrapper[4991]: E0929 11:46:08.927904 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:46:20 crc kubenswrapper[4991]: I0929 11:46:20.926407 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:46:20 crc kubenswrapper[4991]: E0929 11:46:20.927308 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Sep 29 11:46:31 crc kubenswrapper[4991]: I0929 11:46:31.926840 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614"
Sep 29 11:46:31 crc kubenswrapper[4991]: E0929 11:46:31.928149 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:46:44 crc kubenswrapper[4991]: I0929 11:46:44.933448 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614"
Sep 29 11:46:46 crc kubenswrapper[4991]: I0929 11:46:46.082319 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"bcdecb15348f6ae43aa58065ce2240ec8abc51089c5f6a1c3659fb7b82d39de7"}
Sep 29 11:47:49 crc kubenswrapper[4991]: I0929 11:47:49.551291 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p8qzb"]
Sep 29 11:47:49 crc kubenswrapper[4991]: E0929 11:47:49.552860 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdb5c314-da0b-454c-a792-f00e7bd0682a" containerName="extract-utilities"
Sep 29 11:47:49 crc kubenswrapper[4991]: I0929 11:47:49.552887 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdb5c314-da0b-454c-a792-f00e7bd0682a" containerName="extract-utilities"
Sep 29 11:47:49 crc kubenswrapper[4991]: E0929 11:47:49.552942 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdb5c314-da0b-454c-a792-f00e7bd0682a" containerName="registry-server"
Sep 29 11:47:49 crc kubenswrapper[4991]: I0929 11:47:49.552968 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdb5c314-da0b-454c-a792-f00e7bd0682a" containerName="registry-server"
Sep 29 11:47:49 crc kubenswrapper[4991]: E0929 11:47:49.552996 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdb5c314-da0b-454c-a792-f00e7bd0682a" containerName="extract-content"
Sep 29 11:47:49 crc kubenswrapper[4991]: I0929 11:47:49.553008 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdb5c314-da0b-454c-a792-f00e7bd0682a" containerName="extract-content"
Sep 29 11:47:49 crc kubenswrapper[4991]: I0929 11:47:49.553662 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdb5c314-da0b-454c-a792-f00e7bd0682a" containerName="registry-server"
Sep 29 11:47:49 crc kubenswrapper[4991]: I0929 11:47:49.569684 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p8qzb"]
Sep 29 11:47:49 crc kubenswrapper[4991]: I0929 11:47:49.569814 4991 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-p8qzb" Sep 29 11:47:49 crc kubenswrapper[4991]: I0929 11:47:49.692448 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0da4e2-bad6-4e61-91ad-28a737210a71-catalog-content\") pod \"redhat-operators-p8qzb\" (UID: \"9d0da4e2-bad6-4e61-91ad-28a737210a71\") " pod="openshift-marketplace/redhat-operators-p8qzb" Sep 29 11:47:49 crc kubenswrapper[4991]: I0929 11:47:49.692778 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zvbk\" (UniqueName: \"kubernetes.io/projected/9d0da4e2-bad6-4e61-91ad-28a737210a71-kube-api-access-5zvbk\") pod \"redhat-operators-p8qzb\" (UID: \"9d0da4e2-bad6-4e61-91ad-28a737210a71\") " pod="openshift-marketplace/redhat-operators-p8qzb" Sep 29 11:47:49 crc kubenswrapper[4991]: I0929 11:47:49.692942 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0da4e2-bad6-4e61-91ad-28a737210a71-utilities\") pod \"redhat-operators-p8qzb\" (UID: \"9d0da4e2-bad6-4e61-91ad-28a737210a71\") " pod="openshift-marketplace/redhat-operators-p8qzb" Sep 29 11:47:49 crc kubenswrapper[4991]: I0929 11:47:49.794403 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0da4e2-bad6-4e61-91ad-28a737210a71-utilities\") pod \"redhat-operators-p8qzb\" (UID: \"9d0da4e2-bad6-4e61-91ad-28a737210a71\") " pod="openshift-marketplace/redhat-operators-p8qzb" Sep 29 11:47:49 crc kubenswrapper[4991]: I0929 11:47:49.794570 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0da4e2-bad6-4e61-91ad-28a737210a71-catalog-content\") pod \"redhat-operators-p8qzb\" (UID: \"9d0da4e2-bad6-4e61-91ad-28a737210a71\") " pod="openshift-marketplace/redhat-operators-p8qzb" Sep 29 11:47:49 crc kubenswrapper[4991]: I0929 11:47:49.794616 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zvbk\" (UniqueName: \"kubernetes.io/projected/9d0da4e2-bad6-4e61-91ad-28a737210a71-kube-api-access-5zvbk\") pod \"redhat-operators-p8qzb\" (UID: \"9d0da4e2-bad6-4e61-91ad-28a737210a71\") " pod="openshift-marketplace/redhat-operators-p8qzb" Sep 29 11:47:49 crc kubenswrapper[4991]: I0929 11:47:49.795349 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0da4e2-bad6-4e61-91ad-28a737210a71-utilities\") pod \"redhat-operators-p8qzb\" (UID: \"9d0da4e2-bad6-4e61-91ad-28a737210a71\") " pod="openshift-marketplace/redhat-operators-p8qzb" Sep 29 11:47:49 crc kubenswrapper[4991]: I0929 11:47:49.795446 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0da4e2-bad6-4e61-91ad-28a737210a71-catalog-content\") pod \"redhat-operators-p8qzb\" (UID: \"9d0da4e2-bad6-4e61-91ad-28a737210a71\") " pod="openshift-marketplace/redhat-operators-p8qzb" Sep 29 11:47:49 crc kubenswrapper[4991]: I0929 11:47:49.813258 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zvbk\" (UniqueName: \"kubernetes.io/projected/9d0da4e2-bad6-4e61-91ad-28a737210a71-kube-api-access-5zvbk\") pod \"redhat-operators-p8qzb\" (UID: 
\"9d0da4e2-bad6-4e61-91ad-28a737210a71\") " pod="openshift-marketplace/redhat-operators-p8qzb" Sep 29 11:47:49 crc kubenswrapper[4991]: I0929 11:47:49.905903 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p8qzb" Sep 29 11:47:50 crc kubenswrapper[4991]: I0929 11:47:50.448959 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p8qzb"] Sep 29 11:47:50 crc kubenswrapper[4991]: I0929 11:47:50.859339 4991 generic.go:334] "Generic (PLEG): container finished" podID="9d0da4e2-bad6-4e61-91ad-28a737210a71" containerID="52a6bcc4ffd01da3cc80823ead41541a190221f492e35fac4b750eaa56c200ff" exitCode=0 Sep 29 11:47:50 crc kubenswrapper[4991]: I0929 11:47:50.859405 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p8qzb" event={"ID":"9d0da4e2-bad6-4e61-91ad-28a737210a71","Type":"ContainerDied","Data":"52a6bcc4ffd01da3cc80823ead41541a190221f492e35fac4b750eaa56c200ff"} Sep 29 11:47:50 crc kubenswrapper[4991]: I0929 11:47:50.859624 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p8qzb" event={"ID":"9d0da4e2-bad6-4e61-91ad-28a737210a71","Type":"ContainerStarted","Data":"4e3f84d69b8d93f1bf30fd6c18aca71c259a5f0a68bfb8aa40982590de5cb9d2"} Sep 29 11:47:52 crc kubenswrapper[4991]: I0929 11:47:52.882533 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p8qzb" event={"ID":"9d0da4e2-bad6-4e61-91ad-28a737210a71","Type":"ContainerStarted","Data":"01afad44dce0ee021af4ff2af7b01f3a628dad1882dced667d73a9d2f5bf5394"} Sep 29 11:47:57 crc kubenswrapper[4991]: I0929 11:47:57.935251 4991 generic.go:334] "Generic (PLEG): container finished" podID="9d0da4e2-bad6-4e61-91ad-28a737210a71" containerID="01afad44dce0ee021af4ff2af7b01f3a628dad1882dced667d73a9d2f5bf5394" exitCode=0 Sep 29 11:47:57 crc kubenswrapper[4991]: I0929 11:47:57.935313 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p8qzb" event={"ID":"9d0da4e2-bad6-4e61-91ad-28a737210a71","Type":"ContainerDied","Data":"01afad44dce0ee021af4ff2af7b01f3a628dad1882dced667d73a9d2f5bf5394"} Sep 29 11:47:58 crc kubenswrapper[4991]: I0929 11:47:58.947454 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p8qzb" event={"ID":"9d0da4e2-bad6-4e61-91ad-28a737210a71","Type":"ContainerStarted","Data":"4f1457301569a399a667e7332cb17839cf6540bf9c18146d6dad6074f67b54e7"} Sep 29 11:47:58 crc kubenswrapper[4991]: I0929 11:47:58.976404 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p8qzb" podStartSLOduration=2.367719027 podStartE2EDuration="9.976383652s" podCreationTimestamp="2025-09-29 11:47:49 +0000 UTC" firstStartedPulling="2025-09-29 11:47:50.861321639 +0000 UTC m=+7806.717249667" lastFinishedPulling="2025-09-29 11:47:58.469986264 +0000 UTC m=+7814.325914292" observedRunningTime="2025-09-29 11:47:58.965375944 +0000 UTC m=+7814.821303972" watchObservedRunningTime="2025-09-29 11:47:58.976383652 +0000 UTC m=+7814.832311680" Sep 29 11:47:59 crc kubenswrapper[4991]: I0929 11:47:59.909180 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p8qzb" Sep 29 11:47:59 crc kubenswrapper[4991]: I0929 11:47:59.909470 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-p8qzb" Sep 29 11:48:00 crc kubenswrapper[4991]: I0929 11:48:00.973618 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p8qzb" podUID="9d0da4e2-bad6-4e61-91ad-28a737210a71" containerName="registry-server" probeResult="failure" output=< Sep 29 11:48:00 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s Sep 29 11:48:00 crc kubenswrapper[4991]: > Sep 29 11:48:10 crc kubenswrapper[4991]: I0929 11:48:10.957428 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p8qzb" podUID="9d0da4e2-bad6-4e61-91ad-28a737210a71" containerName="registry-server" probeResult="failure" output=< Sep 29 11:48:10 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s Sep 29 11:48:10 crc kubenswrapper[4991]: > Sep 29 11:48:19 crc kubenswrapper[4991]: I0929 11:48:19.955457 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p8qzb" Sep 29 11:48:20 crc kubenswrapper[4991]: I0929 11:48:20.012726 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p8qzb" Sep 29 11:48:20 crc kubenswrapper[4991]: I0929 11:48:20.737231 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p8qzb"] Sep 29 11:48:21 crc kubenswrapper[4991]: I0929 11:48:21.190018 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p8qzb" podUID="9d0da4e2-bad6-4e61-91ad-28a737210a71" containerName="registry-server" containerID="cri-o://4f1457301569a399a667e7332cb17839cf6540bf9c18146d6dad6074f67b54e7" gracePeriod=2 Sep 29 11:48:22 crc kubenswrapper[4991]: I0929 11:48:22.206688 4991 generic.go:334] "Generic (PLEG): container finished" podID="9d0da4e2-bad6-4e61-91ad-28a737210a71" containerID="4f1457301569a399a667e7332cb17839cf6540bf9c18146d6dad6074f67b54e7" exitCode=0 Sep 29 11:48:22 crc kubenswrapper[4991]: I0929 11:48:22.206740 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p8qzb" event={"ID":"9d0da4e2-bad6-4e61-91ad-28a737210a71","Type":"ContainerDied","Data":"4f1457301569a399a667e7332cb17839cf6540bf9c18146d6dad6074f67b54e7"} Sep 29 11:48:22 crc kubenswrapper[4991]: I0929 11:48:22.393604 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p8qzb" Sep 29 11:48:22 crc kubenswrapper[4991]: I0929 11:48:22.496185 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zvbk\" (UniqueName: \"kubernetes.io/projected/9d0da4e2-bad6-4e61-91ad-28a737210a71-kube-api-access-5zvbk\") pod \"9d0da4e2-bad6-4e61-91ad-28a737210a71\" (UID: \"9d0da4e2-bad6-4e61-91ad-28a737210a71\") " Sep 29 11:48:22 crc kubenswrapper[4991]: I0929 11:48:22.496293 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0da4e2-bad6-4e61-91ad-28a737210a71-catalog-content\") pod \"9d0da4e2-bad6-4e61-91ad-28a737210a71\" (UID: \"9d0da4e2-bad6-4e61-91ad-28a737210a71\") " Sep 29 11:48:22 crc kubenswrapper[4991]: I0929 11:48:22.496687 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0da4e2-bad6-4e61-91ad-28a737210a71-utilities\") pod \"9d0da4e2-bad6-4e61-91ad-28a737210a71\" (UID: \"9d0da4e2-bad6-4e61-91ad-28a737210a71\") " Sep 29 11:48:22 crc kubenswrapper[4991]: I0929 11:48:22.498019 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d0da4e2-bad6-4e61-91ad-28a737210a71-utilities" (OuterVolumeSpecName: "utilities") pod "9d0da4e2-bad6-4e61-91ad-28a737210a71" (UID: "9d0da4e2-bad6-4e61-91ad-28a737210a71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:48:22 crc kubenswrapper[4991]: I0929 11:48:22.513412 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d0da4e2-bad6-4e61-91ad-28a737210a71-kube-api-access-5zvbk" (OuterVolumeSpecName: "kube-api-access-5zvbk") pod "9d0da4e2-bad6-4e61-91ad-28a737210a71" (UID: "9d0da4e2-bad6-4e61-91ad-28a737210a71"). InnerVolumeSpecName "kube-api-access-5zvbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:48:22 crc kubenswrapper[4991]: I0929 11:48:22.596921 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d0da4e2-bad6-4e61-91ad-28a737210a71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d0da4e2-bad6-4e61-91ad-28a737210a71" (UID: "9d0da4e2-bad6-4e61-91ad-28a737210a71"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:48:22 crc kubenswrapper[4991]: I0929 11:48:22.602474 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0da4e2-bad6-4e61-91ad-28a737210a71-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 11:48:22 crc kubenswrapper[4991]: I0929 11:48:22.602519 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zvbk\" (UniqueName: \"kubernetes.io/projected/9d0da4e2-bad6-4e61-91ad-28a737210a71-kube-api-access-5zvbk\") on node \"crc\" DevicePath \"\"" Sep 29 11:48:22 crc kubenswrapper[4991]: I0929 11:48:22.602532 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0da4e2-bad6-4e61-91ad-28a737210a71-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 11:48:23 crc kubenswrapper[4991]: I0929 11:48:23.220966 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p8qzb" event={"ID":"9d0da4e2-bad6-4e61-91ad-28a737210a71","Type":"ContainerDied","Data":"4e3f84d69b8d93f1bf30fd6c18aca71c259a5f0a68bfb8aa40982590de5cb9d2"} Sep 29 11:48:23 crc kubenswrapper[4991]: I0929 11:48:23.221077 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p8qzb" Sep 29 11:48:23 crc kubenswrapper[4991]: I0929 11:48:23.221316 4991 scope.go:117] "RemoveContainer" containerID="4f1457301569a399a667e7332cb17839cf6540bf9c18146d6dad6074f67b54e7" Sep 29 11:48:23 crc kubenswrapper[4991]: I0929 11:48:23.254622 4991 scope.go:117] "RemoveContainer" containerID="01afad44dce0ee021af4ff2af7b01f3a628dad1882dced667d73a9d2f5bf5394" Sep 29 11:48:23 crc kubenswrapper[4991]: I0929 11:48:23.263050 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p8qzb"] Sep 29 11:48:23 crc kubenswrapper[4991]: I0929 11:48:23.275678 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p8qzb"] Sep 29 11:48:23 crc kubenswrapper[4991]: I0929 11:48:23.287204 4991 scope.go:117] "RemoveContainer" containerID="52a6bcc4ffd01da3cc80823ead41541a190221f492e35fac4b750eaa56c200ff" Sep 29 11:48:24 crc kubenswrapper[4991]: I0929 11:48:24.939090 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d0da4e2-bad6-4e61-91ad-28a737210a71" path="/var/lib/kubelet/pods/9d0da4e2-bad6-4e61-91ad-28a737210a71/volumes" Sep 29 11:49:07 crc kubenswrapper[4991]: I0929 11:49:07.946720 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:49:07 crc kubenswrapper[4991]: I0929 11:49:07.947382 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:49:37 crc kubenswrapper[4991]: I0929 11:49:37.947020 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:49:37 crc kubenswrapper[4991]: I0929 11:49:37.947450 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:50:07 crc kubenswrapper[4991]: I0929 11:50:07.947071 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:50:07 crc kubenswrapper[4991]: I0929 11:50:07.948631 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:50:07 crc kubenswrapper[4991]: I0929 11:50:07.948769 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 11:50:07 crc kubenswrapper[4991]: I0929 11:50:07.949686 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bcdecb15348f6ae43aa58065ce2240ec8abc51089c5f6a1c3659fb7b82d39de7"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 11:50:07 crc kubenswrapper[4991]: I0929 11:50:07.949821 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://bcdecb15348f6ae43aa58065ce2240ec8abc51089c5f6a1c3659fb7b82d39de7" gracePeriod=600 Sep 29 11:50:08 crc kubenswrapper[4991]: I0929 11:50:08.399827 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="bcdecb15348f6ae43aa58065ce2240ec8abc51089c5f6a1c3659fb7b82d39de7" exitCode=0 Sep 29 11:50:08 crc kubenswrapper[4991]: I0929 11:50:08.399883 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"bcdecb15348f6ae43aa58065ce2240ec8abc51089c5f6a1c3659fb7b82d39de7"} Sep 29 11:50:08 crc kubenswrapper[4991]: I0929 11:50:08.400216 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6"} Sep 29 11:50:08 crc kubenswrapper[4991]: I0929 11:50:08.400249 4991 scope.go:117] "RemoveContainer" containerID="8b1defccabbaeee2b53f740ee22f227371940f30efd8acd749f9e34ee2ee9614" Sep 29 11:52:37 crc kubenswrapper[4991]: I0929 11:52:37.947100 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:52:37 crc kubenswrapper[4991]: I0929 11:52:37.947756 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:53:07 crc kubenswrapper[4991]: I0929 11:53:07.946487 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:53:07 crc kubenswrapper[4991]: I0929 11:53:07.947196 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:53:37 crc kubenswrapper[4991]: I0929 11:53:37.946976 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 11:53:37 crc kubenswrapper[4991]: I0929 11:53:37.947596 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 11:53:37 crc kubenswrapper[4991]: I0929 11:53:37.947680 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 11:53:37 crc kubenswrapper[4991]: I0929 11:53:37.948708 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 11:53:37 crc kubenswrapper[4991]: I0929 11:53:37.948841 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" gracePeriod=600 Sep 29 11:53:38 crc kubenswrapper[4991]: E0929 11:53:38.219192 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" 
podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:53:38 crc kubenswrapper[4991]: I0929 11:53:38.824220 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" exitCode=0 Sep 29 11:53:38 crc kubenswrapper[4991]: I0929 11:53:38.824282 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6"} Sep 29 11:53:38 crc kubenswrapper[4991]: I0929 11:53:38.824595 4991 scope.go:117] "RemoveContainer" containerID="bcdecb15348f6ae43aa58065ce2240ec8abc51089c5f6a1c3659fb7b82d39de7" Sep 29 11:53:38 crc kubenswrapper[4991]: I0929 11:53:38.825397 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" Sep 29 11:53:38 crc kubenswrapper[4991]: E0929 11:53:38.825711 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:53:49 crc kubenswrapper[4991]: I0929 11:53:49.926630 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" Sep 29 11:53:49 crc kubenswrapper[4991]: E0929 11:53:49.927439 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:54:02 crc kubenswrapper[4991]: I0929 11:54:02.926621 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" Sep 29 11:54:02 crc kubenswrapper[4991]: E0929 11:54:02.928533 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:54:16 crc kubenswrapper[4991]: I0929 11:54:16.927114 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" Sep 29 11:54:16 crc kubenswrapper[4991]: E0929 11:54:16.927922 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:54:29 crc kubenswrapper[4991]: I0929 
11:54:29.927212 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" Sep 29 11:54:29 crc kubenswrapper[4991]: E0929 11:54:29.928114 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:54:44 crc kubenswrapper[4991]: I0929 11:54:44.938708 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" Sep 29 11:54:44 crc kubenswrapper[4991]: E0929 11:54:44.939485 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:54:56 crc kubenswrapper[4991]: I0929 11:54:56.926782 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" Sep 29 11:54:56 crc kubenswrapper[4991]: E0929 11:54:56.928474 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:55:10 crc kubenswrapper[4991]: I0929 11:55:10.928127 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" Sep 29 11:55:10 crc kubenswrapper[4991]: E0929 11:55:10.929111 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:55:18 crc kubenswrapper[4991]: I0929 11:55:18.291451 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-84kr4"] Sep 29 11:55:18 crc kubenswrapper[4991]: E0929 11:55:18.292617 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d0da4e2-bad6-4e61-91ad-28a737210a71" containerName="extract-content" Sep 29 11:55:18 crc kubenswrapper[4991]: I0929 11:55:18.292636 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0da4e2-bad6-4e61-91ad-28a737210a71" containerName="extract-content" Sep 29 11:55:18 crc kubenswrapper[4991]: E0929 11:55:18.292685 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d0da4e2-bad6-4e61-91ad-28a737210a71" containerName="registry-server" Sep 29 11:55:18 crc kubenswrapper[4991]: I0929 11:55:18.292693 4991 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9d0da4e2-bad6-4e61-91ad-28a737210a71" containerName="registry-server" Sep 29 11:55:18 crc kubenswrapper[4991]: E0929 11:55:18.292711 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d0da4e2-bad6-4e61-91ad-28a737210a71" containerName="extract-utilities" Sep 29 11:55:18 crc kubenswrapper[4991]: I0929 11:55:18.292720 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0da4e2-bad6-4e61-91ad-28a737210a71" containerName="extract-utilities" Sep 29 11:55:18 crc kubenswrapper[4991]: I0929 11:55:18.293015 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d0da4e2-bad6-4e61-91ad-28a737210a71" containerName="registry-server" Sep 29 11:55:18 crc kubenswrapper[4991]: I0929 11:55:18.295252 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84kr4" Sep 29 11:55:18 crc kubenswrapper[4991]: I0929 11:55:18.322538 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-84kr4"] Sep 29 11:55:18 crc kubenswrapper[4991]: I0929 11:55:18.348817 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b1e363-475a-44c5-9ee0-19963ffc7529-utilities\") pod \"community-operators-84kr4\" (UID: \"29b1e363-475a-44c5-9ee0-19963ffc7529\") " pod="openshift-marketplace/community-operators-84kr4" Sep 29 11:55:18 crc kubenswrapper[4991]: I0929 11:55:18.348909 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrj5p\" (UniqueName: \"kubernetes.io/projected/29b1e363-475a-44c5-9ee0-19963ffc7529-kube-api-access-jrj5p\") pod \"community-operators-84kr4\" (UID: \"29b1e363-475a-44c5-9ee0-19963ffc7529\") " pod="openshift-marketplace/community-operators-84kr4" Sep 29 11:55:18 crc kubenswrapper[4991]: I0929 11:55:18.349182 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b1e363-475a-44c5-9ee0-19963ffc7529-catalog-content\") pod \"community-operators-84kr4\" (UID: \"29b1e363-475a-44c5-9ee0-19963ffc7529\") " pod="openshift-marketplace/community-operators-84kr4" Sep 29 11:55:18 crc kubenswrapper[4991]: I0929 11:55:18.451693 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b1e363-475a-44c5-9ee0-19963ffc7529-utilities\") pod \"community-operators-84kr4\" (UID: \"29b1e363-475a-44c5-9ee0-19963ffc7529\") " pod="openshift-marketplace/community-operators-84kr4" Sep 29 11:55:18 crc kubenswrapper[4991]: I0929 11:55:18.451828 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrj5p\" (UniqueName: \"kubernetes.io/projected/29b1e363-475a-44c5-9ee0-19963ffc7529-kube-api-access-jrj5p\") pod \"community-operators-84kr4\" (UID: \"29b1e363-475a-44c5-9ee0-19963ffc7529\") " pod="openshift-marketplace/community-operators-84kr4" Sep 29 11:55:18 crc kubenswrapper[4991]: I0929 11:55:18.451986 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b1e363-475a-44c5-9ee0-19963ffc7529-catalog-content\") pod \"community-operators-84kr4\" (UID: \"29b1e363-475a-44c5-9ee0-19963ffc7529\") " pod="openshift-marketplace/community-operators-84kr4" Sep 29 11:55:18 crc kubenswrapper[4991]: I0929 11:55:18.452396 4991 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b1e363-475a-44c5-9ee0-19963ffc7529-utilities\") pod \"community-operators-84kr4\" (UID: \"29b1e363-475a-44c5-9ee0-19963ffc7529\") " pod="openshift-marketplace/community-operators-84kr4" Sep 29 11:55:18 crc kubenswrapper[4991]: I0929 11:55:18.452913 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b1e363-475a-44c5-9ee0-19963ffc7529-catalog-content\") pod \"community-operators-84kr4\" (UID: \"29b1e363-475a-44c5-9ee0-19963ffc7529\") " pod="openshift-marketplace/community-operators-84kr4" Sep 29 11:55:18 crc kubenswrapper[4991]: I0929 11:55:18.479335 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrj5p\" (UniqueName: \"kubernetes.io/projected/29b1e363-475a-44c5-9ee0-19963ffc7529-kube-api-access-jrj5p\") pod \"community-operators-84kr4\" (UID: \"29b1e363-475a-44c5-9ee0-19963ffc7529\") " pod="openshift-marketplace/community-operators-84kr4" Sep 29 11:55:18 crc kubenswrapper[4991]: I0929 11:55:18.618804 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84kr4" Sep 29 11:55:19 crc kubenswrapper[4991]: I0929 11:55:19.191917 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-84kr4"] Sep 29 11:55:19 crc kubenswrapper[4991]: W0929 11:55:19.205919 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29b1e363_475a_44c5_9ee0_19963ffc7529.slice/crio-4eb08a314ccdcb6f92a2c066245ca33f51c037cf7dc3dbd0c098abfd2862d99d WatchSource:0}: Error finding container 4eb08a314ccdcb6f92a2c066245ca33f51c037cf7dc3dbd0c098abfd2862d99d: Status 404 returned error can't find the container with id 4eb08a314ccdcb6f92a2c066245ca33f51c037cf7dc3dbd0c098abfd2862d99d Sep 29 11:55:19 crc kubenswrapper[4991]: I0929 11:55:19.926440 4991 generic.go:334] "Generic (PLEG): container finished" podID="29b1e363-475a-44c5-9ee0-19963ffc7529" containerID="2411b35b1a7cd1312f1e1ce955c020ffd852987d3adc1c821380f1ad9e5f1a6f" exitCode=0 Sep 29 11:55:19 crc kubenswrapper[4991]: I0929 11:55:19.926574 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84kr4" event={"ID":"29b1e363-475a-44c5-9ee0-19963ffc7529","Type":"ContainerDied","Data":"2411b35b1a7cd1312f1e1ce955c020ffd852987d3adc1c821380f1ad9e5f1a6f"} Sep 29 11:55:19 crc kubenswrapper[4991]: I0929 11:55:19.926993 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84kr4" event={"ID":"29b1e363-475a-44c5-9ee0-19963ffc7529","Type":"ContainerStarted","Data":"4eb08a314ccdcb6f92a2c066245ca33f51c037cf7dc3dbd0c098abfd2862d99d"} Sep 29 11:55:19 crc kubenswrapper[4991]: I0929 11:55:19.930052 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 29 11:55:21 crc kubenswrapper[4991]: I0929 11:55:21.951233 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84kr4" event={"ID":"29b1e363-475a-44c5-9ee0-19963ffc7529","Type":"ContainerStarted","Data":"ee1237a954c1f4db02a8b48c1cd370aea5ce97c6930f8610d611c728170d74b9"} Sep 29 11:55:23 crc kubenswrapper[4991]: I0929 11:55:23.978780 4991 generic.go:334] "Generic (PLEG): container finished" 
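
The reconciler_common.go / operation_generator.go sequence above is the kubelet volume manager walking each volume of community-operators-84kr4 through VerifyControllerAttachedVolume, MountVolume started, and MountVolume.SetUp succeeded. Reconstructed from the volume names in the log (an inference, not the pod's actual manifest), the pod uses two emptyDir volumes plus a projected service-account token volume:

    // volumes_sketch.go: the volumes implied by the mount events above,
    // expressed with k8s.io/api/core/v1 types (requires that module).
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vols := []corev1.Volume{
            {Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
            {Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
            // "kube-api-access-jrj5p" is a projected service-account token
            // volume; its Projected source is omitted here for brevity.
        }
        for _, v := range vols {
            fmt.Println("volume to mount:", v.Name)
        }
    }
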
podID="29b1e363-475a-44c5-9ee0-19963ffc7529" containerID="ee1237a954c1f4db02a8b48c1cd370aea5ce97c6930f8610d611c728170d74b9" exitCode=0 Sep 29 11:55:23 crc kubenswrapper[4991]: I0929 11:55:23.980051 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84kr4" event={"ID":"29b1e363-475a-44c5-9ee0-19963ffc7529","Type":"ContainerDied","Data":"ee1237a954c1f4db02a8b48c1cd370aea5ce97c6930f8610d611c728170d74b9"} Sep 29 11:55:25 crc kubenswrapper[4991]: I0929 11:55:25.030372 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84kr4" event={"ID":"29b1e363-475a-44c5-9ee0-19963ffc7529","Type":"ContainerStarted","Data":"be8dd73a72cc89d14199a345e9a4366861474bdbdb9fcac9eb9d0d3378cb30f4"} Sep 29 11:55:25 crc kubenswrapper[4991]: I0929 11:55:25.090558 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-84kr4" podStartSLOduration=2.582906693 podStartE2EDuration="7.090533005s" podCreationTimestamp="2025-09-29 11:55:18 +0000 UTC" firstStartedPulling="2025-09-29 11:55:19.929785393 +0000 UTC m=+8255.785713421" lastFinishedPulling="2025-09-29 11:55:24.437411705 +0000 UTC m=+8260.293339733" observedRunningTime="2025-09-29 11:55:25.075506702 +0000 UTC m=+8260.931434730" watchObservedRunningTime="2025-09-29 11:55:25.090533005 +0000 UTC m=+8260.946461033" Sep 29 11:55:25 crc kubenswrapper[4991]: I0929 11:55:25.927041 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" Sep 29 11:55:25 crc kubenswrapper[4991]: E0929 11:55:25.927338 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:55:28 crc kubenswrapper[4991]: I0929 11:55:28.620220 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-84kr4" Sep 29 11:55:28 crc kubenswrapper[4991]: I0929 11:55:28.620808 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-84kr4" Sep 29 11:55:28 crc kubenswrapper[4991]: I0929 11:55:28.677873 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-84kr4" Sep 29 11:55:29 crc kubenswrapper[4991]: I0929 11:55:29.117513 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-84kr4" Sep 29 11:55:29 crc kubenswrapper[4991]: I0929 11:55:29.164271 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-84kr4"] Sep 29 11:55:31 crc kubenswrapper[4991]: I0929 11:55:31.086280 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-84kr4" podUID="29b1e363-475a-44c5-9ee0-19963ffc7529" containerName="registry-server" containerID="cri-o://be8dd73a72cc89d14199a345e9a4366861474bdbdb9fcac9eb9d0d3378cb30f4" gracePeriod=2 Sep 29 11:55:31 crc kubenswrapper[4991]: I0929 11:55:31.588548 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-84kr4" Sep 29 11:55:31 crc kubenswrapper[4991]: I0929 11:55:31.717855 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b1e363-475a-44c5-9ee0-19963ffc7529-utilities\") pod \"29b1e363-475a-44c5-9ee0-19963ffc7529\" (UID: \"29b1e363-475a-44c5-9ee0-19963ffc7529\") " Sep 29 11:55:31 crc kubenswrapper[4991]: I0929 11:55:31.718262 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrj5p\" (UniqueName: \"kubernetes.io/projected/29b1e363-475a-44c5-9ee0-19963ffc7529-kube-api-access-jrj5p\") pod \"29b1e363-475a-44c5-9ee0-19963ffc7529\" (UID: \"29b1e363-475a-44c5-9ee0-19963ffc7529\") " Sep 29 11:55:31 crc kubenswrapper[4991]: I0929 11:55:31.718587 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b1e363-475a-44c5-9ee0-19963ffc7529-catalog-content\") pod \"29b1e363-475a-44c5-9ee0-19963ffc7529\" (UID: \"29b1e363-475a-44c5-9ee0-19963ffc7529\") " Sep 29 11:55:31 crc kubenswrapper[4991]: I0929 11:55:31.718881 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29b1e363-475a-44c5-9ee0-19963ffc7529-utilities" (OuterVolumeSpecName: "utilities") pod "29b1e363-475a-44c5-9ee0-19963ffc7529" (UID: "29b1e363-475a-44c5-9ee0-19963ffc7529"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:55:31 crc kubenswrapper[4991]: I0929 11:55:31.719473 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b1e363-475a-44c5-9ee0-19963ffc7529-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 11:55:31 crc kubenswrapper[4991]: I0929 11:55:31.725673 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b1e363-475a-44c5-9ee0-19963ffc7529-kube-api-access-jrj5p" (OuterVolumeSpecName: "kube-api-access-jrj5p") pod "29b1e363-475a-44c5-9ee0-19963ffc7529" (UID: "29b1e363-475a-44c5-9ee0-19963ffc7529"). InnerVolumeSpecName "kube-api-access-jrj5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:55:31 crc kubenswrapper[4991]: I0929 11:55:31.780688 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29b1e363-475a-44c5-9ee0-19963ffc7529-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29b1e363-475a-44c5-9ee0-19963ffc7529" (UID: "29b1e363-475a-44c5-9ee0-19963ffc7529"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:55:31 crc kubenswrapper[4991]: I0929 11:55:31.822354 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b1e363-475a-44c5-9ee0-19963ffc7529-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 11:55:31 crc kubenswrapper[4991]: I0929 11:55:31.822388 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrj5p\" (UniqueName: \"kubernetes.io/projected/29b1e363-475a-44c5-9ee0-19963ffc7529-kube-api-access-jrj5p\") on node \"crc\" DevicePath \"\"" Sep 29 11:55:32 crc kubenswrapper[4991]: I0929 11:55:32.099028 4991 generic.go:334] "Generic (PLEG): container finished" podID="29b1e363-475a-44c5-9ee0-19963ffc7529" containerID="be8dd73a72cc89d14199a345e9a4366861474bdbdb9fcac9eb9d0d3378cb30f4" exitCode=0 Sep 29 11:55:32 crc kubenswrapper[4991]: I0929 11:55:32.099060 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84kr4" event={"ID":"29b1e363-475a-44c5-9ee0-19963ffc7529","Type":"ContainerDied","Data":"be8dd73a72cc89d14199a345e9a4366861474bdbdb9fcac9eb9d0d3378cb30f4"} Sep 29 11:55:32 crc kubenswrapper[4991]: I0929 11:55:32.099103 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84kr4" Sep 29 11:55:32 crc kubenswrapper[4991]: I0929 11:55:32.099117 4991 scope.go:117] "RemoveContainer" containerID="be8dd73a72cc89d14199a345e9a4366861474bdbdb9fcac9eb9d0d3378cb30f4" Sep 29 11:55:32 crc kubenswrapper[4991]: I0929 11:55:32.099102 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84kr4" event={"ID":"29b1e363-475a-44c5-9ee0-19963ffc7529","Type":"ContainerDied","Data":"4eb08a314ccdcb6f92a2c066245ca33f51c037cf7dc3dbd0c098abfd2862d99d"} Sep 29 11:55:32 crc kubenswrapper[4991]: I0929 11:55:32.156202 4991 scope.go:117] "RemoveContainer" containerID="ee1237a954c1f4db02a8b48c1cd370aea5ce97c6930f8610d611c728170d74b9" Sep 29 11:55:32 crc kubenswrapper[4991]: I0929 11:55:32.186193 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-84kr4"] Sep 29 11:55:32 crc kubenswrapper[4991]: I0929 11:55:32.204573 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-84kr4"] Sep 29 11:55:32 crc kubenswrapper[4991]: I0929 11:55:32.225997 4991 scope.go:117] "RemoveContainer" containerID="2411b35b1a7cd1312f1e1ce955c020ffd852987d3adc1c821380f1ad9e5f1a6f" Sep 29 11:55:32 crc kubenswrapper[4991]: I0929 11:55:32.279613 4991 scope.go:117] "RemoveContainer" containerID="be8dd73a72cc89d14199a345e9a4366861474bdbdb9fcac9eb9d0d3378cb30f4" Sep 29 11:55:32 crc kubenswrapper[4991]: E0929 11:55:32.280143 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be8dd73a72cc89d14199a345e9a4366861474bdbdb9fcac9eb9d0d3378cb30f4\": container with ID starting with be8dd73a72cc89d14199a345e9a4366861474bdbdb9fcac9eb9d0d3378cb30f4 not found: ID does not exist" containerID="be8dd73a72cc89d14199a345e9a4366861474bdbdb9fcac9eb9d0d3378cb30f4" Sep 29 11:55:32 crc kubenswrapper[4991]: I0929 11:55:32.280179 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be8dd73a72cc89d14199a345e9a4366861474bdbdb9fcac9eb9d0d3378cb30f4"} err="failed to get container status 
\"be8dd73a72cc89d14199a345e9a4366861474bdbdb9fcac9eb9d0d3378cb30f4\": rpc error: code = NotFound desc = could not find container \"be8dd73a72cc89d14199a345e9a4366861474bdbdb9fcac9eb9d0d3378cb30f4\": container with ID starting with be8dd73a72cc89d14199a345e9a4366861474bdbdb9fcac9eb9d0d3378cb30f4 not found: ID does not exist" Sep 29 11:55:32 crc kubenswrapper[4991]: I0929 11:55:32.280206 4991 scope.go:117] "RemoveContainer" containerID="ee1237a954c1f4db02a8b48c1cd370aea5ce97c6930f8610d611c728170d74b9" Sep 29 11:55:32 crc kubenswrapper[4991]: E0929 11:55:32.280563 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee1237a954c1f4db02a8b48c1cd370aea5ce97c6930f8610d611c728170d74b9\": container with ID starting with ee1237a954c1f4db02a8b48c1cd370aea5ce97c6930f8610d611c728170d74b9 not found: ID does not exist" containerID="ee1237a954c1f4db02a8b48c1cd370aea5ce97c6930f8610d611c728170d74b9" Sep 29 11:55:32 crc kubenswrapper[4991]: I0929 11:55:32.280605 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee1237a954c1f4db02a8b48c1cd370aea5ce97c6930f8610d611c728170d74b9"} err="failed to get container status \"ee1237a954c1f4db02a8b48c1cd370aea5ce97c6930f8610d611c728170d74b9\": rpc error: code = NotFound desc = could not find container \"ee1237a954c1f4db02a8b48c1cd370aea5ce97c6930f8610d611c728170d74b9\": container with ID starting with ee1237a954c1f4db02a8b48c1cd370aea5ce97c6930f8610d611c728170d74b9 not found: ID does not exist" Sep 29 11:55:32 crc kubenswrapper[4991]: I0929 11:55:32.280622 4991 scope.go:117] "RemoveContainer" containerID="2411b35b1a7cd1312f1e1ce955c020ffd852987d3adc1c821380f1ad9e5f1a6f" Sep 29 11:55:32 crc kubenswrapper[4991]: E0929 11:55:32.281405 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2411b35b1a7cd1312f1e1ce955c020ffd852987d3adc1c821380f1ad9e5f1a6f\": container with ID starting with 2411b35b1a7cd1312f1e1ce955c020ffd852987d3adc1c821380f1ad9e5f1a6f not found: ID does not exist" containerID="2411b35b1a7cd1312f1e1ce955c020ffd852987d3adc1c821380f1ad9e5f1a6f" Sep 29 11:55:32 crc kubenswrapper[4991]: I0929 11:55:32.281450 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2411b35b1a7cd1312f1e1ce955c020ffd852987d3adc1c821380f1ad9e5f1a6f"} err="failed to get container status \"2411b35b1a7cd1312f1e1ce955c020ffd852987d3adc1c821380f1ad9e5f1a6f\": rpc error: code = NotFound desc = could not find container \"2411b35b1a7cd1312f1e1ce955c020ffd852987d3adc1c821380f1ad9e5f1a6f\": container with ID starting with 2411b35b1a7cd1312f1e1ce955c020ffd852987d3adc1c821380f1ad9e5f1a6f not found: ID does not exist" Sep 29 11:55:32 crc kubenswrapper[4991]: I0929 11:55:32.944703 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29b1e363-475a-44c5-9ee0-19963ffc7529" path="/var/lib/kubelet/pods/29b1e363-475a-44c5-9ee0-19963ffc7529/volumes" Sep 29 11:55:38 crc kubenswrapper[4991]: I0929 11:55:38.927183 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" Sep 29 11:55:38 crc kubenswrapper[4991]: E0929 11:55:38.928054 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:55:52 crc kubenswrapper[4991]: I0929 11:55:52.929637 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" Sep 29 11:55:52 crc kubenswrapper[4991]: E0929 11:55:52.931914 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:56:04 crc kubenswrapper[4991]: I0929 11:56:04.934433 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" Sep 29 11:56:04 crc kubenswrapper[4991]: E0929 11:56:04.935203 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:56:19 crc kubenswrapper[4991]: I0929 11:56:19.925993 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" Sep 29 11:56:19 crc kubenswrapper[4991]: E0929 11:56:19.926685 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:56:20 crc kubenswrapper[4991]: I0929 11:56:20.550253 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d2j7q"] Sep 29 11:56:20 crc kubenswrapper[4991]: E0929 11:56:20.550784 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b1e363-475a-44c5-9ee0-19963ffc7529" containerName="registry-server" Sep 29 11:56:20 crc kubenswrapper[4991]: I0929 11:56:20.550803 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b1e363-475a-44c5-9ee0-19963ffc7529" containerName="registry-server" Sep 29 11:56:20 crc kubenswrapper[4991]: E0929 11:56:20.550848 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b1e363-475a-44c5-9ee0-19963ffc7529" containerName="extract-utilities" Sep 29 11:56:20 crc kubenswrapper[4991]: I0929 11:56:20.550855 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b1e363-475a-44c5-9ee0-19963ffc7529" containerName="extract-utilities" Sep 29 11:56:20 crc kubenswrapper[4991]: E0929 11:56:20.550878 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b1e363-475a-44c5-9ee0-19963ffc7529" containerName="extract-content" Sep 29 11:56:20 crc kubenswrapper[4991]: I0929 11:56:20.550884 4991 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="29b1e363-475a-44c5-9ee0-19963ffc7529" containerName="extract-content" Sep 29 11:56:20 crc kubenswrapper[4991]: I0929 11:56:20.551124 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="29b1e363-475a-44c5-9ee0-19963ffc7529" containerName="registry-server" Sep 29 11:56:20 crc kubenswrapper[4991]: I0929 11:56:20.552933 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2j7q" Sep 29 11:56:20 crc kubenswrapper[4991]: I0929 11:56:20.566501 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2j7q"] Sep 29 11:56:20 crc kubenswrapper[4991]: I0929 11:56:20.726562 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4dd7182-c1a5-43e5-8738-049ade97e75b-catalog-content\") pod \"certified-operators-d2j7q\" (UID: \"e4dd7182-c1a5-43e5-8738-049ade97e75b\") " pod="openshift-marketplace/certified-operators-d2j7q" Sep 29 11:56:20 crc kubenswrapper[4991]: I0929 11:56:20.726993 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4dd7182-c1a5-43e5-8738-049ade97e75b-utilities\") pod \"certified-operators-d2j7q\" (UID: \"e4dd7182-c1a5-43e5-8738-049ade97e75b\") " pod="openshift-marketplace/certified-operators-d2j7q" Sep 29 11:56:20 crc kubenswrapper[4991]: I0929 11:56:20.727157 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h92c5\" (UniqueName: \"kubernetes.io/projected/e4dd7182-c1a5-43e5-8738-049ade97e75b-kube-api-access-h92c5\") pod \"certified-operators-d2j7q\" (UID: \"e4dd7182-c1a5-43e5-8738-049ade97e75b\") " pod="openshift-marketplace/certified-operators-d2j7q" Sep 29 11:56:20 crc kubenswrapper[4991]: I0929 11:56:20.832500 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h92c5\" (UniqueName: \"kubernetes.io/projected/e4dd7182-c1a5-43e5-8738-049ade97e75b-kube-api-access-h92c5\") pod \"certified-operators-d2j7q\" (UID: \"e4dd7182-c1a5-43e5-8738-049ade97e75b\") " pod="openshift-marketplace/certified-operators-d2j7q" Sep 29 11:56:20 crc kubenswrapper[4991]: I0929 11:56:20.832617 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4dd7182-c1a5-43e5-8738-049ade97e75b-utilities\") pod \"certified-operators-d2j7q\" (UID: \"e4dd7182-c1a5-43e5-8738-049ade97e75b\") " pod="openshift-marketplace/certified-operators-d2j7q" Sep 29 11:56:20 crc kubenswrapper[4991]: I0929 11:56:20.832647 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4dd7182-c1a5-43e5-8738-049ade97e75b-catalog-content\") pod \"certified-operators-d2j7q\" (UID: \"e4dd7182-c1a5-43e5-8738-049ade97e75b\") " pod="openshift-marketplace/certified-operators-d2j7q" Sep 29 11:56:20 crc kubenswrapper[4991]: I0929 11:56:20.833264 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4dd7182-c1a5-43e5-8738-049ade97e75b-catalog-content\") pod \"certified-operators-d2j7q\" (UID: \"e4dd7182-c1a5-43e5-8738-049ade97e75b\") " pod="openshift-marketplace/certified-operators-d2j7q" Sep 29 11:56:20 crc kubenswrapper[4991]: I0929 11:56:20.833299 4991 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4dd7182-c1a5-43e5-8738-049ade97e75b-utilities\") pod \"certified-operators-d2j7q\" (UID: \"e4dd7182-c1a5-43e5-8738-049ade97e75b\") " pod="openshift-marketplace/certified-operators-d2j7q" Sep 29 11:56:20 crc kubenswrapper[4991]: I0929 11:56:20.854090 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h92c5\" (UniqueName: \"kubernetes.io/projected/e4dd7182-c1a5-43e5-8738-049ade97e75b-kube-api-access-h92c5\") pod \"certified-operators-d2j7q\" (UID: \"e4dd7182-c1a5-43e5-8738-049ade97e75b\") " pod="openshift-marketplace/certified-operators-d2j7q" Sep 29 11:56:20 crc kubenswrapper[4991]: I0929 11:56:20.878878 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2j7q" Sep 29 11:56:21 crc kubenswrapper[4991]: I0929 11:56:21.411580 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2j7q"] Sep 29 11:56:21 crc kubenswrapper[4991]: I0929 11:56:21.646380 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2j7q" event={"ID":"e4dd7182-c1a5-43e5-8738-049ade97e75b","Type":"ContainerStarted","Data":"264f9fb1efe457dc8f42774422d9e70e15bc5fa190434c18e19b229b215967cc"} Sep 29 11:56:21 crc kubenswrapper[4991]: I0929 11:56:21.646611 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2j7q" event={"ID":"e4dd7182-c1a5-43e5-8738-049ade97e75b","Type":"ContainerStarted","Data":"33dea95e46517542a89a4bebbaa8012ffdf1ef1c345c8a46047ab3d6e5dd14d1"} Sep 29 11:56:22 crc kubenswrapper[4991]: I0929 11:56:22.666709 4991 generic.go:334] "Generic (PLEG): container finished" podID="e4dd7182-c1a5-43e5-8738-049ade97e75b" containerID="264f9fb1efe457dc8f42774422d9e70e15bc5fa190434c18e19b229b215967cc" exitCode=0 Sep 29 11:56:22 crc kubenswrapper[4991]: I0929 11:56:22.666808 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2j7q" event={"ID":"e4dd7182-c1a5-43e5-8738-049ade97e75b","Type":"ContainerDied","Data":"264f9fb1efe457dc8f42774422d9e70e15bc5fa190434c18e19b229b215967cc"} Sep 29 11:56:23 crc kubenswrapper[4991]: I0929 11:56:23.680098 4991 generic.go:334] "Generic (PLEG): container finished" podID="e4dd7182-c1a5-43e5-8738-049ade97e75b" containerID="7110f2f08617aee4ef9baeeff957446b8f58cc57c4162d0e267ab779be45dc61" exitCode=0 Sep 29 11:56:23 crc kubenswrapper[4991]: I0929 11:56:23.680202 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2j7q" event={"ID":"e4dd7182-c1a5-43e5-8738-049ade97e75b","Type":"ContainerDied","Data":"7110f2f08617aee4ef9baeeff957446b8f58cc57c4162d0e267ab779be45dc61"} Sep 29 11:56:24 crc kubenswrapper[4991]: I0929 11:56:24.691485 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2j7q" event={"ID":"e4dd7182-c1a5-43e5-8738-049ade97e75b","Type":"ContainerStarted","Data":"0c31914fe0a8c883b1b8b1782c55aa932177a69465b93f51e81c4c50e06449ce"} Sep 29 11:56:24 crc kubenswrapper[4991]: I0929 11:56:24.716164 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d2j7q" podStartSLOduration=2.202821742 podStartE2EDuration="4.716142683s" podCreationTimestamp="2025-09-29 11:56:20 +0000 UTC" firstStartedPulling="2025-09-29 
11:56:21.648481502 +0000 UTC m=+8317.504409520" lastFinishedPulling="2025-09-29 11:56:24.161802433 +0000 UTC m=+8320.017730461" observedRunningTime="2025-09-29 11:56:24.709979462 +0000 UTC m=+8320.565907490" watchObservedRunningTime="2025-09-29 11:56:24.716142683 +0000 UTC m=+8320.572070711" Sep 29 11:56:30 crc kubenswrapper[4991]: I0929 11:56:30.879919 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d2j7q" Sep 29 11:56:30 crc kubenswrapper[4991]: I0929 11:56:30.880434 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d2j7q" Sep 29 11:56:30 crc kubenswrapper[4991]: I0929 11:56:30.926586 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" Sep 29 11:56:30 crc kubenswrapper[4991]: E0929 11:56:30.929694 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:56:30 crc kubenswrapper[4991]: I0929 11:56:30.949028 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fh4vl"] Sep 29 11:56:30 crc kubenswrapper[4991]: I0929 11:56:30.952569 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d2j7q" Sep 29 11:56:30 crc kubenswrapper[4991]: I0929 11:56:30.952684 4991 util.go:30] "No sandbox for pod can be found. 
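
The probe transitions above for certified-operators-d2j7q follow the usual gating order: while the startup probe is still "unhealthy", the readiness result is reported as the empty string, and only after startup flips to "started" can readiness become "ready" (which it does a few entries below, at 11:56:31.822844). A simplified model of that gating, purely illustrative rather than kubelet code:

    // probegate_sketch.go: a toy model of startup-probe gating: readiness is
    // not evaluated until the startup probe has succeeded.
    package main

    import "fmt"

    type podProbes struct {
        started bool // startup probe result
        ready   bool // readiness result, only meaningful once started
    }

    func (p *podProbes) readiness() string {
        if !p.started {
            return `""` // matches the empty readiness status in the log
        }
        if p.ready {
            return "ready"
        }
        return "unhealthy"
    }

    func main() {
        p := &podProbes{}
        fmt.Println("readiness:", p.readiness()) // "" while still starting
        p.started, p.ready = true, true
        fmt.Println("readiness:", p.readiness()) // ready, as logged later
    }
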
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fh4vl" Sep 29 11:56:30 crc kubenswrapper[4991]: I0929 11:56:30.985128 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fh4vl"] Sep 29 11:56:31 crc kubenswrapper[4991]: I0929 11:56:31.102680 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98fa266a-945d-4e5e-ac27-fe9dd8198384-catalog-content\") pod \"redhat-marketplace-fh4vl\" (UID: \"98fa266a-945d-4e5e-ac27-fe9dd8198384\") " pod="openshift-marketplace/redhat-marketplace-fh4vl" Sep 29 11:56:31 crc kubenswrapper[4991]: I0929 11:56:31.102965 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjhcf\" (UniqueName: \"kubernetes.io/projected/98fa266a-945d-4e5e-ac27-fe9dd8198384-kube-api-access-bjhcf\") pod \"redhat-marketplace-fh4vl\" (UID: \"98fa266a-945d-4e5e-ac27-fe9dd8198384\") " pod="openshift-marketplace/redhat-marketplace-fh4vl" Sep 29 11:56:31 crc kubenswrapper[4991]: I0929 11:56:31.102996 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98fa266a-945d-4e5e-ac27-fe9dd8198384-utilities\") pod \"redhat-marketplace-fh4vl\" (UID: \"98fa266a-945d-4e5e-ac27-fe9dd8198384\") " pod="openshift-marketplace/redhat-marketplace-fh4vl" Sep 29 11:56:31 crc kubenswrapper[4991]: I0929 11:56:31.205077 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98fa266a-945d-4e5e-ac27-fe9dd8198384-catalog-content\") pod \"redhat-marketplace-fh4vl\" (UID: \"98fa266a-945d-4e5e-ac27-fe9dd8198384\") " pod="openshift-marketplace/redhat-marketplace-fh4vl" Sep 29 11:56:31 crc kubenswrapper[4991]: I0929 11:56:31.205311 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjhcf\" (UniqueName: \"kubernetes.io/projected/98fa266a-945d-4e5e-ac27-fe9dd8198384-kube-api-access-bjhcf\") pod \"redhat-marketplace-fh4vl\" (UID: \"98fa266a-945d-4e5e-ac27-fe9dd8198384\") " pod="openshift-marketplace/redhat-marketplace-fh4vl" Sep 29 11:56:31 crc kubenswrapper[4991]: I0929 11:56:31.205335 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98fa266a-945d-4e5e-ac27-fe9dd8198384-utilities\") pod \"redhat-marketplace-fh4vl\" (UID: \"98fa266a-945d-4e5e-ac27-fe9dd8198384\") " pod="openshift-marketplace/redhat-marketplace-fh4vl" Sep 29 11:56:31 crc kubenswrapper[4991]: I0929 11:56:31.205687 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98fa266a-945d-4e5e-ac27-fe9dd8198384-catalog-content\") pod \"redhat-marketplace-fh4vl\" (UID: \"98fa266a-945d-4e5e-ac27-fe9dd8198384\") " pod="openshift-marketplace/redhat-marketplace-fh4vl" Sep 29 11:56:31 crc kubenswrapper[4991]: I0929 11:56:31.205734 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98fa266a-945d-4e5e-ac27-fe9dd8198384-utilities\") pod \"redhat-marketplace-fh4vl\" (UID: \"98fa266a-945d-4e5e-ac27-fe9dd8198384\") " pod="openshift-marketplace/redhat-marketplace-fh4vl" Sep 29 11:56:31 crc kubenswrapper[4991]: I0929 11:56:31.226876 4991 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bjhcf\" (UniqueName: \"kubernetes.io/projected/98fa266a-945d-4e5e-ac27-fe9dd8198384-kube-api-access-bjhcf\") pod \"redhat-marketplace-fh4vl\" (UID: \"98fa266a-945d-4e5e-ac27-fe9dd8198384\") " pod="openshift-marketplace/redhat-marketplace-fh4vl" Sep 29 11:56:31 crc kubenswrapper[4991]: I0929 11:56:31.287833 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fh4vl" Sep 29 11:56:31 crc kubenswrapper[4991]: I0929 11:56:31.788275 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fh4vl"] Sep 29 11:56:31 crc kubenswrapper[4991]: I0929 11:56:31.822844 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d2j7q" Sep 29 11:56:32 crc kubenswrapper[4991]: I0929 11:56:32.778188 4991 generic.go:334] "Generic (PLEG): container finished" podID="98fa266a-945d-4e5e-ac27-fe9dd8198384" containerID="fe4a455db0dfe14062c82216ce94f3cbeb0c34cda02382230217d276c0128ca3" exitCode=0 Sep 29 11:56:32 crc kubenswrapper[4991]: I0929 11:56:32.778278 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fh4vl" event={"ID":"98fa266a-945d-4e5e-ac27-fe9dd8198384","Type":"ContainerDied","Data":"fe4a455db0dfe14062c82216ce94f3cbeb0c34cda02382230217d276c0128ca3"} Sep 29 11:56:32 crc kubenswrapper[4991]: I0929 11:56:32.778787 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fh4vl" event={"ID":"98fa266a-945d-4e5e-ac27-fe9dd8198384","Type":"ContainerStarted","Data":"9dc3d2982889d188ad200ffb345cf0f0b06f762162ab21ba06346ea085428ece"} Sep 29 11:56:33 crc kubenswrapper[4991]: I0929 11:56:33.327403 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2j7q"] Sep 29 11:56:33 crc kubenswrapper[4991]: I0929 11:56:33.787603 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d2j7q" podUID="e4dd7182-c1a5-43e5-8738-049ade97e75b" containerName="registry-server" containerID="cri-o://0c31914fe0a8c883b1b8b1782c55aa932177a69465b93f51e81c4c50e06449ce" gracePeriod=2 Sep 29 11:56:34 crc kubenswrapper[4991]: I0929 11:56:34.798434 4991 generic.go:334] "Generic (PLEG): container finished" podID="98fa266a-945d-4e5e-ac27-fe9dd8198384" containerID="eaf03f4a8f215c6f47b96ccb00f78891e5869d02d95ba48410fa512e1d0d3239" exitCode=0 Sep 29 11:56:34 crc kubenswrapper[4991]: I0929 11:56:34.798519 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fh4vl" event={"ID":"98fa266a-945d-4e5e-ac27-fe9dd8198384","Type":"ContainerDied","Data":"eaf03f4a8f215c6f47b96ccb00f78891e5869d02d95ba48410fa512e1d0d3239"} Sep 29 11:56:34 crc kubenswrapper[4991]: I0929 11:56:34.803821 4991 generic.go:334] "Generic (PLEG): container finished" podID="e4dd7182-c1a5-43e5-8738-049ade97e75b" containerID="0c31914fe0a8c883b1b8b1782c55aa932177a69465b93f51e81c4c50e06449ce" exitCode=0 Sep 29 11:56:34 crc kubenswrapper[4991]: I0929 11:56:34.803879 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2j7q" event={"ID":"e4dd7182-c1a5-43e5-8738-049ade97e75b","Type":"ContainerDied","Data":"0c31914fe0a8c883b1b8b1782c55aa932177a69465b93f51e81c4c50e06449ce"} Sep 29 11:56:34 crc kubenswrapper[4991]: I0929 11:56:34.803904 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-d2j7q" event={"ID":"e4dd7182-c1a5-43e5-8738-049ade97e75b","Type":"ContainerDied","Data":"33dea95e46517542a89a4bebbaa8012ffdf1ef1c345c8a46047ab3d6e5dd14d1"} Sep 29 11:56:34 crc kubenswrapper[4991]: I0929 11:56:34.803933 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33dea95e46517542a89a4bebbaa8012ffdf1ef1c345c8a46047ab3d6e5dd14d1" Sep 29 11:56:34 crc kubenswrapper[4991]: I0929 11:56:34.810093 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2j7q" Sep 29 11:56:34 crc kubenswrapper[4991]: I0929 11:56:34.914471 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h92c5\" (UniqueName: \"kubernetes.io/projected/e4dd7182-c1a5-43e5-8738-049ade97e75b-kube-api-access-h92c5\") pod \"e4dd7182-c1a5-43e5-8738-049ade97e75b\" (UID: \"e4dd7182-c1a5-43e5-8738-049ade97e75b\") " Sep 29 11:56:34 crc kubenswrapper[4991]: I0929 11:56:34.914706 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4dd7182-c1a5-43e5-8738-049ade97e75b-utilities\") pod \"e4dd7182-c1a5-43e5-8738-049ade97e75b\" (UID: \"e4dd7182-c1a5-43e5-8738-049ade97e75b\") " Sep 29 11:56:34 crc kubenswrapper[4991]: I0929 11:56:34.914875 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4dd7182-c1a5-43e5-8738-049ade97e75b-catalog-content\") pod \"e4dd7182-c1a5-43e5-8738-049ade97e75b\" (UID: \"e4dd7182-c1a5-43e5-8738-049ade97e75b\") " Sep 29 11:56:34 crc kubenswrapper[4991]: I0929 11:56:34.915654 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4dd7182-c1a5-43e5-8738-049ade97e75b-utilities" (OuterVolumeSpecName: "utilities") pod "e4dd7182-c1a5-43e5-8738-049ade97e75b" (UID: "e4dd7182-c1a5-43e5-8738-049ade97e75b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:56:34 crc kubenswrapper[4991]: I0929 11:56:34.915916 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4dd7182-c1a5-43e5-8738-049ade97e75b-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 11:56:34 crc kubenswrapper[4991]: I0929 11:56:34.920667 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4dd7182-c1a5-43e5-8738-049ade97e75b-kube-api-access-h92c5" (OuterVolumeSpecName: "kube-api-access-h92c5") pod "e4dd7182-c1a5-43e5-8738-049ade97e75b" (UID: "e4dd7182-c1a5-43e5-8738-049ade97e75b"). InnerVolumeSpecName "kube-api-access-h92c5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:56:34 crc kubenswrapper[4991]: I0929 11:56:34.958061 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4dd7182-c1a5-43e5-8738-049ade97e75b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4dd7182-c1a5-43e5-8738-049ade97e75b" (UID: "e4dd7182-c1a5-43e5-8738-049ade97e75b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:56:35 crc kubenswrapper[4991]: I0929 11:56:35.018296 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4dd7182-c1a5-43e5-8738-049ade97e75b-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 11:56:35 crc kubenswrapper[4991]: I0929 11:56:35.018658 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h92c5\" (UniqueName: \"kubernetes.io/projected/e4dd7182-c1a5-43e5-8738-049ade97e75b-kube-api-access-h92c5\") on node \"crc\" DevicePath \"\"" Sep 29 11:56:35 crc kubenswrapper[4991]: I0929 11:56:35.819179 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2j7q" Sep 29 11:56:35 crc kubenswrapper[4991]: I0929 11:56:35.819167 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fh4vl" event={"ID":"98fa266a-945d-4e5e-ac27-fe9dd8198384","Type":"ContainerStarted","Data":"08be643fcae28e311a9e37af1ba7d132931e2bd2339c0036b637f004adb539ba"} Sep 29 11:56:35 crc kubenswrapper[4991]: I0929 11:56:35.864138 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fh4vl" podStartSLOduration=3.366756611 podStartE2EDuration="5.864112475s" podCreationTimestamp="2025-09-29 11:56:30 +0000 UTC" firstStartedPulling="2025-09-29 11:56:32.780862158 +0000 UTC m=+8328.636790186" lastFinishedPulling="2025-09-29 11:56:35.278218022 +0000 UTC m=+8331.134146050" observedRunningTime="2025-09-29 11:56:35.856911527 +0000 UTC m=+8331.712839565" watchObservedRunningTime="2025-09-29 11:56:35.864112475 +0000 UTC m=+8331.720040513" Sep 29 11:56:35 crc kubenswrapper[4991]: I0929 11:56:35.914927 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2j7q"] Sep 29 11:56:35 crc kubenswrapper[4991]: I0929 11:56:35.929172 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d2j7q"] Sep 29 11:56:36 crc kubenswrapper[4991]: I0929 11:56:36.958851 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4dd7182-c1a5-43e5-8738-049ade97e75b" path="/var/lib/kubelet/pods/e4dd7182-c1a5-43e5-8738-049ade97e75b/volumes" Sep 29 11:56:41 crc kubenswrapper[4991]: I0929 11:56:41.288245 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fh4vl" Sep 29 11:56:41 crc kubenswrapper[4991]: I0929 11:56:41.288870 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fh4vl" Sep 29 11:56:41 crc kubenswrapper[4991]: I0929 11:56:41.339490 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fh4vl" Sep 29 11:56:41 crc kubenswrapper[4991]: I0929 11:56:41.926182 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" Sep 29 11:56:41 crc kubenswrapper[4991]: E0929 11:56:41.926507 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" 
podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:56:41 crc kubenswrapper[4991]: I0929 11:56:41.934723 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fh4vl" Sep 29 11:56:41 crc kubenswrapper[4991]: I0929 11:56:41.991486 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fh4vl"] Sep 29 11:56:43 crc kubenswrapper[4991]: I0929 11:56:43.896408 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fh4vl" podUID="98fa266a-945d-4e5e-ac27-fe9dd8198384" containerName="registry-server" containerID="cri-o://08be643fcae28e311a9e37af1ba7d132931e2bd2339c0036b637f004adb539ba" gracePeriod=2 Sep 29 11:56:44 crc kubenswrapper[4991]: I0929 11:56:44.419673 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fh4vl" Sep 29 11:56:44 crc kubenswrapper[4991]: I0929 11:56:44.550573 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98fa266a-945d-4e5e-ac27-fe9dd8198384-utilities\") pod \"98fa266a-945d-4e5e-ac27-fe9dd8198384\" (UID: \"98fa266a-945d-4e5e-ac27-fe9dd8198384\") " Sep 29 11:56:44 crc kubenswrapper[4991]: I0929 11:56:44.550658 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjhcf\" (UniqueName: \"kubernetes.io/projected/98fa266a-945d-4e5e-ac27-fe9dd8198384-kube-api-access-bjhcf\") pod \"98fa266a-945d-4e5e-ac27-fe9dd8198384\" (UID: \"98fa266a-945d-4e5e-ac27-fe9dd8198384\") " Sep 29 11:56:44 crc kubenswrapper[4991]: I0929 11:56:44.550766 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98fa266a-945d-4e5e-ac27-fe9dd8198384-catalog-content\") pod \"98fa266a-945d-4e5e-ac27-fe9dd8198384\" (UID: \"98fa266a-945d-4e5e-ac27-fe9dd8198384\") " Sep 29 11:56:44 crc kubenswrapper[4991]: I0929 11:56:44.551859 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98fa266a-945d-4e5e-ac27-fe9dd8198384-utilities" (OuterVolumeSpecName: "utilities") pod "98fa266a-945d-4e5e-ac27-fe9dd8198384" (UID: "98fa266a-945d-4e5e-ac27-fe9dd8198384"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:56:44 crc kubenswrapper[4991]: I0929 11:56:44.564446 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98fa266a-945d-4e5e-ac27-fe9dd8198384-kube-api-access-bjhcf" (OuterVolumeSpecName: "kube-api-access-bjhcf") pod "98fa266a-945d-4e5e-ac27-fe9dd8198384" (UID: "98fa266a-945d-4e5e-ac27-fe9dd8198384"). InnerVolumeSpecName "kube-api-access-bjhcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 11:56:44 crc kubenswrapper[4991]: I0929 11:56:44.565445 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98fa266a-945d-4e5e-ac27-fe9dd8198384-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98fa266a-945d-4e5e-ac27-fe9dd8198384" (UID: "98fa266a-945d-4e5e-ac27-fe9dd8198384"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 11:56:44 crc kubenswrapper[4991]: I0929 11:56:44.653928 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98fa266a-945d-4e5e-ac27-fe9dd8198384-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 11:56:44 crc kubenswrapper[4991]: I0929 11:56:44.654185 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98fa266a-945d-4e5e-ac27-fe9dd8198384-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 11:56:44 crc kubenswrapper[4991]: I0929 11:56:44.654249 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjhcf\" (UniqueName: \"kubernetes.io/projected/98fa266a-945d-4e5e-ac27-fe9dd8198384-kube-api-access-bjhcf\") on node \"crc\" DevicePath \"\"" Sep 29 11:56:44 crc kubenswrapper[4991]: I0929 11:56:44.920517 4991 generic.go:334] "Generic (PLEG): container finished" podID="98fa266a-945d-4e5e-ac27-fe9dd8198384" containerID="08be643fcae28e311a9e37af1ba7d132931e2bd2339c0036b637f004adb539ba" exitCode=0 Sep 29 11:56:44 crc kubenswrapper[4991]: I0929 11:56:44.921059 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fh4vl" event={"ID":"98fa266a-945d-4e5e-ac27-fe9dd8198384","Type":"ContainerDied","Data":"08be643fcae28e311a9e37af1ba7d132931e2bd2339c0036b637f004adb539ba"} Sep 29 11:56:44 crc kubenswrapper[4991]: I0929 11:56:44.921118 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fh4vl" event={"ID":"98fa266a-945d-4e5e-ac27-fe9dd8198384","Type":"ContainerDied","Data":"9dc3d2982889d188ad200ffb345cf0f0b06f762162ab21ba06346ea085428ece"} Sep 29 11:56:44 crc kubenswrapper[4991]: I0929 11:56:44.921173 4991 scope.go:117] "RemoveContainer" containerID="08be643fcae28e311a9e37af1ba7d132931e2bd2339c0036b637f004adb539ba" Sep 29 11:56:44 crc kubenswrapper[4991]: I0929 11:56:44.921218 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fh4vl" Sep 29 11:56:44 crc kubenswrapper[4991]: I0929 11:56:44.945781 4991 scope.go:117] "RemoveContainer" containerID="eaf03f4a8f215c6f47b96ccb00f78891e5869d02d95ba48410fa512e1d0d3239" Sep 29 11:56:44 crc kubenswrapper[4991]: I0929 11:56:44.992553 4991 scope.go:117] "RemoveContainer" containerID="fe4a455db0dfe14062c82216ce94f3cbeb0c34cda02382230217d276c0128ca3" Sep 29 11:56:45 crc kubenswrapper[4991]: I0929 11:56:45.003817 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fh4vl"] Sep 29 11:56:45 crc kubenswrapper[4991]: I0929 11:56:45.031259 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fh4vl"] Sep 29 11:56:45 crc kubenswrapper[4991]: I0929 11:56:45.040746 4991 scope.go:117] "RemoveContainer" containerID="08be643fcae28e311a9e37af1ba7d132931e2bd2339c0036b637f004adb539ba" Sep 29 11:56:45 crc kubenswrapper[4991]: E0929 11:56:45.047132 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08be643fcae28e311a9e37af1ba7d132931e2bd2339c0036b637f004adb539ba\": container with ID starting with 08be643fcae28e311a9e37af1ba7d132931e2bd2339c0036b637f004adb539ba not found: ID does not exist" containerID="08be643fcae28e311a9e37af1ba7d132931e2bd2339c0036b637f004adb539ba" Sep 29 11:56:45 crc kubenswrapper[4991]: I0929 11:56:45.047168 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08be643fcae28e311a9e37af1ba7d132931e2bd2339c0036b637f004adb539ba"} err="failed to get container status \"08be643fcae28e311a9e37af1ba7d132931e2bd2339c0036b637f004adb539ba\": rpc error: code = NotFound desc = could not find container \"08be643fcae28e311a9e37af1ba7d132931e2bd2339c0036b637f004adb539ba\": container with ID starting with 08be643fcae28e311a9e37af1ba7d132931e2bd2339c0036b637f004adb539ba not found: ID does not exist" Sep 29 11:56:45 crc kubenswrapper[4991]: I0929 11:56:45.047192 4991 scope.go:117] "RemoveContainer" containerID="eaf03f4a8f215c6f47b96ccb00f78891e5869d02d95ba48410fa512e1d0d3239" Sep 29 11:56:45 crc kubenswrapper[4991]: E0929 11:56:45.047602 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaf03f4a8f215c6f47b96ccb00f78891e5869d02d95ba48410fa512e1d0d3239\": container with ID starting with eaf03f4a8f215c6f47b96ccb00f78891e5869d02d95ba48410fa512e1d0d3239 not found: ID does not exist" containerID="eaf03f4a8f215c6f47b96ccb00f78891e5869d02d95ba48410fa512e1d0d3239" Sep 29 11:56:45 crc kubenswrapper[4991]: I0929 11:56:45.047662 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf03f4a8f215c6f47b96ccb00f78891e5869d02d95ba48410fa512e1d0d3239"} err="failed to get container status \"eaf03f4a8f215c6f47b96ccb00f78891e5869d02d95ba48410fa512e1d0d3239\": rpc error: code = NotFound desc = could not find container \"eaf03f4a8f215c6f47b96ccb00f78891e5869d02d95ba48410fa512e1d0d3239\": container with ID starting with eaf03f4a8f215c6f47b96ccb00f78891e5869d02d95ba48410fa512e1d0d3239 not found: ID does not exist" Sep 29 11:56:45 crc kubenswrapper[4991]: I0929 11:56:45.047689 4991 scope.go:117] "RemoveContainer" containerID="fe4a455db0dfe14062c82216ce94f3cbeb0c34cda02382230217d276c0128ca3" Sep 29 11:56:45 crc kubenswrapper[4991]: E0929 11:56:45.048083 4991 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fe4a455db0dfe14062c82216ce94f3cbeb0c34cda02382230217d276c0128ca3\": container with ID starting with fe4a455db0dfe14062c82216ce94f3cbeb0c34cda02382230217d276c0128ca3 not found: ID does not exist" containerID="fe4a455db0dfe14062c82216ce94f3cbeb0c34cda02382230217d276c0128ca3" Sep 29 11:56:45 crc kubenswrapper[4991]: I0929 11:56:45.048104 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4a455db0dfe14062c82216ce94f3cbeb0c34cda02382230217d276c0128ca3"} err="failed to get container status \"fe4a455db0dfe14062c82216ce94f3cbeb0c34cda02382230217d276c0128ca3\": rpc error: code = NotFound desc = could not find container \"fe4a455db0dfe14062c82216ce94f3cbeb0c34cda02382230217d276c0128ca3\": container with ID starting with fe4a455db0dfe14062c82216ce94f3cbeb0c34cda02382230217d276c0128ca3 not found: ID does not exist" Sep 29 11:56:46 crc kubenswrapper[4991]: E0929 11:56:46.836696 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98fa266a_945d_4e5e_ac27_fe9dd8198384.slice\": RecentStats: unable to find data in memory cache]" Sep 29 11:56:46 crc kubenswrapper[4991]: I0929 11:56:46.940269 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98fa266a-945d-4e5e-ac27-fe9dd8198384" path="/var/lib/kubelet/pods/98fa266a-945d-4e5e-ac27-fe9dd8198384/volumes" Sep 29 11:56:48 crc kubenswrapper[4991]: E0929 11:56:48.105038 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98fa266a_945d_4e5e_ac27_fe9dd8198384.slice\": RecentStats: unable to find data in memory cache]" Sep 29 11:56:48 crc kubenswrapper[4991]: E0929 11:56:48.105986 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98fa266a_945d_4e5e_ac27_fe9dd8198384.slice\": RecentStats: unable to find data in memory cache]" Sep 29 11:56:49 crc kubenswrapper[4991]: E0929 11:56:49.635288 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98fa266a_945d_4e5e_ac27_fe9dd8198384.slice\": RecentStats: unable to find data in memory cache]" Sep 29 11:56:52 crc kubenswrapper[4991]: I0929 11:56:52.926359 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" Sep 29 11:56:52 crc kubenswrapper[4991]: E0929 11:56:52.926926 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:56:59 crc kubenswrapper[4991]: E0929 11:56:59.956413 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98fa266a_945d_4e5e_ac27_fe9dd8198384.slice\": RecentStats: unable to find data in memory cache]" Sep 29 
11:57:01 crc kubenswrapper[4991]: E0929 11:57:01.597883 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98fa266a_945d_4e5e_ac27_fe9dd8198384.slice\": RecentStats: unable to find data in memory cache]" Sep 29 11:57:07 crc kubenswrapper[4991]: I0929 11:57:07.927097 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" Sep 29 11:57:07 crc kubenswrapper[4991]: E0929 11:57:07.927924 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:57:10 crc kubenswrapper[4991]: E0929 11:57:10.247439 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98fa266a_945d_4e5e_ac27_fe9dd8198384.slice\": RecentStats: unable to find data in memory cache]" Sep 29 11:57:16 crc kubenswrapper[4991]: E0929 11:57:16.848417 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98fa266a_945d_4e5e_ac27_fe9dd8198384.slice\": RecentStats: unable to find data in memory cache]" Sep 29 11:57:19 crc kubenswrapper[4991]: I0929 11:57:19.926329 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" Sep 29 11:57:19 crc kubenswrapper[4991]: E0929 11:57:19.926900 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 11:57:20 crc kubenswrapper[4991]: E0929 11:57:20.289857 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98fa266a_945d_4e5e_ac27_fe9dd8198384.slice\": RecentStats: unable to find data in memory cache]" Sep 29 11:57:30 crc kubenswrapper[4991]: E0929 11:57:30.616046 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98fa266a_945d_4e5e_ac27_fe9dd8198384.slice\": RecentStats: unable to find data in memory cache]" Sep 29 11:57:31 crc kubenswrapper[4991]: E0929 11:57:31.595541 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98fa266a_945d_4e5e_ac27_fe9dd8198384.slice\": RecentStats: unable to find data in memory cache]" Sep 29 11:57:32 crc kubenswrapper[4991]: I0929 11:57:32.926362 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" Sep 29 11:57:32 
crc kubenswrapper[4991]: E0929 11:57:32.926661 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:57:40 crc kubenswrapper[4991]: E0929 11:57:40.916135 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98fa266a_945d_4e5e_ac27_fe9dd8198384.slice\": RecentStats: unable to find data in memory cache]"
Sep 29 11:57:45 crc kubenswrapper[4991]: I0929 11:57:45.926562 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6"
Sep 29 11:57:45 crc kubenswrapper[4991]: E0929 11:57:45.927406 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:57:59 crc kubenswrapper[4991]: I0929 11:57:59.926910 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6"
Sep 29 11:57:59 crc kubenswrapper[4991]: E0929 11:57:59.928115 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:58:14 crc kubenswrapper[4991]: I0929 11:58:14.935711 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6"
Sep 29 11:58:14 crc kubenswrapper[4991]: E0929 11:58:14.936866 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 11:58:27 crc kubenswrapper[4991]: I0929 11:58:27.925896 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6"
Sep 29 11:58:27 crc kubenswrapper[4991]: E0929 11:58:27.926581 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
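Each "RemoveContainer" / "Error syncing pod" pair above is one sync attempt rejected while the crash-loop back-off is still in force; the delay roughly doubles per consecutive crash until it pins at the 5m0s quoted in the message. A rough Go sketch of that doubling-with-cap policy (the 10s base and 5m cap are kubelet defaults; this helper is illustrative, not kubelet's actual back-off implementation):

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay sketches the kubelet's container restart back-off:
// an initial delay that doubles on every consecutive crash, capped at
// five minutes.
func crashLoopDelay(consecutiveCrashes int) time.Duration {
	const (
		initialDelay = 10 * time.Second
		maxDelay     = 5 * time.Minute
	)
	delay := initialDelay
	for i := 0; i < consecutiveCrashes; i++ {
		delay *= 2
		if delay >= maxDelay {
			return maxDelay
		}
	}
	return delay
}

func main() {
	for n := 0; n <= 6; n++ {
		fmt.Printf("crash %d -> wait %v before next restart\n", n, crashLoopDelay(n))
	}
	// After about five crashes the delay pins at 5m0s, which is why the
	// "back-off 5m0s restarting failed container" message repeats above
	// on every pod sync until the back-off window finally expires.
}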
Sep 29 11:58:41 crc kubenswrapper[4991]: I0929 11:58:41.926290 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6"
Sep 29 11:58:42 crc kubenswrapper[4991]: I0929 11:58:42.178240 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"8e19d040b6e841ece3d54e9ed8988f8bed6f11f22b08f090ee43dabd4a63f94d"}
Sep 29 11:58:50 crc kubenswrapper[4991]: I0929 11:58:50.638873 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wsh2k"]
Sep 29 11:58:50 crc kubenswrapper[4991]: E0929 11:58:50.639968 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4dd7182-c1a5-43e5-8738-049ade97e75b" containerName="extract-utilities"
Sep 29 11:58:50 crc kubenswrapper[4991]: I0929 11:58:50.639984 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4dd7182-c1a5-43e5-8738-049ade97e75b" containerName="extract-utilities"
Sep 29 11:58:50 crc kubenswrapper[4991]: E0929 11:58:50.640000 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4dd7182-c1a5-43e5-8738-049ade97e75b" containerName="extract-content"
Sep 29 11:58:50 crc kubenswrapper[4991]: I0929 11:58:50.640007 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4dd7182-c1a5-43e5-8738-049ade97e75b" containerName="extract-content"
Sep 29 11:58:50 crc kubenswrapper[4991]: E0929 11:58:50.640046 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98fa266a-945d-4e5e-ac27-fe9dd8198384" containerName="extract-content"
Sep 29 11:58:50 crc kubenswrapper[4991]: I0929 11:58:50.640056 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="98fa266a-945d-4e5e-ac27-fe9dd8198384" containerName="extract-content"
Sep 29 11:58:50 crc kubenswrapper[4991]: E0929 11:58:50.640068 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98fa266a-945d-4e5e-ac27-fe9dd8198384" containerName="registry-server"
Sep 29 11:58:50 crc kubenswrapper[4991]: I0929 11:58:50.640074 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="98fa266a-945d-4e5e-ac27-fe9dd8198384" containerName="registry-server"
Sep 29 11:58:50 crc kubenswrapper[4991]: E0929 11:58:50.640113 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98fa266a-945d-4e5e-ac27-fe9dd8198384" containerName="extract-utilities"
Sep 29 11:58:50 crc kubenswrapper[4991]: I0929 11:58:50.640121 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="98fa266a-945d-4e5e-ac27-fe9dd8198384" containerName="extract-utilities"
Sep 29 11:58:50 crc kubenswrapper[4991]: E0929 11:58:50.640140 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4dd7182-c1a5-43e5-8738-049ade97e75b" containerName="registry-server"
Sep 29 11:58:50 crc kubenswrapper[4991]: I0929 11:58:50.640147 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4dd7182-c1a5-43e5-8738-049ade97e75b" containerName="registry-server"
Sep 29 11:58:50 crc kubenswrapper[4991]: I0929 11:58:50.640435 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4dd7182-c1a5-43e5-8738-049ade97e75b" containerName="registry-server"
Sep 29 11:58:50 crc kubenswrapper[4991]: I0929 11:58:50.640467 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="98fa266a-945d-4e5e-ac27-fe9dd8198384" containerName="registry-server"
Sep 29 11:58:50 crc kubenswrapper[4991]: I0929 11:58:50.642143 4991 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-wsh2k" Sep 29 11:58:50 crc kubenswrapper[4991]: I0929 11:58:50.662683 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wsh2k"] Sep 29 11:58:50 crc kubenswrapper[4991]: I0929 11:58:50.781363 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24e26a0b-58be-4559-9489-b714d9591d70-catalog-content\") pod \"redhat-operators-wsh2k\" (UID: \"24e26a0b-58be-4559-9489-b714d9591d70\") " pod="openshift-marketplace/redhat-operators-wsh2k" Sep 29 11:58:50 crc kubenswrapper[4991]: I0929 11:58:50.781439 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2knln\" (UniqueName: \"kubernetes.io/projected/24e26a0b-58be-4559-9489-b714d9591d70-kube-api-access-2knln\") pod \"redhat-operators-wsh2k\" (UID: \"24e26a0b-58be-4559-9489-b714d9591d70\") " pod="openshift-marketplace/redhat-operators-wsh2k" Sep 29 11:58:50 crc kubenswrapper[4991]: I0929 11:58:50.781559 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24e26a0b-58be-4559-9489-b714d9591d70-utilities\") pod \"redhat-operators-wsh2k\" (UID: \"24e26a0b-58be-4559-9489-b714d9591d70\") " pod="openshift-marketplace/redhat-operators-wsh2k" Sep 29 11:58:50 crc kubenswrapper[4991]: I0929 11:58:50.884013 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24e26a0b-58be-4559-9489-b714d9591d70-catalog-content\") pod \"redhat-operators-wsh2k\" (UID: \"24e26a0b-58be-4559-9489-b714d9591d70\") " pod="openshift-marketplace/redhat-operators-wsh2k" Sep 29 11:58:50 crc kubenswrapper[4991]: I0929 11:58:50.884084 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2knln\" (UniqueName: \"kubernetes.io/projected/24e26a0b-58be-4559-9489-b714d9591d70-kube-api-access-2knln\") pod \"redhat-operators-wsh2k\" (UID: \"24e26a0b-58be-4559-9489-b714d9591d70\") " pod="openshift-marketplace/redhat-operators-wsh2k" Sep 29 11:58:50 crc kubenswrapper[4991]: I0929 11:58:50.884136 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24e26a0b-58be-4559-9489-b714d9591d70-utilities\") pod \"redhat-operators-wsh2k\" (UID: \"24e26a0b-58be-4559-9489-b714d9591d70\") " pod="openshift-marketplace/redhat-operators-wsh2k" Sep 29 11:58:50 crc kubenswrapper[4991]: I0929 11:58:50.884818 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24e26a0b-58be-4559-9489-b714d9591d70-utilities\") pod \"redhat-operators-wsh2k\" (UID: \"24e26a0b-58be-4559-9489-b714d9591d70\") " pod="openshift-marketplace/redhat-operators-wsh2k" Sep 29 11:58:50 crc kubenswrapper[4991]: I0929 11:58:50.885057 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24e26a0b-58be-4559-9489-b714d9591d70-catalog-content\") pod \"redhat-operators-wsh2k\" (UID: \"24e26a0b-58be-4559-9489-b714d9591d70\") " pod="openshift-marketplace/redhat-operators-wsh2k" Sep 29 11:58:50 crc kubenswrapper[4991]: I0929 11:58:50.906277 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2knln\" (UniqueName: \"kubernetes.io/projected/24e26a0b-58be-4559-9489-b714d9591d70-kube-api-access-2knln\") pod \"redhat-operators-wsh2k\" (UID: \"24e26a0b-58be-4559-9489-b714d9591d70\") " pod="openshift-marketplace/redhat-operators-wsh2k" Sep 29 11:58:51 crc kubenswrapper[4991]: I0929 11:58:51.013731 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wsh2k" Sep 29 11:58:51 crc kubenswrapper[4991]: I0929 11:58:51.515862 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wsh2k"] Sep 29 11:58:52 crc kubenswrapper[4991]: I0929 11:58:52.316273 4991 generic.go:334] "Generic (PLEG): container finished" podID="24e26a0b-58be-4559-9489-b714d9591d70" containerID="517eb3970e62971042f5c7c248276ed2ba4086a587c5a4fdb5891cb871b27db6" exitCode=0 Sep 29 11:58:52 crc kubenswrapper[4991]: I0929 11:58:52.316365 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsh2k" event={"ID":"24e26a0b-58be-4559-9489-b714d9591d70","Type":"ContainerDied","Data":"517eb3970e62971042f5c7c248276ed2ba4086a587c5a4fdb5891cb871b27db6"} Sep 29 11:58:52 crc kubenswrapper[4991]: I0929 11:58:52.316674 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsh2k" event={"ID":"24e26a0b-58be-4559-9489-b714d9591d70","Type":"ContainerStarted","Data":"a55fcd25c8f844e7c38e01a4e6091133582999fec44c385a4f5c681fd03a0637"} Sep 29 11:58:53 crc kubenswrapper[4991]: I0929 11:58:53.331782 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsh2k" event={"ID":"24e26a0b-58be-4559-9489-b714d9591d70","Type":"ContainerStarted","Data":"0a57d43e8269fdf94fba0282c46b2e68532819f3a0ed296a3080f627f1233be4"} Sep 29 11:58:57 crc kubenswrapper[4991]: I0929 11:58:57.377314 4991 generic.go:334] "Generic (PLEG): container finished" podID="24e26a0b-58be-4559-9489-b714d9591d70" containerID="0a57d43e8269fdf94fba0282c46b2e68532819f3a0ed296a3080f627f1233be4" exitCode=0 Sep 29 11:58:57 crc kubenswrapper[4991]: I0929 11:58:57.377386 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsh2k" event={"ID":"24e26a0b-58be-4559-9489-b714d9591d70","Type":"ContainerDied","Data":"0a57d43e8269fdf94fba0282c46b2e68532819f3a0ed296a3080f627f1233be4"} Sep 29 11:58:59 crc kubenswrapper[4991]: I0929 11:58:59.408292 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsh2k" event={"ID":"24e26a0b-58be-4559-9489-b714d9591d70","Type":"ContainerStarted","Data":"b9a4ef79486f0b8d8aa681e00c6bbbd1aa90c3729a76a39ea088fd18fe255370"} Sep 29 11:58:59 crc kubenswrapper[4991]: I0929 11:58:59.432332 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wsh2k" podStartSLOduration=3.266847018 podStartE2EDuration="9.432310076s" podCreationTimestamp="2025-09-29 11:58:50 +0000 UTC" firstStartedPulling="2025-09-29 11:58:52.318028865 +0000 UTC m=+8468.173956893" lastFinishedPulling="2025-09-29 11:58:58.483491923 +0000 UTC m=+8474.339419951" observedRunningTime="2025-09-29 11:58:59.42518635 +0000 UTC m=+8475.281114378" watchObservedRunningTime="2025-09-29 11:58:59.432310076 +0000 UTC m=+8475.288238104" Sep 29 11:59:01 crc kubenswrapper[4991]: I0929 11:59:01.014723 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wsh2k" Sep 29 
11:59:01 crc kubenswrapper[4991]: I0929 11:59:01.014779 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wsh2k"
Sep 29 11:59:02 crc kubenswrapper[4991]: I0929 11:59:02.082920 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wsh2k" podUID="24e26a0b-58be-4559-9489-b714d9591d70" containerName="registry-server" probeResult="failure" output=<
Sep 29 11:59:02 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s
Sep 29 11:59:02 crc kubenswrapper[4991]: >
Sep 29 11:59:11 crc kubenswrapper[4991]: I0929 11:59:11.070884 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wsh2k"
Sep 29 11:59:11 crc kubenswrapper[4991]: I0929 11:59:11.128140 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wsh2k"
Sep 29 11:59:11 crc kubenswrapper[4991]: I0929 11:59:11.314908 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wsh2k"]
Sep 29 11:59:12 crc kubenswrapper[4991]: I0929 11:59:12.554152 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wsh2k" podUID="24e26a0b-58be-4559-9489-b714d9591d70" containerName="registry-server" containerID="cri-o://b9a4ef79486f0b8d8aa681e00c6bbbd1aa90c3729a76a39ea088fd18fe255370" gracePeriod=2
Sep 29 11:59:13 crc kubenswrapper[4991]: I0929 11:59:13.568241 4991 generic.go:334] "Generic (PLEG): container finished" podID="24e26a0b-58be-4559-9489-b714d9591d70" containerID="b9a4ef79486f0b8d8aa681e00c6bbbd1aa90c3729a76a39ea088fd18fe255370" exitCode=0
Sep 29 11:59:13 crc kubenswrapper[4991]: I0929 11:59:13.568301 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsh2k" event={"ID":"24e26a0b-58be-4559-9489-b714d9591d70","Type":"ContainerDied","Data":"b9a4ef79486f0b8d8aa681e00c6bbbd1aa90c3729a76a39ea088fd18fe255370"}
Sep 29 11:59:13 crc kubenswrapper[4991]: I0929 11:59:13.568915 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsh2k" event={"ID":"24e26a0b-58be-4559-9489-b714d9591d70","Type":"ContainerDied","Data":"a55fcd25c8f844e7c38e01a4e6091133582999fec44c385a4f5c681fd03a0637"}
Sep 29 11:59:13 crc kubenswrapper[4991]: I0929 11:59:13.568932 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a55fcd25c8f844e7c38e01a4e6091133582999fec44c385a4f5c681fd03a0637"
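The startup probe output above has the signature of a gRPC health check against the catalog pod's registry-server port; the message format matches grpc_health_probe -addr=:50051, which is what OLM catalog-source pods commonly exec (an assumption here, not confirmed by the log itself). A minimal Go sketch of the same check, using the standard grpc_health_v1 service and the 1s budget from the log:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

// Minimal stand-in for `grpc_health_probe -addr=:50051`: dial the
// registry-server port and call the standard gRPC health service,
// failing if the round trip exceeds the probe's 1s budget.
func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock())
	if err != nil {
		// Mirrors the failure output captured in the probe entry above.
		fmt.Printf("timeout: failed to connect service %q within 1s\n", ":50051")
		return
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("health check failed:", err)
		return
	}
	fmt.Println("status:", resp.GetStatus()) // SERVING once the catalog has loaded
}

The probe fails at 11:59:02 while the freshly extracted catalog is still loading, then flips to started/ready at 11:59:11 once the server answers within the timeout.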
Sep 29 11:59:13 crc kubenswrapper[4991]: I0929 11:59:13.607250 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wsh2k"
Sep 29 11:59:13 crc kubenswrapper[4991]: I0929 11:59:13.684285 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2knln\" (UniqueName: \"kubernetes.io/projected/24e26a0b-58be-4559-9489-b714d9591d70-kube-api-access-2knln\") pod \"24e26a0b-58be-4559-9489-b714d9591d70\" (UID: \"24e26a0b-58be-4559-9489-b714d9591d70\") "
Sep 29 11:59:13 crc kubenswrapper[4991]: I0929 11:59:13.684456 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24e26a0b-58be-4559-9489-b714d9591d70-utilities\") pod \"24e26a0b-58be-4559-9489-b714d9591d70\" (UID: \"24e26a0b-58be-4559-9489-b714d9591d70\") "
Sep 29 11:59:13 crc kubenswrapper[4991]: I0929 11:59:13.684535 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24e26a0b-58be-4559-9489-b714d9591d70-catalog-content\") pod \"24e26a0b-58be-4559-9489-b714d9591d70\" (UID: \"24e26a0b-58be-4559-9489-b714d9591d70\") "
Sep 29 11:59:13 crc kubenswrapper[4991]: I0929 11:59:13.686267 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24e26a0b-58be-4559-9489-b714d9591d70-utilities" (OuterVolumeSpecName: "utilities") pod "24e26a0b-58be-4559-9489-b714d9591d70" (UID: "24e26a0b-58be-4559-9489-b714d9591d70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 11:59:13 crc kubenswrapper[4991]: I0929 11:59:13.707242 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24e26a0b-58be-4559-9489-b714d9591d70-kube-api-access-2knln" (OuterVolumeSpecName: "kube-api-access-2knln") pod "24e26a0b-58be-4559-9489-b714d9591d70" (UID: "24e26a0b-58be-4559-9489-b714d9591d70"). InnerVolumeSpecName "kube-api-access-2knln". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 11:59:13 crc kubenswrapper[4991]: I0929 11:59:13.776234 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24e26a0b-58be-4559-9489-b714d9591d70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24e26a0b-58be-4559-9489-b714d9591d70" (UID: "24e26a0b-58be-4559-9489-b714d9591d70"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 11:59:13 crc kubenswrapper[4991]: I0929 11:59:13.787430 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2knln\" (UniqueName: \"kubernetes.io/projected/24e26a0b-58be-4559-9489-b714d9591d70-kube-api-access-2knln\") on node \"crc\" DevicePath \"\""
Sep 29 11:59:13 crc kubenswrapper[4991]: I0929 11:59:13.787464 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24e26a0b-58be-4559-9489-b714d9591d70-utilities\") on node \"crc\" DevicePath \"\""
Sep 29 11:59:13 crc kubenswrapper[4991]: I0929 11:59:13.787474 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24e26a0b-58be-4559-9489-b714d9591d70-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 29 11:59:14 crc kubenswrapper[4991]: I0929 11:59:14.580505 4991 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-wsh2k" Sep 29 11:59:14 crc kubenswrapper[4991]: I0929 11:59:14.621238 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wsh2k"] Sep 29 11:59:14 crc kubenswrapper[4991]: I0929 11:59:14.631389 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wsh2k"] Sep 29 11:59:14 crc kubenswrapper[4991]: I0929 11:59:14.939397 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24e26a0b-58be-4559-9489-b714d9591d70" path="/var/lib/kubelet/pods/24e26a0b-58be-4559-9489-b714d9591d70/volumes" Sep 29 12:00:00 crc kubenswrapper[4991]: I0929 12:00:00.160538 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319120-w5k9w"] Sep 29 12:00:00 crc kubenswrapper[4991]: E0929 12:00:00.161761 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e26a0b-58be-4559-9489-b714d9591d70" containerName="registry-server" Sep 29 12:00:00 crc kubenswrapper[4991]: I0929 12:00:00.161785 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e26a0b-58be-4559-9489-b714d9591d70" containerName="registry-server" Sep 29 12:00:00 crc kubenswrapper[4991]: E0929 12:00:00.161851 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e26a0b-58be-4559-9489-b714d9591d70" containerName="extract-content" Sep 29 12:00:00 crc kubenswrapper[4991]: I0929 12:00:00.161862 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e26a0b-58be-4559-9489-b714d9591d70" containerName="extract-content" Sep 29 12:00:00 crc kubenswrapper[4991]: E0929 12:00:00.161875 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e26a0b-58be-4559-9489-b714d9591d70" containerName="extract-utilities" Sep 29 12:00:00 crc kubenswrapper[4991]: I0929 12:00:00.161886 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e26a0b-58be-4559-9489-b714d9591d70" containerName="extract-utilities" Sep 29 12:00:00 crc kubenswrapper[4991]: I0929 12:00:00.162258 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e26a0b-58be-4559-9489-b714d9591d70" containerName="registry-server" Sep 29 12:00:00 crc kubenswrapper[4991]: I0929 12:00:00.163334 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319120-w5k9w" Sep 29 12:00:00 crc kubenswrapper[4991]: I0929 12:00:00.167707 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 29 12:00:00 crc kubenswrapper[4991]: I0929 12:00:00.167927 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 29 12:00:00 crc kubenswrapper[4991]: I0929 12:00:00.175276 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319120-w5k9w"] Sep 29 12:00:00 crc kubenswrapper[4991]: I0929 12:00:00.326625 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24c8a25f-3b4b-4750-b199-ea2c15bcd236-secret-volume\") pod \"collect-profiles-29319120-w5k9w\" (UID: \"24c8a25f-3b4b-4750-b199-ea2c15bcd236\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319120-w5k9w" Sep 29 12:00:00 crc kubenswrapper[4991]: I0929 12:00:00.326762 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24c8a25f-3b4b-4750-b199-ea2c15bcd236-config-volume\") pod \"collect-profiles-29319120-w5k9w\" (UID: \"24c8a25f-3b4b-4750-b199-ea2c15bcd236\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319120-w5k9w" Sep 29 12:00:00 crc kubenswrapper[4991]: I0929 12:00:00.326843 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvw7h\" (UniqueName: \"kubernetes.io/projected/24c8a25f-3b4b-4750-b199-ea2c15bcd236-kube-api-access-zvw7h\") pod \"collect-profiles-29319120-w5k9w\" (UID: \"24c8a25f-3b4b-4750-b199-ea2c15bcd236\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319120-w5k9w" Sep 29 12:00:00 crc kubenswrapper[4991]: I0929 12:00:00.429574 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24c8a25f-3b4b-4750-b199-ea2c15bcd236-config-volume\") pod \"collect-profiles-29319120-w5k9w\" (UID: \"24c8a25f-3b4b-4750-b199-ea2c15bcd236\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319120-w5k9w" Sep 29 12:00:00 crc kubenswrapper[4991]: I0929 12:00:00.429682 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvw7h\" (UniqueName: \"kubernetes.io/projected/24c8a25f-3b4b-4750-b199-ea2c15bcd236-kube-api-access-zvw7h\") pod \"collect-profiles-29319120-w5k9w\" (UID: \"24c8a25f-3b4b-4750-b199-ea2c15bcd236\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319120-w5k9w" Sep 29 12:00:00 crc kubenswrapper[4991]: I0929 12:00:00.429809 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24c8a25f-3b4b-4750-b199-ea2c15bcd236-secret-volume\") pod \"collect-profiles-29319120-w5k9w\" (UID: \"24c8a25f-3b4b-4750-b199-ea2c15bcd236\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319120-w5k9w" Sep 29 12:00:00 crc kubenswrapper[4991]: I0929 12:00:00.430568 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24c8a25f-3b4b-4750-b199-ea2c15bcd236-config-volume\") pod 
\"collect-profiles-29319120-w5k9w\" (UID: \"24c8a25f-3b4b-4750-b199-ea2c15bcd236\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319120-w5k9w" Sep 29 12:00:00 crc kubenswrapper[4991]: I0929 12:00:00.436771 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24c8a25f-3b4b-4750-b199-ea2c15bcd236-secret-volume\") pod \"collect-profiles-29319120-w5k9w\" (UID: \"24c8a25f-3b4b-4750-b199-ea2c15bcd236\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319120-w5k9w" Sep 29 12:00:00 crc kubenswrapper[4991]: I0929 12:00:00.450171 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvw7h\" (UniqueName: \"kubernetes.io/projected/24c8a25f-3b4b-4750-b199-ea2c15bcd236-kube-api-access-zvw7h\") pod \"collect-profiles-29319120-w5k9w\" (UID: \"24c8a25f-3b4b-4750-b199-ea2c15bcd236\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319120-w5k9w" Sep 29 12:00:00 crc kubenswrapper[4991]: I0929 12:00:00.494021 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319120-w5k9w" Sep 29 12:00:00 crc kubenswrapper[4991]: I0929 12:00:00.966420 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319120-w5k9w"] Sep 29 12:00:01 crc kubenswrapper[4991]: I0929 12:00:01.127803 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319120-w5k9w" event={"ID":"24c8a25f-3b4b-4750-b199-ea2c15bcd236","Type":"ContainerStarted","Data":"cb6a66febc2ca6f6f5b88bb70c48547ccd10f75bb8da06271232ced8305caf55"} Sep 29 12:00:02 crc kubenswrapper[4991]: I0929 12:00:02.140051 4991 generic.go:334] "Generic (PLEG): container finished" podID="24c8a25f-3b4b-4750-b199-ea2c15bcd236" containerID="a846b64e36801ce73ea6e55bd503308263efd9464807f0e6e9802ff0c1f1ef32" exitCode=0 Sep 29 12:00:02 crc kubenswrapper[4991]: I0929 12:00:02.140260 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319120-w5k9w" event={"ID":"24c8a25f-3b4b-4750-b199-ea2c15bcd236","Type":"ContainerDied","Data":"a846b64e36801ce73ea6e55bd503308263efd9464807f0e6e9802ff0c1f1ef32"} Sep 29 12:00:03 crc kubenswrapper[4991]: I0929 12:00:03.630551 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319120-w5k9w" Sep 29 12:00:03 crc kubenswrapper[4991]: I0929 12:00:03.705530 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvw7h\" (UniqueName: \"kubernetes.io/projected/24c8a25f-3b4b-4750-b199-ea2c15bcd236-kube-api-access-zvw7h\") pod \"24c8a25f-3b4b-4750-b199-ea2c15bcd236\" (UID: \"24c8a25f-3b4b-4750-b199-ea2c15bcd236\") " Sep 29 12:00:03 crc kubenswrapper[4991]: I0929 12:00:03.705657 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24c8a25f-3b4b-4750-b199-ea2c15bcd236-config-volume\") pod \"24c8a25f-3b4b-4750-b199-ea2c15bcd236\" (UID: \"24c8a25f-3b4b-4750-b199-ea2c15bcd236\") " Sep 29 12:00:03 crc kubenswrapper[4991]: I0929 12:00:03.705735 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24c8a25f-3b4b-4750-b199-ea2c15bcd236-secret-volume\") pod \"24c8a25f-3b4b-4750-b199-ea2c15bcd236\" (UID: \"24c8a25f-3b4b-4750-b199-ea2c15bcd236\") " Sep 29 12:00:03 crc kubenswrapper[4991]: I0929 12:00:03.706452 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24c8a25f-3b4b-4750-b199-ea2c15bcd236-config-volume" (OuterVolumeSpecName: "config-volume") pod "24c8a25f-3b4b-4750-b199-ea2c15bcd236" (UID: "24c8a25f-3b4b-4750-b199-ea2c15bcd236"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 12:00:03 crc kubenswrapper[4991]: I0929 12:00:03.706921 4991 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24c8a25f-3b4b-4750-b199-ea2c15bcd236-config-volume\") on node \"crc\" DevicePath \"\"" Sep 29 12:00:03 crc kubenswrapper[4991]: I0929 12:00:03.712309 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24c8a25f-3b4b-4750-b199-ea2c15bcd236-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "24c8a25f-3b4b-4750-b199-ea2c15bcd236" (UID: "24c8a25f-3b4b-4750-b199-ea2c15bcd236"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 12:00:03 crc kubenswrapper[4991]: I0929 12:00:03.712515 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24c8a25f-3b4b-4750-b199-ea2c15bcd236-kube-api-access-zvw7h" (OuterVolumeSpecName: "kube-api-access-zvw7h") pod "24c8a25f-3b4b-4750-b199-ea2c15bcd236" (UID: "24c8a25f-3b4b-4750-b199-ea2c15bcd236"). InnerVolumeSpecName "kube-api-access-zvw7h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 12:00:03 crc kubenswrapper[4991]: I0929 12:00:03.808359 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvw7h\" (UniqueName: \"kubernetes.io/projected/24c8a25f-3b4b-4750-b199-ea2c15bcd236-kube-api-access-zvw7h\") on node \"crc\" DevicePath \"\"" Sep 29 12:00:03 crc kubenswrapper[4991]: I0929 12:00:03.808399 4991 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24c8a25f-3b4b-4750-b199-ea2c15bcd236-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 29 12:00:04 crc kubenswrapper[4991]: I0929 12:00:04.161503 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319120-w5k9w" event={"ID":"24c8a25f-3b4b-4750-b199-ea2c15bcd236","Type":"ContainerDied","Data":"cb6a66febc2ca6f6f5b88bb70c48547ccd10f75bb8da06271232ced8305caf55"} Sep 29 12:00:04 crc kubenswrapper[4991]: I0929 12:00:04.161837 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb6a66febc2ca6f6f5b88bb70c48547ccd10f75bb8da06271232ced8305caf55" Sep 29 12:00:04 crc kubenswrapper[4991]: I0929 12:00:04.161559 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319120-w5k9w" Sep 29 12:00:04 crc kubenswrapper[4991]: I0929 12:00:04.731486 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf"] Sep 29 12:00:04 crc kubenswrapper[4991]: I0929 12:00:04.743260 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319075-zp9lf"] Sep 29 12:00:04 crc kubenswrapper[4991]: I0929 12:00:04.941289 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd47f281-9c56-43d9-a327-776f4fc18e68" path="/var/lib/kubelet/pods/bd47f281-9c56-43d9-a327-776f4fc18e68/volumes" Sep 29 12:00:27 crc kubenswrapper[4991]: I0929 12:00:27.151003 4991 scope.go:117] "RemoveContainer" containerID="6a55b492252da2d5d89395bf7680fca6a2f8464f3194c7973d2d719a3822371d" Sep 29 12:01:00 crc kubenswrapper[4991]: I0929 12:01:00.166163 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29319121-9z692"] Sep 29 12:01:00 crc kubenswrapper[4991]: E0929 12:01:00.167391 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24c8a25f-3b4b-4750-b199-ea2c15bcd236" containerName="collect-profiles" Sep 29 12:01:00 crc kubenswrapper[4991]: I0929 12:01:00.167409 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c8a25f-3b4b-4750-b199-ea2c15bcd236" containerName="collect-profiles" Sep 29 12:01:00 crc kubenswrapper[4991]: I0929 12:01:00.167672 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="24c8a25f-3b4b-4750-b199-ea2c15bcd236" containerName="collect-profiles" Sep 29 12:01:00 crc kubenswrapper[4991]: I0929 12:01:00.168657 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29319121-9z692" Sep 29 12:01:00 crc kubenswrapper[4991]: I0929 12:01:00.181656 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29319121-9z692"] Sep 29 12:01:00 crc kubenswrapper[4991]: I0929 12:01:00.316998 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b975c3b8-401b-49cd-be70-93912c9a61da-config-data\") pod \"keystone-cron-29319121-9z692\" (UID: \"b975c3b8-401b-49cd-be70-93912c9a61da\") " pod="openstack/keystone-cron-29319121-9z692" Sep 29 12:01:00 crc kubenswrapper[4991]: I0929 12:01:00.317330 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv5lg\" (UniqueName: \"kubernetes.io/projected/b975c3b8-401b-49cd-be70-93912c9a61da-kube-api-access-cv5lg\") pod \"keystone-cron-29319121-9z692\" (UID: \"b975c3b8-401b-49cd-be70-93912c9a61da\") " pod="openstack/keystone-cron-29319121-9z692" Sep 29 12:01:00 crc kubenswrapper[4991]: I0929 12:01:00.317465 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b975c3b8-401b-49cd-be70-93912c9a61da-combined-ca-bundle\") pod \"keystone-cron-29319121-9z692\" (UID: \"b975c3b8-401b-49cd-be70-93912c9a61da\") " pod="openstack/keystone-cron-29319121-9z692" Sep 29 12:01:00 crc kubenswrapper[4991]: I0929 12:01:00.317522 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b975c3b8-401b-49cd-be70-93912c9a61da-fernet-keys\") pod \"keystone-cron-29319121-9z692\" (UID: \"b975c3b8-401b-49cd-be70-93912c9a61da\") " pod="openstack/keystone-cron-29319121-9z692" Sep 29 12:01:00 crc kubenswrapper[4991]: I0929 12:01:00.419744 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b975c3b8-401b-49cd-be70-93912c9a61da-combined-ca-bundle\") pod \"keystone-cron-29319121-9z692\" (UID: \"b975c3b8-401b-49cd-be70-93912c9a61da\") " pod="openstack/keystone-cron-29319121-9z692" Sep 29 12:01:00 crc kubenswrapper[4991]: I0929 12:01:00.419877 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b975c3b8-401b-49cd-be70-93912c9a61da-fernet-keys\") pod \"keystone-cron-29319121-9z692\" (UID: \"b975c3b8-401b-49cd-be70-93912c9a61da\") " pod="openstack/keystone-cron-29319121-9z692" Sep 29 12:01:00 crc kubenswrapper[4991]: I0929 12:01:00.420034 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b975c3b8-401b-49cd-be70-93912c9a61da-config-data\") pod \"keystone-cron-29319121-9z692\" (UID: \"b975c3b8-401b-49cd-be70-93912c9a61da\") " pod="openstack/keystone-cron-29319121-9z692" Sep 29 12:01:00 crc kubenswrapper[4991]: I0929 12:01:00.420076 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv5lg\" (UniqueName: \"kubernetes.io/projected/b975c3b8-401b-49cd-be70-93912c9a61da-kube-api-access-cv5lg\") pod \"keystone-cron-29319121-9z692\" (UID: \"b975c3b8-401b-49cd-be70-93912c9a61da\") " pod="openstack/keystone-cron-29319121-9z692" Sep 29 12:01:00 crc kubenswrapper[4991]: I0929 12:01:00.426750 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b975c3b8-401b-49cd-be70-93912c9a61da-fernet-keys\") pod \"keystone-cron-29319121-9z692\" (UID: \"b975c3b8-401b-49cd-be70-93912c9a61da\") " pod="openstack/keystone-cron-29319121-9z692" Sep 29 12:01:00 crc kubenswrapper[4991]: I0929 12:01:00.426981 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b975c3b8-401b-49cd-be70-93912c9a61da-combined-ca-bundle\") pod \"keystone-cron-29319121-9z692\" (UID: \"b975c3b8-401b-49cd-be70-93912c9a61da\") " pod="openstack/keystone-cron-29319121-9z692" Sep 29 12:01:00 crc kubenswrapper[4991]: I0929 12:01:00.428388 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b975c3b8-401b-49cd-be70-93912c9a61da-config-data\") pod \"keystone-cron-29319121-9z692\" (UID: \"b975c3b8-401b-49cd-be70-93912c9a61da\") " pod="openstack/keystone-cron-29319121-9z692" Sep 29 12:01:00 crc kubenswrapper[4991]: I0929 12:01:00.441504 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv5lg\" (UniqueName: \"kubernetes.io/projected/b975c3b8-401b-49cd-be70-93912c9a61da-kube-api-access-cv5lg\") pod \"keystone-cron-29319121-9z692\" (UID: \"b975c3b8-401b-49cd-be70-93912c9a61da\") " pod="openstack/keystone-cron-29319121-9z692" Sep 29 12:01:00 crc kubenswrapper[4991]: I0929 12:01:00.486975 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29319121-9z692" Sep 29 12:01:01 crc kubenswrapper[4991]: I0929 12:01:01.007072 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29319121-9z692"] Sep 29 12:01:01 crc kubenswrapper[4991]: I0929 12:01:01.826569 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29319121-9z692" event={"ID":"b975c3b8-401b-49cd-be70-93912c9a61da","Type":"ContainerStarted","Data":"9d26e986ce9130436d2e99cd74644aec45f4a74d87cc0702ffe3f2bd6cfc33c1"} Sep 29 12:01:01 crc kubenswrapper[4991]: I0929 12:01:01.826880 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29319121-9z692" event={"ID":"b975c3b8-401b-49cd-be70-93912c9a61da","Type":"ContainerStarted","Data":"1747493e89c4fc082dd7491816e551482eadb87068f790626a1971c2a8124a79"} Sep 29 12:01:01 crc kubenswrapper[4991]: I0929 12:01:01.851912 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29319121-9z692" podStartSLOduration=1.851884729 podStartE2EDuration="1.851884729s" podCreationTimestamp="2025-09-29 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 12:01:01.844382523 +0000 UTC m=+8597.700310551" watchObservedRunningTime="2025-09-29 12:01:01.851884729 +0000 UTC m=+8597.707812767" Sep 29 12:01:04 crc kubenswrapper[4991]: I0929 12:01:04.873770 4991 generic.go:334] "Generic (PLEG): container finished" podID="b975c3b8-401b-49cd-be70-93912c9a61da" containerID="9d26e986ce9130436d2e99cd74644aec45f4a74d87cc0702ffe3f2bd6cfc33c1" exitCode=0 Sep 29 12:01:04 crc kubenswrapper[4991]: I0929 12:01:04.873852 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29319121-9z692" event={"ID":"b975c3b8-401b-49cd-be70-93912c9a61da","Type":"ContainerDied","Data":"9d26e986ce9130436d2e99cd74644aec45f4a74d87cc0702ffe3f2bd6cfc33c1"} Sep 29 12:01:06 crc kubenswrapper[4991]: 
I0929 12:01:06.304689 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29319121-9z692" Sep 29 12:01:06 crc kubenswrapper[4991]: I0929 12:01:06.372573 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b975c3b8-401b-49cd-be70-93912c9a61da-config-data\") pod \"b975c3b8-401b-49cd-be70-93912c9a61da\" (UID: \"b975c3b8-401b-49cd-be70-93912c9a61da\") " Sep 29 12:01:06 crc kubenswrapper[4991]: I0929 12:01:06.372810 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b975c3b8-401b-49cd-be70-93912c9a61da-fernet-keys\") pod \"b975c3b8-401b-49cd-be70-93912c9a61da\" (UID: \"b975c3b8-401b-49cd-be70-93912c9a61da\") " Sep 29 12:01:06 crc kubenswrapper[4991]: I0929 12:01:06.372902 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv5lg\" (UniqueName: \"kubernetes.io/projected/b975c3b8-401b-49cd-be70-93912c9a61da-kube-api-access-cv5lg\") pod \"b975c3b8-401b-49cd-be70-93912c9a61da\" (UID: \"b975c3b8-401b-49cd-be70-93912c9a61da\") " Sep 29 12:01:06 crc kubenswrapper[4991]: I0929 12:01:06.373002 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b975c3b8-401b-49cd-be70-93912c9a61da-combined-ca-bundle\") pod \"b975c3b8-401b-49cd-be70-93912c9a61da\" (UID: \"b975c3b8-401b-49cd-be70-93912c9a61da\") " Sep 29 12:01:06 crc kubenswrapper[4991]: I0929 12:01:06.387125 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b975c3b8-401b-49cd-be70-93912c9a61da-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b975c3b8-401b-49cd-be70-93912c9a61da" (UID: "b975c3b8-401b-49cd-be70-93912c9a61da"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 12:01:06 crc kubenswrapper[4991]: I0929 12:01:06.387226 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b975c3b8-401b-49cd-be70-93912c9a61da-kube-api-access-cv5lg" (OuterVolumeSpecName: "kube-api-access-cv5lg") pod "b975c3b8-401b-49cd-be70-93912c9a61da" (UID: "b975c3b8-401b-49cd-be70-93912c9a61da"). InnerVolumeSpecName "kube-api-access-cv5lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 12:01:06 crc kubenswrapper[4991]: I0929 12:01:06.415510 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b975c3b8-401b-49cd-be70-93912c9a61da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b975c3b8-401b-49cd-be70-93912c9a61da" (UID: "b975c3b8-401b-49cd-be70-93912c9a61da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 12:01:06 crc kubenswrapper[4991]: I0929 12:01:06.444537 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b975c3b8-401b-49cd-be70-93912c9a61da-config-data" (OuterVolumeSpecName: "config-data") pod "b975c3b8-401b-49cd-be70-93912c9a61da" (UID: "b975c3b8-401b-49cd-be70-93912c9a61da"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 12:01:06 crc kubenswrapper[4991]: I0929 12:01:06.476501 4991 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b975c3b8-401b-49cd-be70-93912c9a61da-fernet-keys\") on node \"crc\" DevicePath \"\"" Sep 29 12:01:06 crc kubenswrapper[4991]: I0929 12:01:06.476548 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv5lg\" (UniqueName: \"kubernetes.io/projected/b975c3b8-401b-49cd-be70-93912c9a61da-kube-api-access-cv5lg\") on node \"crc\" DevicePath \"\"" Sep 29 12:01:06 crc kubenswrapper[4991]: I0929 12:01:06.476564 4991 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b975c3b8-401b-49cd-be70-93912c9a61da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 29 12:01:06 crc kubenswrapper[4991]: I0929 12:01:06.476576 4991 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b975c3b8-401b-49cd-be70-93912c9a61da-config-data\") on node \"crc\" DevicePath \"\"" Sep 29 12:01:06 crc kubenswrapper[4991]: I0929 12:01:06.895681 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29319121-9z692" event={"ID":"b975c3b8-401b-49cd-be70-93912c9a61da","Type":"ContainerDied","Data":"1747493e89c4fc082dd7491816e551482eadb87068f790626a1971c2a8124a79"} Sep 29 12:01:06 crc kubenswrapper[4991]: I0929 12:01:06.895760 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1747493e89c4fc082dd7491816e551482eadb87068f790626a1971c2a8124a79" Sep 29 12:01:06 crc kubenswrapper[4991]: I0929 12:01:06.895762 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29319121-9z692" Sep 29 12:01:07 crc kubenswrapper[4991]: I0929 12:01:07.946874 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 12:01:07 crc kubenswrapper[4991]: I0929 12:01:07.947274 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 12:01:37 crc kubenswrapper[4991]: I0929 12:01:37.946569 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 12:01:37 crc kubenswrapper[4991]: I0929 12:01:37.947469 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 12:02:07 crc kubenswrapper[4991]: I0929 12:02:07.947168 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 12:02:07 crc kubenswrapper[4991]: I0929 12:02:07.947711 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 12:02:07 crc kubenswrapper[4991]: I0929 12:02:07.947753 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 12:02:07 crc kubenswrapper[4991]: I0929 12:02:07.948665 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8e19d040b6e841ece3d54e9ed8988f8bed6f11f22b08f090ee43dabd4a63f94d"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 12:02:07 crc kubenswrapper[4991]: I0929 12:02:07.948720 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://8e19d040b6e841ece3d54e9ed8988f8bed6f11f22b08f090ee43dabd4a63f94d" gracePeriod=600 Sep 29 12:02:08 crc kubenswrapper[4991]: I0929 12:02:08.598117 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="8e19d040b6e841ece3d54e9ed8988f8bed6f11f22b08f090ee43dabd4a63f94d" exitCode=0 Sep 29 12:02:08 crc kubenswrapper[4991]: I0929 12:02:08.598205 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"8e19d040b6e841ece3d54e9ed8988f8bed6f11f22b08f090ee43dabd4a63f94d"} Sep 29 12:02:08 crc kubenswrapper[4991]: I0929 12:02:08.598806 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b"} Sep 29 12:02:08 crc kubenswrapper[4991]: I0929 12:02:08.598837 4991 scope.go:117] "RemoveContainer" containerID="558c85f57f840c0de53830a2a0472c0aa2b808cbf7c4a64296caa7a82578ced6" Sep 29 12:02:27 crc kubenswrapper[4991]: I0929 12:02:27.265444 4991 scope.go:117] "RemoveContainer" containerID="264f9fb1efe457dc8f42774422d9e70e15bc5fa190434c18e19b229b215967cc" Sep 29 12:02:27 crc kubenswrapper[4991]: I0929 12:02:27.300783 4991 scope.go:117] "RemoveContainer" containerID="0c31914fe0a8c883b1b8b1782c55aa932177a69465b93f51e81c4c50e06449ce" Sep 29 12:02:27 crc kubenswrapper[4991]: I0929 12:02:27.351682 4991 scope.go:117] "RemoveContainer" containerID="7110f2f08617aee4ef9baeeff957446b8f58cc57c4162d0e267ab779be45dc61" Sep 29 12:04:37 crc kubenswrapper[4991]: I0929 12:04:37.947135 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Sep 29 12:04:37 crc kubenswrapper[4991]: I0929 12:04:37.947891 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 12:05:07 crc kubenswrapper[4991]: I0929 12:05:07.946789 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 12:05:07 crc kubenswrapper[4991]: I0929 12:05:07.947410 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 12:05:27 crc kubenswrapper[4991]: I0929 12:05:27.500534 4991 scope.go:117] "RemoveContainer" containerID="517eb3970e62971042f5c7c248276ed2ba4086a587c5a4fdb5891cb871b27db6" Sep 29 12:05:27 crc kubenswrapper[4991]: I0929 12:05:27.543185 4991 scope.go:117] "RemoveContainer" containerID="b9a4ef79486f0b8d8aa681e00c6bbbd1aa90c3729a76a39ea088fd18fe255370" Sep 29 12:05:27 crc kubenswrapper[4991]: I0929 12:05:27.633613 4991 scope.go:117] "RemoveContainer" containerID="0a57d43e8269fdf94fba0282c46b2e68532819f3a0ed296a3080f627f1233be4" Sep 29 12:05:37 crc kubenswrapper[4991]: I0929 12:05:37.947555 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 12:05:37 crc kubenswrapper[4991]: I0929 12:05:37.948284 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 12:05:37 crc kubenswrapper[4991]: I0929 12:05:37.948349 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 12:05:37 crc kubenswrapper[4991]: I0929 12:05:37.950056 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 12:05:37 crc kubenswrapper[4991]: I0929 12:05:37.950306 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" gracePeriod=600 Sep 29 12:05:38 crc kubenswrapper[4991]: E0929 12:05:38.072298 4991 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:05:39 crc kubenswrapper[4991]: I0929 12:05:39.006652 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" exitCode=0 Sep 29 12:05:39 crc kubenswrapper[4991]: I0929 12:05:39.006703 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b"} Sep 29 12:05:39 crc kubenswrapper[4991]: I0929 12:05:39.006743 4991 scope.go:117] "RemoveContainer" containerID="8e19d040b6e841ece3d54e9ed8988f8bed6f11f22b08f090ee43dabd4a63f94d" Sep 29 12:05:39 crc kubenswrapper[4991]: I0929 12:05:39.007624 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:05:39 crc kubenswrapper[4991]: E0929 12:05:39.007923 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:05:51 crc kubenswrapper[4991]: I0929 12:05:51.030446 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kr9j5"] Sep 29 12:05:51 crc kubenswrapper[4991]: E0929 12:05:51.031519 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b975c3b8-401b-49cd-be70-93912c9a61da" containerName="keystone-cron" Sep 29 12:05:51 crc kubenswrapper[4991]: I0929 12:05:51.031532 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="b975c3b8-401b-49cd-be70-93912c9a61da" containerName="keystone-cron" Sep 29 12:05:51 crc kubenswrapper[4991]: I0929 12:05:51.031737 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="b975c3b8-401b-49cd-be70-93912c9a61da" containerName="keystone-cron" Sep 29 12:05:51 crc kubenswrapper[4991]: I0929 12:05:51.035631 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kr9j5" Sep 29 12:05:51 crc kubenswrapper[4991]: I0929 12:05:51.048702 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kr9j5"] Sep 29 12:05:51 crc kubenswrapper[4991]: I0929 12:05:51.165736 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b4943b5-2b64-4e61-af77-bb78b724cb67-utilities\") pod \"community-operators-kr9j5\" (UID: \"1b4943b5-2b64-4e61-af77-bb78b724cb67\") " pod="openshift-marketplace/community-operators-kr9j5" Sep 29 12:05:51 crc kubenswrapper[4991]: I0929 12:05:51.165798 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xczhh\" (UniqueName: \"kubernetes.io/projected/1b4943b5-2b64-4e61-af77-bb78b724cb67-kube-api-access-xczhh\") pod \"community-operators-kr9j5\" (UID: \"1b4943b5-2b64-4e61-af77-bb78b724cb67\") " pod="openshift-marketplace/community-operators-kr9j5" Sep 29 12:05:51 crc kubenswrapper[4991]: I0929 12:05:51.165845 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b4943b5-2b64-4e61-af77-bb78b724cb67-catalog-content\") pod \"community-operators-kr9j5\" (UID: \"1b4943b5-2b64-4e61-af77-bb78b724cb67\") " pod="openshift-marketplace/community-operators-kr9j5" Sep 29 12:05:51 crc kubenswrapper[4991]: I0929 12:05:51.268747 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b4943b5-2b64-4e61-af77-bb78b724cb67-utilities\") pod \"community-operators-kr9j5\" (UID: \"1b4943b5-2b64-4e61-af77-bb78b724cb67\") " pod="openshift-marketplace/community-operators-kr9j5" Sep 29 12:05:51 crc kubenswrapper[4991]: I0929 12:05:51.269056 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xczhh\" (UniqueName: \"kubernetes.io/projected/1b4943b5-2b64-4e61-af77-bb78b724cb67-kube-api-access-xczhh\") pod \"community-operators-kr9j5\" (UID: \"1b4943b5-2b64-4e61-af77-bb78b724cb67\") " pod="openshift-marketplace/community-operators-kr9j5" Sep 29 12:05:51 crc kubenswrapper[4991]: I0929 12:05:51.269098 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b4943b5-2b64-4e61-af77-bb78b724cb67-catalog-content\") pod \"community-operators-kr9j5\" (UID: \"1b4943b5-2b64-4e61-af77-bb78b724cb67\") " pod="openshift-marketplace/community-operators-kr9j5" Sep 29 12:05:51 crc kubenswrapper[4991]: I0929 12:05:51.269318 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b4943b5-2b64-4e61-af77-bb78b724cb67-utilities\") pod \"community-operators-kr9j5\" (UID: \"1b4943b5-2b64-4e61-af77-bb78b724cb67\") " pod="openshift-marketplace/community-operators-kr9j5" Sep 29 12:05:51 crc kubenswrapper[4991]: I0929 12:05:51.269567 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b4943b5-2b64-4e61-af77-bb78b724cb67-catalog-content\") pod \"community-operators-kr9j5\" (UID: \"1b4943b5-2b64-4e61-af77-bb78b724cb67\") " pod="openshift-marketplace/community-operators-kr9j5" Sep 29 12:05:51 crc kubenswrapper[4991]: I0929 12:05:51.287160 4991 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xczhh\" (UniqueName: \"kubernetes.io/projected/1b4943b5-2b64-4e61-af77-bb78b724cb67-kube-api-access-xczhh\") pod \"community-operators-kr9j5\" (UID: \"1b4943b5-2b64-4e61-af77-bb78b724cb67\") " pod="openshift-marketplace/community-operators-kr9j5" Sep 29 12:05:51 crc kubenswrapper[4991]: I0929 12:05:51.371636 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kr9j5" Sep 29 12:05:51 crc kubenswrapper[4991]: I0929 12:05:51.891241 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kr9j5"] Sep 29 12:05:52 crc kubenswrapper[4991]: I0929 12:05:52.162308 4991 generic.go:334] "Generic (PLEG): container finished" podID="1b4943b5-2b64-4e61-af77-bb78b724cb67" containerID="b2c3cff3431ad2dbcd2b88c5dc5276aeaa9bb86c27ca2117a8b1e3e8493f39ac" exitCode=0 Sep 29 12:05:52 crc kubenswrapper[4991]: I0929 12:05:52.162384 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kr9j5" event={"ID":"1b4943b5-2b64-4e61-af77-bb78b724cb67","Type":"ContainerDied","Data":"b2c3cff3431ad2dbcd2b88c5dc5276aeaa9bb86c27ca2117a8b1e3e8493f39ac"} Sep 29 12:05:52 crc kubenswrapper[4991]: I0929 12:05:52.163448 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kr9j5" event={"ID":"1b4943b5-2b64-4e61-af77-bb78b724cb67","Type":"ContainerStarted","Data":"adfb1c06a1bf0fd430eba9e3f77eceb16c2334c27eb979332f8b6e48d0fabe4c"} Sep 29 12:05:52 crc kubenswrapper[4991]: I0929 12:05:52.165870 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 29 12:05:53 crc kubenswrapper[4991]: I0929 12:05:53.174318 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kr9j5" event={"ID":"1b4943b5-2b64-4e61-af77-bb78b724cb67","Type":"ContainerStarted","Data":"b5ad75e0895f1000ed0627245456f3cebc77460573778c22a6ab67acc4673285"} Sep 29 12:05:53 crc kubenswrapper[4991]: I0929 12:05:53.927051 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:05:53 crc kubenswrapper[4991]: E0929 12:05:53.927388 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:05:54 crc kubenswrapper[4991]: I0929 12:05:54.186659 4991 generic.go:334] "Generic (PLEG): container finished" podID="1b4943b5-2b64-4e61-af77-bb78b724cb67" containerID="b5ad75e0895f1000ed0627245456f3cebc77460573778c22a6ab67acc4673285" exitCode=0 Sep 29 12:05:54 crc kubenswrapper[4991]: I0929 12:05:54.186734 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kr9j5" event={"ID":"1b4943b5-2b64-4e61-af77-bb78b724cb67","Type":"ContainerDied","Data":"b5ad75e0895f1000ed0627245456f3cebc77460573778c22a6ab67acc4673285"} Sep 29 12:05:55 crc kubenswrapper[4991]: I0929 12:05:55.236259 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kr9j5" 
event={"ID":"1b4943b5-2b64-4e61-af77-bb78b724cb67","Type":"ContainerStarted","Data":"2178a5a98186f534bbd671e934029014659f4ca4b48797b34d6541d8b57633b5"} Sep 29 12:05:55 crc kubenswrapper[4991]: I0929 12:05:55.268402 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kr9j5" podStartSLOduration=2.719636803 podStartE2EDuration="5.268380328s" podCreationTimestamp="2025-09-29 12:05:50 +0000 UTC" firstStartedPulling="2025-09-29 12:05:52.165607142 +0000 UTC m=+8888.021535170" lastFinishedPulling="2025-09-29 12:05:54.714350647 +0000 UTC m=+8890.570278695" observedRunningTime="2025-09-29 12:05:55.256476198 +0000 UTC m=+8891.112404266" watchObservedRunningTime="2025-09-29 12:05:55.268380328 +0000 UTC m=+8891.124308356" Sep 29 12:06:01 crc kubenswrapper[4991]: I0929 12:06:01.372149 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kr9j5" Sep 29 12:06:01 crc kubenswrapper[4991]: I0929 12:06:01.374690 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kr9j5" Sep 29 12:06:01 crc kubenswrapper[4991]: I0929 12:06:01.427416 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kr9j5" Sep 29 12:06:02 crc kubenswrapper[4991]: I0929 12:06:02.400285 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kr9j5" Sep 29 12:06:02 crc kubenswrapper[4991]: I0929 12:06:02.455159 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kr9j5"] Sep 29 12:06:04 crc kubenswrapper[4991]: I0929 12:06:04.339497 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kr9j5" podUID="1b4943b5-2b64-4e61-af77-bb78b724cb67" containerName="registry-server" containerID="cri-o://2178a5a98186f534bbd671e934029014659f4ca4b48797b34d6541d8b57633b5" gracePeriod=2 Sep 29 12:06:04 crc kubenswrapper[4991]: I0929 12:06:04.898226 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kr9j5" Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.016875 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xczhh\" (UniqueName: \"kubernetes.io/projected/1b4943b5-2b64-4e61-af77-bb78b724cb67-kube-api-access-xczhh\") pod \"1b4943b5-2b64-4e61-af77-bb78b724cb67\" (UID: \"1b4943b5-2b64-4e61-af77-bb78b724cb67\") " Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.017160 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b4943b5-2b64-4e61-af77-bb78b724cb67-catalog-content\") pod \"1b4943b5-2b64-4e61-af77-bb78b724cb67\" (UID: \"1b4943b5-2b64-4e61-af77-bb78b724cb67\") " Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.017330 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b4943b5-2b64-4e61-af77-bb78b724cb67-utilities\") pod \"1b4943b5-2b64-4e61-af77-bb78b724cb67\" (UID: \"1b4943b5-2b64-4e61-af77-bb78b724cb67\") " Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.018703 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b4943b5-2b64-4e61-af77-bb78b724cb67-utilities" (OuterVolumeSpecName: "utilities") pod "1b4943b5-2b64-4e61-af77-bb78b724cb67" (UID: "1b4943b5-2b64-4e61-af77-bb78b724cb67"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.023342 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b4943b5-2b64-4e61-af77-bb78b724cb67-kube-api-access-xczhh" (OuterVolumeSpecName: "kube-api-access-xczhh") pod "1b4943b5-2b64-4e61-af77-bb78b724cb67" (UID: "1b4943b5-2b64-4e61-af77-bb78b724cb67"). InnerVolumeSpecName "kube-api-access-xczhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.072190 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b4943b5-2b64-4e61-af77-bb78b724cb67-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b4943b5-2b64-4e61-af77-bb78b724cb67" (UID: "1b4943b5-2b64-4e61-af77-bb78b724cb67"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.120208 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b4943b5-2b64-4e61-af77-bb78b724cb67-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.120476 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b4943b5-2b64-4e61-af77-bb78b724cb67-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.120556 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xczhh\" (UniqueName: \"kubernetes.io/projected/1b4943b5-2b64-4e61-af77-bb78b724cb67-kube-api-access-xczhh\") on node \"crc\" DevicePath \"\"" Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.352873 4991 generic.go:334] "Generic (PLEG): container finished" podID="1b4943b5-2b64-4e61-af77-bb78b724cb67" containerID="2178a5a98186f534bbd671e934029014659f4ca4b48797b34d6541d8b57633b5" exitCode=0 Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.353013 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kr9j5" Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.352972 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kr9j5" event={"ID":"1b4943b5-2b64-4e61-af77-bb78b724cb67","Type":"ContainerDied","Data":"2178a5a98186f534bbd671e934029014659f4ca4b48797b34d6541d8b57633b5"} Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.353432 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kr9j5" event={"ID":"1b4943b5-2b64-4e61-af77-bb78b724cb67","Type":"ContainerDied","Data":"adfb1c06a1bf0fd430eba9e3f77eceb16c2334c27eb979332f8b6e48d0fabe4c"} Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.353472 4991 scope.go:117] "RemoveContainer" containerID="2178a5a98186f534bbd671e934029014659f4ca4b48797b34d6541d8b57633b5" Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.381126 4991 scope.go:117] "RemoveContainer" containerID="b5ad75e0895f1000ed0627245456f3cebc77460573778c22a6ab67acc4673285" Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.422242 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kr9j5"] Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.435834 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kr9j5"] Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.478224 4991 scope.go:117] "RemoveContainer" containerID="b2c3cff3431ad2dbcd2b88c5dc5276aeaa9bb86c27ca2117a8b1e3e8493f39ac" Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.508469 4991 scope.go:117] "RemoveContainer" containerID="2178a5a98186f534bbd671e934029014659f4ca4b48797b34d6541d8b57633b5" Sep 29 12:06:05 crc kubenswrapper[4991]: E0929 12:06:05.509227 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2178a5a98186f534bbd671e934029014659f4ca4b48797b34d6541d8b57633b5\": container with ID starting with 2178a5a98186f534bbd671e934029014659f4ca4b48797b34d6541d8b57633b5 not found: ID does not exist" containerID="2178a5a98186f534bbd671e934029014659f4ca4b48797b34d6541d8b57633b5" Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.509276 
4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2178a5a98186f534bbd671e934029014659f4ca4b48797b34d6541d8b57633b5"} err="failed to get container status \"2178a5a98186f534bbd671e934029014659f4ca4b48797b34d6541d8b57633b5\": rpc error: code = NotFound desc = could not find container \"2178a5a98186f534bbd671e934029014659f4ca4b48797b34d6541d8b57633b5\": container with ID starting with 2178a5a98186f534bbd671e934029014659f4ca4b48797b34d6541d8b57633b5 not found: ID does not exist" Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.509308 4991 scope.go:117] "RemoveContainer" containerID="b5ad75e0895f1000ed0627245456f3cebc77460573778c22a6ab67acc4673285" Sep 29 12:06:05 crc kubenswrapper[4991]: E0929 12:06:05.509777 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5ad75e0895f1000ed0627245456f3cebc77460573778c22a6ab67acc4673285\": container with ID starting with b5ad75e0895f1000ed0627245456f3cebc77460573778c22a6ab67acc4673285 not found: ID does not exist" containerID="b5ad75e0895f1000ed0627245456f3cebc77460573778c22a6ab67acc4673285" Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.509816 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5ad75e0895f1000ed0627245456f3cebc77460573778c22a6ab67acc4673285"} err="failed to get container status \"b5ad75e0895f1000ed0627245456f3cebc77460573778c22a6ab67acc4673285\": rpc error: code = NotFound desc = could not find container \"b5ad75e0895f1000ed0627245456f3cebc77460573778c22a6ab67acc4673285\": container with ID starting with b5ad75e0895f1000ed0627245456f3cebc77460573778c22a6ab67acc4673285 not found: ID does not exist" Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.509843 4991 scope.go:117] "RemoveContainer" containerID="b2c3cff3431ad2dbcd2b88c5dc5276aeaa9bb86c27ca2117a8b1e3e8493f39ac" Sep 29 12:06:05 crc kubenswrapper[4991]: E0929 12:06:05.510218 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2c3cff3431ad2dbcd2b88c5dc5276aeaa9bb86c27ca2117a8b1e3e8493f39ac\": container with ID starting with b2c3cff3431ad2dbcd2b88c5dc5276aeaa9bb86c27ca2117a8b1e3e8493f39ac not found: ID does not exist" containerID="b2c3cff3431ad2dbcd2b88c5dc5276aeaa9bb86c27ca2117a8b1e3e8493f39ac" Sep 29 12:06:05 crc kubenswrapper[4991]: I0929 12:06:05.510244 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2c3cff3431ad2dbcd2b88c5dc5276aeaa9bb86c27ca2117a8b1e3e8493f39ac"} err="failed to get container status \"b2c3cff3431ad2dbcd2b88c5dc5276aeaa9bb86c27ca2117a8b1e3e8493f39ac\": rpc error: code = NotFound desc = could not find container \"b2c3cff3431ad2dbcd2b88c5dc5276aeaa9bb86c27ca2117a8b1e3e8493f39ac\": container with ID starting with b2c3cff3431ad2dbcd2b88c5dc5276aeaa9bb86c27ca2117a8b1e3e8493f39ac not found: ID does not exist" Sep 29 12:06:06 crc kubenswrapper[4991]: I0929 12:06:06.926433 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:06:06 crc kubenswrapper[4991]: E0929 12:06:06.926984 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:06:06 crc kubenswrapper[4991]: I0929 12:06:06.942841 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b4943b5-2b64-4e61-af77-bb78b724cb67" path="/var/lib/kubelet/pods/1b4943b5-2b64-4e61-af77-bb78b724cb67/volumes" Sep 29 12:06:18 crc kubenswrapper[4991]: I0929 12:06:18.926752 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:06:18 crc kubenswrapper[4991]: E0929 12:06:18.927694 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:06:29 crc kubenswrapper[4991]: I0929 12:06:29.926699 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:06:29 crc kubenswrapper[4991]: E0929 12:06:29.927527 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:06:43 crc kubenswrapper[4991]: I0929 12:06:43.926801 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:06:43 crc kubenswrapper[4991]: E0929 12:06:43.927570 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:06:54 crc kubenswrapper[4991]: I0929 12:06:54.934769 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:06:54 crc kubenswrapper[4991]: E0929 12:06:54.937018 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:07:05 crc kubenswrapper[4991]: I0929 12:07:05.927135 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:07:05 crc kubenswrapper[4991]: E0929 12:07:05.927907 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:07:13 crc kubenswrapper[4991]: I0929 12:07:13.792215 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9nkxn"] Sep 29 12:07:13 crc kubenswrapper[4991]: E0929 12:07:13.794673 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b4943b5-2b64-4e61-af77-bb78b724cb67" containerName="extract-content" Sep 29 12:07:13 crc kubenswrapper[4991]: I0929 12:07:13.794699 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b4943b5-2b64-4e61-af77-bb78b724cb67" containerName="extract-content" Sep 29 12:07:13 crc kubenswrapper[4991]: E0929 12:07:13.794711 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b4943b5-2b64-4e61-af77-bb78b724cb67" containerName="extract-utilities" Sep 29 12:07:13 crc kubenswrapper[4991]: I0929 12:07:13.794719 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b4943b5-2b64-4e61-af77-bb78b724cb67" containerName="extract-utilities" Sep 29 12:07:13 crc kubenswrapper[4991]: E0929 12:07:13.794791 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b4943b5-2b64-4e61-af77-bb78b724cb67" containerName="registry-server" Sep 29 12:07:13 crc kubenswrapper[4991]: I0929 12:07:13.794798 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b4943b5-2b64-4e61-af77-bb78b724cb67" containerName="registry-server" Sep 29 12:07:13 crc kubenswrapper[4991]: I0929 12:07:13.795478 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b4943b5-2b64-4e61-af77-bb78b724cb67" containerName="registry-server" Sep 29 12:07:13 crc kubenswrapper[4991]: I0929 12:07:13.797817 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9nkxn" Sep 29 12:07:13 crc kubenswrapper[4991]: I0929 12:07:13.806620 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9nkxn"] Sep 29 12:07:13 crc kubenswrapper[4991]: I0929 12:07:13.947221 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp8kr\" (UniqueName: \"kubernetes.io/projected/bf811574-1678-4eda-9979-92731f78d2fc-kube-api-access-vp8kr\") pod \"certified-operators-9nkxn\" (UID: \"bf811574-1678-4eda-9979-92731f78d2fc\") " pod="openshift-marketplace/certified-operators-9nkxn" Sep 29 12:07:13 crc kubenswrapper[4991]: I0929 12:07:13.947297 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf811574-1678-4eda-9979-92731f78d2fc-catalog-content\") pod \"certified-operators-9nkxn\" (UID: \"bf811574-1678-4eda-9979-92731f78d2fc\") " pod="openshift-marketplace/certified-operators-9nkxn" Sep 29 12:07:13 crc kubenswrapper[4991]: I0929 12:07:13.947327 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf811574-1678-4eda-9979-92731f78d2fc-utilities\") pod \"certified-operators-9nkxn\" (UID: \"bf811574-1678-4eda-9979-92731f78d2fc\") " pod="openshift-marketplace/certified-operators-9nkxn" Sep 29 12:07:14 crc kubenswrapper[4991]: I0929 12:07:14.049449 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp8kr\" (UniqueName: \"kubernetes.io/projected/bf811574-1678-4eda-9979-92731f78d2fc-kube-api-access-vp8kr\") pod \"certified-operators-9nkxn\" (UID: \"bf811574-1678-4eda-9979-92731f78d2fc\") " pod="openshift-marketplace/certified-operators-9nkxn" Sep 29 12:07:14 crc kubenswrapper[4991]: I0929 12:07:14.049527 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf811574-1678-4eda-9979-92731f78d2fc-catalog-content\") pod \"certified-operators-9nkxn\" (UID: \"bf811574-1678-4eda-9979-92731f78d2fc\") " pod="openshift-marketplace/certified-operators-9nkxn" Sep 29 12:07:14 crc kubenswrapper[4991]: I0929 12:07:14.049548 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf811574-1678-4eda-9979-92731f78d2fc-utilities\") pod \"certified-operators-9nkxn\" (UID: \"bf811574-1678-4eda-9979-92731f78d2fc\") " pod="openshift-marketplace/certified-operators-9nkxn" Sep 29 12:07:14 crc kubenswrapper[4991]: I0929 12:07:14.050241 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf811574-1678-4eda-9979-92731f78d2fc-utilities\") pod \"certified-operators-9nkxn\" (UID: \"bf811574-1678-4eda-9979-92731f78d2fc\") " pod="openshift-marketplace/certified-operators-9nkxn" Sep 29 12:07:14 crc kubenswrapper[4991]: I0929 12:07:14.050357 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf811574-1678-4eda-9979-92731f78d2fc-catalog-content\") pod \"certified-operators-9nkxn\" (UID: \"bf811574-1678-4eda-9979-92731f78d2fc\") " pod="openshift-marketplace/certified-operators-9nkxn" Sep 29 12:07:14 crc kubenswrapper[4991]: I0929 12:07:14.076389 4991 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vp8kr\" (UniqueName: \"kubernetes.io/projected/bf811574-1678-4eda-9979-92731f78d2fc-kube-api-access-vp8kr\") pod \"certified-operators-9nkxn\" (UID: \"bf811574-1678-4eda-9979-92731f78d2fc\") " pod="openshift-marketplace/certified-operators-9nkxn" Sep 29 12:07:14 crc kubenswrapper[4991]: I0929 12:07:14.139871 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nkxn" Sep 29 12:07:14 crc kubenswrapper[4991]: I0929 12:07:14.681408 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9nkxn"] Sep 29 12:07:15 crc kubenswrapper[4991]: I0929 12:07:15.142173 4991 generic.go:334] "Generic (PLEG): container finished" podID="bf811574-1678-4eda-9979-92731f78d2fc" containerID="fa1fc54591ad89d62a2ab194c6d5e380e6e22a2356b537f383895d10acf3f27d" exitCode=0 Sep 29 12:07:15 crc kubenswrapper[4991]: I0929 12:07:15.142241 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nkxn" event={"ID":"bf811574-1678-4eda-9979-92731f78d2fc","Type":"ContainerDied","Data":"fa1fc54591ad89d62a2ab194c6d5e380e6e22a2356b537f383895d10acf3f27d"} Sep 29 12:07:15 crc kubenswrapper[4991]: I0929 12:07:15.142547 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nkxn" event={"ID":"bf811574-1678-4eda-9979-92731f78d2fc","Type":"ContainerStarted","Data":"3d3a6a9b54b5d46b54086af18d565522fc696a4fd123bf73d929603e0220475f"} Sep 29 12:07:16 crc kubenswrapper[4991]: I0929 12:07:16.166475 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nkxn" event={"ID":"bf811574-1678-4eda-9979-92731f78d2fc","Type":"ContainerStarted","Data":"ac4314fb71770607f1b7a66cdbc148c151e5092dce3af3bacaaf8f123af251e5"} Sep 29 12:07:18 crc kubenswrapper[4991]: I0929 12:07:18.190914 4991 generic.go:334] "Generic (PLEG): container finished" podID="bf811574-1678-4eda-9979-92731f78d2fc" containerID="ac4314fb71770607f1b7a66cdbc148c151e5092dce3af3bacaaf8f123af251e5" exitCode=0 Sep 29 12:07:18 crc kubenswrapper[4991]: I0929 12:07:18.191017 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nkxn" event={"ID":"bf811574-1678-4eda-9979-92731f78d2fc","Type":"ContainerDied","Data":"ac4314fb71770607f1b7a66cdbc148c151e5092dce3af3bacaaf8f123af251e5"} Sep 29 12:07:19 crc kubenswrapper[4991]: I0929 12:07:19.204522 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nkxn" event={"ID":"bf811574-1678-4eda-9979-92731f78d2fc","Type":"ContainerStarted","Data":"b2a4581a93ac4371ae91349b796b51da7969325ad196bd6388bd5338edc00197"} Sep 29 12:07:19 crc kubenswrapper[4991]: I0929 12:07:19.242201 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9nkxn" podStartSLOduration=2.545920349 podStartE2EDuration="6.242178228s" podCreationTimestamp="2025-09-29 12:07:13 +0000 UTC" firstStartedPulling="2025-09-29 12:07:15.144021591 +0000 UTC m=+8970.999949619" lastFinishedPulling="2025-09-29 12:07:18.84027947 +0000 UTC m=+8974.696207498" observedRunningTime="2025-09-29 12:07:19.233191923 +0000 UTC m=+8975.089119951" watchObservedRunningTime="2025-09-29 12:07:19.242178228 +0000 UTC m=+8975.098106256" Sep 29 12:07:20 crc kubenswrapper[4991]: I0929 12:07:20.927555 4991 scope.go:117] "RemoveContainer" 
containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:07:20 crc kubenswrapper[4991]: E0929 12:07:20.928365 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:07:24 crc kubenswrapper[4991]: I0929 12:07:24.140921 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9nkxn" Sep 29 12:07:24 crc kubenswrapper[4991]: I0929 12:07:24.141713 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9nkxn" Sep 29 12:07:24 crc kubenswrapper[4991]: I0929 12:07:24.236175 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9nkxn" Sep 29 12:07:24 crc kubenswrapper[4991]: I0929 12:07:24.331672 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9nkxn" Sep 29 12:07:24 crc kubenswrapper[4991]: I0929 12:07:24.486638 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9nkxn"] Sep 29 12:07:26 crc kubenswrapper[4991]: I0929 12:07:26.302033 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9nkxn" podUID="bf811574-1678-4eda-9979-92731f78d2fc" containerName="registry-server" containerID="cri-o://b2a4581a93ac4371ae91349b796b51da7969325ad196bd6388bd5338edc00197" gracePeriod=2 Sep 29 12:07:26 crc kubenswrapper[4991]: I0929 12:07:26.885541 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nkxn" Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.002829 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp8kr\" (UniqueName: \"kubernetes.io/projected/bf811574-1678-4eda-9979-92731f78d2fc-kube-api-access-vp8kr\") pod \"bf811574-1678-4eda-9979-92731f78d2fc\" (UID: \"bf811574-1678-4eda-9979-92731f78d2fc\") " Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.003076 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf811574-1678-4eda-9979-92731f78d2fc-catalog-content\") pod \"bf811574-1678-4eda-9979-92731f78d2fc\" (UID: \"bf811574-1678-4eda-9979-92731f78d2fc\") " Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.003142 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf811574-1678-4eda-9979-92731f78d2fc-utilities\") pod \"bf811574-1678-4eda-9979-92731f78d2fc\" (UID: \"bf811574-1678-4eda-9979-92731f78d2fc\") " Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.004550 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf811574-1678-4eda-9979-92731f78d2fc-utilities" (OuterVolumeSpecName: "utilities") pod "bf811574-1678-4eda-9979-92731f78d2fc" (UID: "bf811574-1678-4eda-9979-92731f78d2fc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.013099 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf811574-1678-4eda-9979-92731f78d2fc-kube-api-access-vp8kr" (OuterVolumeSpecName: "kube-api-access-vp8kr") pod "bf811574-1678-4eda-9979-92731f78d2fc" (UID: "bf811574-1678-4eda-9979-92731f78d2fc"). InnerVolumeSpecName "kube-api-access-vp8kr". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.112653 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf811574-1678-4eda-9979-92731f78d2fc-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.112705 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp8kr\" (UniqueName: \"kubernetes.io/projected/bf811574-1678-4eda-9979-92731f78d2fc-kube-api-access-vp8kr\") on node \"crc\" DevicePath \"\"" Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.251472 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf811574-1678-4eda-9979-92731f78d2fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf811574-1678-4eda-9979-92731f78d2fc" (UID: "bf811574-1678-4eda-9979-92731f78d2fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.319568 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf811574-1678-4eda-9979-92731f78d2fc-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.325354 4991 generic.go:334] "Generic (PLEG): container finished" podID="bf811574-1678-4eda-9979-92731f78d2fc" containerID="b2a4581a93ac4371ae91349b796b51da7969325ad196bd6388bd5338edc00197" exitCode=0 Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.325485 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nkxn" event={"ID":"bf811574-1678-4eda-9979-92731f78d2fc","Type":"ContainerDied","Data":"b2a4581a93ac4371ae91349b796b51da7969325ad196bd6388bd5338edc00197"} Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.325500 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9nkxn" Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.325657 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nkxn" event={"ID":"bf811574-1678-4eda-9979-92731f78d2fc","Type":"ContainerDied","Data":"3d3a6a9b54b5d46b54086af18d565522fc696a4fd123bf73d929603e0220475f"} Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.325799 4991 scope.go:117] "RemoveContainer" containerID="b2a4581a93ac4371ae91349b796b51da7969325ad196bd6388bd5338edc00197" Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.355922 4991 scope.go:117] "RemoveContainer" containerID="ac4314fb71770607f1b7a66cdbc148c151e5092dce3af3bacaaf8f123af251e5" Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.392902 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9nkxn"] Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.405756 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9nkxn"] Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.408215 4991 scope.go:117] "RemoveContainer" containerID="fa1fc54591ad89d62a2ab194c6d5e380e6e22a2356b537f383895d10acf3f27d" Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.459727 4991 scope.go:117] "RemoveContainer" containerID="b2a4581a93ac4371ae91349b796b51da7969325ad196bd6388bd5338edc00197" Sep 29 12:07:27 crc kubenswrapper[4991]: E0929 12:07:27.460224 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2a4581a93ac4371ae91349b796b51da7969325ad196bd6388bd5338edc00197\": container with ID starting with b2a4581a93ac4371ae91349b796b51da7969325ad196bd6388bd5338edc00197 not found: ID does not exist" containerID="b2a4581a93ac4371ae91349b796b51da7969325ad196bd6388bd5338edc00197" Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.460257 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2a4581a93ac4371ae91349b796b51da7969325ad196bd6388bd5338edc00197"} err="failed to get container status \"b2a4581a93ac4371ae91349b796b51da7969325ad196bd6388bd5338edc00197\": rpc error: code = NotFound desc = could not find container \"b2a4581a93ac4371ae91349b796b51da7969325ad196bd6388bd5338edc00197\": container with ID starting with b2a4581a93ac4371ae91349b796b51da7969325ad196bd6388bd5338edc00197 not found: ID does not exist" Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.460279 4991 scope.go:117] "RemoveContainer" containerID="ac4314fb71770607f1b7a66cdbc148c151e5092dce3af3bacaaf8f123af251e5" Sep 29 12:07:27 crc kubenswrapper[4991]: E0929 12:07:27.460585 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac4314fb71770607f1b7a66cdbc148c151e5092dce3af3bacaaf8f123af251e5\": container with ID starting with ac4314fb71770607f1b7a66cdbc148c151e5092dce3af3bacaaf8f123af251e5 not found: ID does not exist" containerID="ac4314fb71770607f1b7a66cdbc148c151e5092dce3af3bacaaf8f123af251e5" Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.460628 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac4314fb71770607f1b7a66cdbc148c151e5092dce3af3bacaaf8f123af251e5"} err="failed to get container status \"ac4314fb71770607f1b7a66cdbc148c151e5092dce3af3bacaaf8f123af251e5\": rpc error: code = NotFound desc = could not find 
container \"ac4314fb71770607f1b7a66cdbc148c151e5092dce3af3bacaaf8f123af251e5\": container with ID starting with ac4314fb71770607f1b7a66cdbc148c151e5092dce3af3bacaaf8f123af251e5 not found: ID does not exist" Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.460657 4991 scope.go:117] "RemoveContainer" containerID="fa1fc54591ad89d62a2ab194c6d5e380e6e22a2356b537f383895d10acf3f27d" Sep 29 12:07:27 crc kubenswrapper[4991]: E0929 12:07:27.461000 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa1fc54591ad89d62a2ab194c6d5e380e6e22a2356b537f383895d10acf3f27d\": container with ID starting with fa1fc54591ad89d62a2ab194c6d5e380e6e22a2356b537f383895d10acf3f27d not found: ID does not exist" containerID="fa1fc54591ad89d62a2ab194c6d5e380e6e22a2356b537f383895d10acf3f27d" Sep 29 12:07:27 crc kubenswrapper[4991]: I0929 12:07:27.461048 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa1fc54591ad89d62a2ab194c6d5e380e6e22a2356b537f383895d10acf3f27d"} err="failed to get container status \"fa1fc54591ad89d62a2ab194c6d5e380e6e22a2356b537f383895d10acf3f27d\": rpc error: code = NotFound desc = could not find container \"fa1fc54591ad89d62a2ab194c6d5e380e6e22a2356b537f383895d10acf3f27d\": container with ID starting with fa1fc54591ad89d62a2ab194c6d5e380e6e22a2356b537f383895d10acf3f27d not found: ID does not exist" Sep 29 12:07:28 crc kubenswrapper[4991]: I0929 12:07:28.946252 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf811574-1678-4eda-9979-92731f78d2fc" path="/var/lib/kubelet/pods/bf811574-1678-4eda-9979-92731f78d2fc/volumes" Sep 29 12:07:31 crc kubenswrapper[4991]: I0929 12:07:31.926720 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:07:31 crc kubenswrapper[4991]: E0929 12:07:31.927004 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:07:42 crc kubenswrapper[4991]: I0929 12:07:42.927122 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:07:42 crc kubenswrapper[4991]: E0929 12:07:42.927885 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:07:55 crc kubenswrapper[4991]: I0929 12:07:55.926284 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:07:55 crc kubenswrapper[4991]: E0929 12:07:55.927154 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:08:07 crc kubenswrapper[4991]: I0929 12:08:07.926164 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:08:07 crc kubenswrapper[4991]: E0929 12:08:07.927024 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:08:21 crc kubenswrapper[4991]: I0929 12:08:21.927387 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:08:21 crc kubenswrapper[4991]: E0929 12:08:21.928066 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:08:36 crc kubenswrapper[4991]: I0929 12:08:36.926785 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:08:36 crc kubenswrapper[4991]: E0929 12:08:36.928375 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:08:49 crc kubenswrapper[4991]: I0929 12:08:49.927279 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:08:49 crc kubenswrapper[4991]: E0929 12:08:49.927839 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:09:03 crc kubenswrapper[4991]: I0929 12:09:03.926181 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:09:03 crc kubenswrapper[4991]: E0929 12:09:03.926887 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" 
podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:09:15 crc kubenswrapper[4991]: I0929 12:09:15.927684 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:09:15 crc kubenswrapper[4991]: E0929 12:09:15.928720 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:09:27 crc kubenswrapper[4991]: I0929 12:09:27.926680 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:09:27 crc kubenswrapper[4991]: E0929 12:09:27.927492 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:09:39 crc kubenswrapper[4991]: I0929 12:09:39.926704 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:09:39 crc kubenswrapper[4991]: E0929 12:09:39.927523 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:09:50 crc kubenswrapper[4991]: I0929 12:09:50.928065 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:09:50 crc kubenswrapper[4991]: E0929 12:09:50.929512 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:10:01 crc kubenswrapper[4991]: I0929 12:10:01.927334 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:10:01 crc kubenswrapper[4991]: E0929 12:10:01.928064 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:10:13 crc kubenswrapper[4991]: I0929 12:10:13.926023 4991 scope.go:117] "RemoveContainer" 
containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:10:13 crc kubenswrapper[4991]: E0929 12:10:13.926801 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:10:27 crc kubenswrapper[4991]: I0929 12:10:27.927464 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:10:27 crc kubenswrapper[4991]: E0929 12:10:27.929973 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:10:40 crc kubenswrapper[4991]: I0929 12:10:40.927264 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b" Sep 29 12:10:41 crc kubenswrapper[4991]: I0929 12:10:41.563603 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"7ba9b4afd115d123b9daa76f8feb8417990d516cdbf965e66553d0c134c00467"} Sep 29 12:13:07 crc kubenswrapper[4991]: I0929 12:13:07.946911 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 12:13:07 crc kubenswrapper[4991]: I0929 12:13:07.949276 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 12:13:37 crc kubenswrapper[4991]: I0929 12:13:37.947549 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 12:13:37 crc kubenswrapper[4991]: I0929 12:13:37.948269 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 12:14:07 crc kubenswrapper[4991]: I0929 12:14:07.946914 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
Sep 29 12:13:07 crc kubenswrapper[4991]: I0929 12:13:07.946911 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 29 12:13:07 crc kubenswrapper[4991]: I0929 12:13:07.949276 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 29 12:13:37 crc kubenswrapper[4991]: I0929 12:13:37.947549 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 29 12:13:37 crc kubenswrapper[4991]: I0929 12:13:37.948269 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 29 12:14:07 crc kubenswrapper[4991]: I0929 12:14:07.946914 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 29 12:14:07 crc kubenswrapper[4991]: I0929 12:14:07.947575 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 29 12:14:07 crc kubenswrapper[4991]: I0929 12:14:07.947624 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk"
Sep 29 12:14:07 crc kubenswrapper[4991]: I0929 12:14:07.948429 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ba9b4afd115d123b9daa76f8feb8417990d516cdbf965e66553d0c134c00467"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Sep 29 12:14:07 crc kubenswrapper[4991]: I0929 12:14:07.948482 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://7ba9b4afd115d123b9daa76f8feb8417990d516cdbf965e66553d0c134c00467" gracePeriod=600
Sep 29 12:14:08 crc kubenswrapper[4991]: I0929 12:14:08.080233 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="7ba9b4afd115d123b9daa76f8feb8417990d516cdbf965e66553d0c134c00467" exitCode=0
Sep 29 12:14:08 crc kubenswrapper[4991]: I0929 12:14:08.080372 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"7ba9b4afd115d123b9daa76f8feb8417990d516cdbf965e66553d0c134c00467"}
Sep 29 12:14:08 crc kubenswrapper[4991]: I0929 12:14:08.080454 4991 scope.go:117] "RemoveContainer" containerID="b63d13b450b38436e598c6b185190164842f59112970e6d1fc4c297448c3c83b"
Sep 29 12:14:09 crc kubenswrapper[4991]: I0929 12:14:09.095595 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647"}
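The restart at 12:14:09 follows the standard liveness flow: the prober GETs http://127.0.0.1:8798/health, and once enough consecutive failures accumulate the container is killed with its termination grace period (600s here) and recreated. The 12:13:07 / 12:13:37 / 12:14:07 cadence suggests a 30s period with a failure threshold of three, though the probe spec itself is not in the log. A rough stand-in for what one probe tick does, under those assumed settings:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probeOnce mirrors an HTTP liveness check: any transport error
    // (e.g. "connect: connection refused", as logged above) or a status
    // outside 200-399 counts as a failure.
    func probeOnce(url string, timeout time.Duration) error {
        client := &http.Client{Timeout: timeout}
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unhealthy status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        failures := 0
        for range time.Tick(30 * time.Second) { // assumed periodSeconds
            if err := probeOnce("http://127.0.0.1:8798/health", time.Second); err != nil {
                failures++
                fmt.Println("probe failed:", err)
                if failures >= 3 { // assumed failureThreshold
                    fmt.Println("would kill and restart the container")
                    return
                }
            } else {
                failures = 0
            }
        }
    }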
containerName="registry-server" Sep 29 12:14:49 crc kubenswrapper[4991]: E0929 12:14:49.792064 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf811574-1678-4eda-9979-92731f78d2fc" containerName="extract-utilities" Sep 29 12:14:49 crc kubenswrapper[4991]: I0929 12:14:49.792071 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf811574-1678-4eda-9979-92731f78d2fc" containerName="extract-utilities" Sep 29 12:14:49 crc kubenswrapper[4991]: I0929 12:14:49.792342 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf811574-1678-4eda-9979-92731f78d2fc" containerName="registry-server" Sep 29 12:14:49 crc kubenswrapper[4991]: I0929 12:14:49.794330 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x8b79" Sep 29 12:14:49 crc kubenswrapper[4991]: I0929 12:14:49.814231 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x8b79"] Sep 29 12:14:49 crc kubenswrapper[4991]: I0929 12:14:49.858012 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f-catalog-content\") pod \"redhat-marketplace-x8b79\" (UID: \"2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f\") " pod="openshift-marketplace/redhat-marketplace-x8b79" Sep 29 12:14:49 crc kubenswrapper[4991]: I0929 12:14:49.858316 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f-utilities\") pod \"redhat-marketplace-x8b79\" (UID: \"2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f\") " pod="openshift-marketplace/redhat-marketplace-x8b79" Sep 29 12:14:49 crc kubenswrapper[4991]: I0929 12:14:49.858822 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hskm2\" (UniqueName: \"kubernetes.io/projected/2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f-kube-api-access-hskm2\") pod \"redhat-marketplace-x8b79\" (UID: \"2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f\") " pod="openshift-marketplace/redhat-marketplace-x8b79" Sep 29 12:14:49 crc kubenswrapper[4991]: I0929 12:14:49.961160 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hskm2\" (UniqueName: \"kubernetes.io/projected/2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f-kube-api-access-hskm2\") pod \"redhat-marketplace-x8b79\" (UID: \"2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f\") " pod="openshift-marketplace/redhat-marketplace-x8b79" Sep 29 12:14:49 crc kubenswrapper[4991]: I0929 12:14:49.961554 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f-catalog-content\") pod \"redhat-marketplace-x8b79\" (UID: \"2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f\") " pod="openshift-marketplace/redhat-marketplace-x8b79" Sep 29 12:14:49 crc kubenswrapper[4991]: I0929 12:14:49.961800 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f-utilities\") pod \"redhat-marketplace-x8b79\" (UID: \"2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f\") " pod="openshift-marketplace/redhat-marketplace-x8b79" Sep 29 12:14:49 crc kubenswrapper[4991]: I0929 12:14:49.962157 4991 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f-catalog-content\") pod \"redhat-marketplace-x8b79\" (UID: \"2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f\") " pod="openshift-marketplace/redhat-marketplace-x8b79" Sep 29 12:14:49 crc kubenswrapper[4991]: I0929 12:14:49.962345 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f-utilities\") pod \"redhat-marketplace-x8b79\" (UID: \"2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f\") " pod="openshift-marketplace/redhat-marketplace-x8b79" Sep 29 12:14:49 crc kubenswrapper[4991]: I0929 12:14:49.986250 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hskm2\" (UniqueName: \"kubernetes.io/projected/2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f-kube-api-access-hskm2\") pod \"redhat-marketplace-x8b79\" (UID: \"2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f\") " pod="openshift-marketplace/redhat-marketplace-x8b79" Sep 29 12:14:50 crc kubenswrapper[4991]: I0929 12:14:50.118818 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x8b79" Sep 29 12:14:50 crc kubenswrapper[4991]: I0929 12:14:50.218161 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hx68r"] Sep 29 12:14:50 crc kubenswrapper[4991]: I0929 12:14:50.221337 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hx68r" Sep 29 12:14:50 crc kubenswrapper[4991]: I0929 12:14:50.257149 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hx68r"] Sep 29 12:14:50 crc kubenswrapper[4991]: I0929 12:14:50.379375 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6-catalog-content\") pod \"redhat-operators-hx68r\" (UID: \"e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6\") " pod="openshift-marketplace/redhat-operators-hx68r" Sep 29 12:14:50 crc kubenswrapper[4991]: I0929 12:14:50.379912 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb57n\" (UniqueName: \"kubernetes.io/projected/e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6-kube-api-access-jb57n\") pod \"redhat-operators-hx68r\" (UID: \"e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6\") " pod="openshift-marketplace/redhat-operators-hx68r" Sep 29 12:14:50 crc kubenswrapper[4991]: I0929 12:14:50.380076 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6-utilities\") pod \"redhat-operators-hx68r\" (UID: \"e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6\") " pod="openshift-marketplace/redhat-operators-hx68r" Sep 29 12:14:50 crc kubenswrapper[4991]: I0929 12:14:50.482218 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6-catalog-content\") pod \"redhat-operators-hx68r\" (UID: \"e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6\") " pod="openshift-marketplace/redhat-operators-hx68r" Sep 29 12:14:50 crc kubenswrapper[4991]: I0929 12:14:50.482637 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb57n\" 
(UniqueName: \"kubernetes.io/projected/e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6-kube-api-access-jb57n\") pod \"redhat-operators-hx68r\" (UID: \"e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6\") " pod="openshift-marketplace/redhat-operators-hx68r" Sep 29 12:14:50 crc kubenswrapper[4991]: I0929 12:14:50.482694 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6-utilities\") pod \"redhat-operators-hx68r\" (UID: \"e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6\") " pod="openshift-marketplace/redhat-operators-hx68r" Sep 29 12:14:50 crc kubenswrapper[4991]: I0929 12:14:50.483260 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6-utilities\") pod \"redhat-operators-hx68r\" (UID: \"e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6\") " pod="openshift-marketplace/redhat-operators-hx68r" Sep 29 12:14:50 crc kubenswrapper[4991]: I0929 12:14:50.483462 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6-catalog-content\") pod \"redhat-operators-hx68r\" (UID: \"e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6\") " pod="openshift-marketplace/redhat-operators-hx68r" Sep 29 12:14:50 crc kubenswrapper[4991]: I0929 12:14:50.510181 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb57n\" (UniqueName: \"kubernetes.io/projected/e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6-kube-api-access-jb57n\") pod \"redhat-operators-hx68r\" (UID: \"e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6\") " pod="openshift-marketplace/redhat-operators-hx68r" Sep 29 12:14:50 crc kubenswrapper[4991]: I0929 12:14:50.645777 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hx68r" Sep 29 12:14:50 crc kubenswrapper[4991]: I0929 12:14:50.893174 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x8b79"] Sep 29 12:14:51 crc kubenswrapper[4991]: I0929 12:14:51.158585 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hx68r"] Sep 29 12:14:51 crc kubenswrapper[4991]: W0929 12:14:51.159401 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2d9b18c_2bf1_4e66_88b1_b0e82c5650c6.slice/crio-2416469b4be56179211b483a004534686e8f57700a5fc482c8188d051cebc9d2 WatchSource:0}: Error finding container 2416469b4be56179211b483a004534686e8f57700a5fc482c8188d051cebc9d2: Status 404 returned error can't find the container with id 2416469b4be56179211b483a004534686e8f57700a5fc482c8188d051cebc9d2 Sep 29 12:14:51 crc kubenswrapper[4991]: I0929 12:14:51.594570 4991 generic.go:334] "Generic (PLEG): container finished" podID="e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6" containerID="0f6f3905ba288bc81fe48b9de8b4cef85eae683fb8ed559f4d1209d8f59084ae" exitCode=0 Sep 29 12:14:51 crc kubenswrapper[4991]: I0929 12:14:51.594634 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hx68r" event={"ID":"e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6","Type":"ContainerDied","Data":"0f6f3905ba288bc81fe48b9de8b4cef85eae683fb8ed559f4d1209d8f59084ae"} Sep 29 12:14:51 crc kubenswrapper[4991]: I0929 12:14:51.594709 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hx68r" event={"ID":"e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6","Type":"ContainerStarted","Data":"2416469b4be56179211b483a004534686e8f57700a5fc482c8188d051cebc9d2"} Sep 29 12:14:51 crc kubenswrapper[4991]: I0929 12:14:51.596521 4991 generic.go:334] "Generic (PLEG): container finished" podID="2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f" containerID="cd9a2ee0f1c05d3aa672d4c85b53baaf7193901fd5b05c26e78208e4b8b65eff" exitCode=0 Sep 29 12:14:51 crc kubenswrapper[4991]: I0929 12:14:51.596573 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x8b79" event={"ID":"2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f","Type":"ContainerDied","Data":"cd9a2ee0f1c05d3aa672d4c85b53baaf7193901fd5b05c26e78208e4b8b65eff"} Sep 29 12:14:51 crc kubenswrapper[4991]: I0929 12:14:51.596603 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x8b79" event={"ID":"2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f","Type":"ContainerStarted","Data":"9a39b8707a56e1c5d375a231c24aa5c848def5080f183979658243afdb37027c"} Sep 29 12:14:51 crc kubenswrapper[4991]: I0929 12:14:51.597369 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 29 12:14:55 crc kubenswrapper[4991]: I0929 12:14:55.642228 4991 generic.go:334] "Generic (PLEG): container finished" podID="2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f" containerID="3e735208e062219f814c4fe65076c5e7cf94c29dacedda976f6a7f985b13d677" exitCode=0 Sep 29 12:14:55 crc kubenswrapper[4991]: I0929 12:14:55.643007 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x8b79" event={"ID":"2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f","Type":"ContainerDied","Data":"3e735208e062219f814c4fe65076c5e7cf94c29dacedda976f6a7f985b13d677"} Sep 29 12:14:56 crc kubenswrapper[4991]: I0929 12:14:56.663377 
4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x8b79" event={"ID":"2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f","Type":"ContainerStarted","Data":"70b9712f93437c91eafddee330b9243ff59032749d079cf9829b6142d9d3e57d"} Sep 29 12:14:56 crc kubenswrapper[4991]: I0929 12:14:56.686282 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x8b79" podStartSLOduration=3.249683587 podStartE2EDuration="7.6862645s" podCreationTimestamp="2025-09-29 12:14:49 +0000 UTC" firstStartedPulling="2025-09-29 12:14:51.598546871 +0000 UTC m=+9427.454474899" lastFinishedPulling="2025-09-29 12:14:56.035127794 +0000 UTC m=+9431.891055812" observedRunningTime="2025-09-29 12:14:56.680679665 +0000 UTC m=+9432.536607693" watchObservedRunningTime="2025-09-29 12:14:56.6862645 +0000 UTC m=+9432.542192528" Sep 29 12:15:00 crc kubenswrapper[4991]: I0929 12:15:00.124137 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x8b79" Sep 29 12:15:00 crc kubenswrapper[4991]: I0929 12:15:00.124913 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x8b79" Sep 29 12:15:00 crc kubenswrapper[4991]: I0929 12:15:00.179879 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319135-vkqf8"] Sep 29 12:15:00 crc kubenswrapper[4991]: I0929 12:15:00.182226 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319135-vkqf8" Sep 29 12:15:00 crc kubenswrapper[4991]: I0929 12:15:00.187735 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 29 12:15:00 crc kubenswrapper[4991]: I0929 12:15:00.191280 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 29 12:15:00 crc kubenswrapper[4991]: I0929 12:15:00.202701 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319135-vkqf8"] Sep 29 12:15:00 crc kubenswrapper[4991]: I0929 12:15:00.255751 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/920ae56f-37a4-4913-a161-d22037cdce92-config-volume\") pod \"collect-profiles-29319135-vkqf8\" (UID: \"920ae56f-37a4-4913-a161-d22037cdce92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319135-vkqf8" Sep 29 12:15:00 crc kubenswrapper[4991]: I0929 12:15:00.255890 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/920ae56f-37a4-4913-a161-d22037cdce92-secret-volume\") pod \"collect-profiles-29319135-vkqf8\" (UID: \"920ae56f-37a4-4913-a161-d22037cdce92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319135-vkqf8" Sep 29 12:15:00 crc kubenswrapper[4991]: I0929 12:15:00.255975 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qfk8\" (UniqueName: \"kubernetes.io/projected/920ae56f-37a4-4913-a161-d22037cdce92-kube-api-access-5qfk8\") pod \"collect-profiles-29319135-vkqf8\" (UID: \"920ae56f-37a4-4913-a161-d22037cdce92\") " 
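The two durations in the startup-latency entry reconcile exactly: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (12:14:56.6862645 - 12:14:49 = 7.6862645s), and podStartSLOduration subtracts the image-pull window, taken on the monotonic m= clock (9431.891055812 - 9427.454474899 = 4.436580913s). A quick arithmetic check:

    package main

    import "fmt"

    func main() {
        const (
            e2e        = 7.6862645      // podStartE2EDuration, seconds
            firstPullM = 9427.454474899 // firstStartedPulling, m= offset
            lastPullM  = 9431.891055812 // lastFinishedPulling, m= offset
        )
        pull := lastPullM - firstPullM
        fmt.Printf("pull window:  %.9fs\n", pull)     // 4.436580913s
        fmt.Printf("SLO duration: %.9fs\n", e2e-pull) // 3.249683587s, as logged
    }

The same identity holds for the other tracker entries below: the hx68r pod's 14.820232703s minus its 12.725367442s pull window gives its logged 2.094865261 SLO duration, and the collect-profiles pod, which pulled nothing (zero-valued pull timestamps), has SLO and E2E durations that are identical.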
pod="openshift-operator-lifecycle-manager/collect-profiles-29319135-vkqf8" Sep 29 12:15:00 crc kubenswrapper[4991]: I0929 12:15:00.276232 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x8b79" Sep 29 12:15:00 crc kubenswrapper[4991]: I0929 12:15:00.358501 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/920ae56f-37a4-4913-a161-d22037cdce92-secret-volume\") pod \"collect-profiles-29319135-vkqf8\" (UID: \"920ae56f-37a4-4913-a161-d22037cdce92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319135-vkqf8" Sep 29 12:15:00 crc kubenswrapper[4991]: I0929 12:15:00.358619 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qfk8\" (UniqueName: \"kubernetes.io/projected/920ae56f-37a4-4913-a161-d22037cdce92-kube-api-access-5qfk8\") pod \"collect-profiles-29319135-vkqf8\" (UID: \"920ae56f-37a4-4913-a161-d22037cdce92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319135-vkqf8" Sep 29 12:15:00 crc kubenswrapper[4991]: I0929 12:15:00.358742 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/920ae56f-37a4-4913-a161-d22037cdce92-config-volume\") pod \"collect-profiles-29319135-vkqf8\" (UID: \"920ae56f-37a4-4913-a161-d22037cdce92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319135-vkqf8" Sep 29 12:15:00 crc kubenswrapper[4991]: I0929 12:15:00.359469 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/920ae56f-37a4-4913-a161-d22037cdce92-config-volume\") pod \"collect-profiles-29319135-vkqf8\" (UID: \"920ae56f-37a4-4913-a161-d22037cdce92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319135-vkqf8" Sep 29 12:15:00 crc kubenswrapper[4991]: I0929 12:15:00.374106 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/920ae56f-37a4-4913-a161-d22037cdce92-secret-volume\") pod \"collect-profiles-29319135-vkqf8\" (UID: \"920ae56f-37a4-4913-a161-d22037cdce92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319135-vkqf8" Sep 29 12:15:00 crc kubenswrapper[4991]: I0929 12:15:00.376232 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qfk8\" (UniqueName: \"kubernetes.io/projected/920ae56f-37a4-4913-a161-d22037cdce92-kube-api-access-5qfk8\") pod \"collect-profiles-29319135-vkqf8\" (UID: \"920ae56f-37a4-4913-a161-d22037cdce92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319135-vkqf8" Sep 29 12:15:00 crc kubenswrapper[4991]: I0929 12:15:00.515845 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319135-vkqf8" Sep 29 12:15:01 crc kubenswrapper[4991]: W0929 12:15:01.582523 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod920ae56f_37a4_4913_a161_d22037cdce92.slice/crio-f2d9e47cacad72d3df556d25ffe0b48366d88879d6566e4b8d0ffd0df01ced72 WatchSource:0}: Error finding container f2d9e47cacad72d3df556d25ffe0b48366d88879d6566e4b8d0ffd0df01ced72: Status 404 returned error can't find the container with id f2d9e47cacad72d3df556d25ffe0b48366d88879d6566e4b8d0ffd0df01ced72 Sep 29 12:15:01 crc kubenswrapper[4991]: I0929 12:15:01.582606 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319135-vkqf8"] Sep 29 12:15:01 crc kubenswrapper[4991]: I0929 12:15:01.723640 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319135-vkqf8" event={"ID":"920ae56f-37a4-4913-a161-d22037cdce92","Type":"ContainerStarted","Data":"f2d9e47cacad72d3df556d25ffe0b48366d88879d6566e4b8d0ffd0df01ced72"} Sep 29 12:15:01 crc kubenswrapper[4991]: I0929 12:15:01.725407 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hx68r" event={"ID":"e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6","Type":"ContainerStarted","Data":"e946412fe28ae531bdcfe454d358fb46a809fc7ac40a7f8a2e4fb9aeac11800a"} Sep 29 12:15:01 crc kubenswrapper[4991]: E0929 12:15:01.908764 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2d9b18c_2bf1_4e66_88b1_b0e82c5650c6.slice/crio-e946412fe28ae531bdcfe454d358fb46a809fc7ac40a7f8a2e4fb9aeac11800a.scope\": RecentStats: unable to find data in memory cache]" Sep 29 12:15:02 crc kubenswrapper[4991]: I0929 12:15:02.747105 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319135-vkqf8" event={"ID":"920ae56f-37a4-4913-a161-d22037cdce92","Type":"ContainerStarted","Data":"1b8d6b1938afa904907be2fb3ad8af842940145e534f99d51c735629cf6998a3"} Sep 29 12:15:02 crc kubenswrapper[4991]: I0929 12:15:02.800019 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29319135-vkqf8" podStartSLOduration=2.79999685 podStartE2EDuration="2.79999685s" podCreationTimestamp="2025-09-29 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 12:15:02.769855345 +0000 UTC m=+9438.625783383" watchObservedRunningTime="2025-09-29 12:15:02.79999685 +0000 UTC m=+9438.655924878" Sep 29 12:15:03 crc kubenswrapper[4991]: I0929 12:15:03.766809 4991 generic.go:334] "Generic (PLEG): container finished" podID="e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6" containerID="e946412fe28ae531bdcfe454d358fb46a809fc7ac40a7f8a2e4fb9aeac11800a" exitCode=0 Sep 29 12:15:03 crc kubenswrapper[4991]: I0929 12:15:03.766889 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hx68r" event={"ID":"e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6","Type":"ContainerDied","Data":"e946412fe28ae531bdcfe454d358fb46a809fc7ac40a7f8a2e4fb9aeac11800a"} Sep 29 12:15:04 crc kubenswrapper[4991]: I0929 12:15:04.780479 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-hx68r" event={"ID":"e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6","Type":"ContainerStarted","Data":"0a0398c250f8c0d294e1e29fa4ae58ad1791551e7fb814250fee2acea4dd5777"} Sep 29 12:15:04 crc kubenswrapper[4991]: I0929 12:15:04.782857 4991 generic.go:334] "Generic (PLEG): container finished" podID="920ae56f-37a4-4913-a161-d22037cdce92" containerID="1b8d6b1938afa904907be2fb3ad8af842940145e534f99d51c735629cf6998a3" exitCode=0 Sep 29 12:15:04 crc kubenswrapper[4991]: I0929 12:15:04.782898 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319135-vkqf8" event={"ID":"920ae56f-37a4-4913-a161-d22037cdce92","Type":"ContainerDied","Data":"1b8d6b1938afa904907be2fb3ad8af842940145e534f99d51c735629cf6998a3"} Sep 29 12:15:04 crc kubenswrapper[4991]: I0929 12:15:04.820255 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hx68r" podStartSLOduration=2.094865261 podStartE2EDuration="14.820232703s" podCreationTimestamp="2025-09-29 12:14:50 +0000 UTC" firstStartedPulling="2025-09-29 12:14:51.597089743 +0000 UTC m=+9427.453017781" lastFinishedPulling="2025-09-29 12:15:04.322457195 +0000 UTC m=+9440.178385223" observedRunningTime="2025-09-29 12:15:04.814117564 +0000 UTC m=+9440.670045592" watchObservedRunningTime="2025-09-29 12:15:04.820232703 +0000 UTC m=+9440.676160731" Sep 29 12:15:06 crc kubenswrapper[4991]: I0929 12:15:06.236452 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319135-vkqf8" Sep 29 12:15:06 crc kubenswrapper[4991]: I0929 12:15:06.415187 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/920ae56f-37a4-4913-a161-d22037cdce92-secret-volume\") pod \"920ae56f-37a4-4913-a161-d22037cdce92\" (UID: \"920ae56f-37a4-4913-a161-d22037cdce92\") " Sep 29 12:15:06 crc kubenswrapper[4991]: I0929 12:15:06.415589 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/920ae56f-37a4-4913-a161-d22037cdce92-config-volume\") pod \"920ae56f-37a4-4913-a161-d22037cdce92\" (UID: \"920ae56f-37a4-4913-a161-d22037cdce92\") " Sep 29 12:15:06 crc kubenswrapper[4991]: I0929 12:15:06.415768 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qfk8\" (UniqueName: \"kubernetes.io/projected/920ae56f-37a4-4913-a161-d22037cdce92-kube-api-access-5qfk8\") pod \"920ae56f-37a4-4913-a161-d22037cdce92\" (UID: \"920ae56f-37a4-4913-a161-d22037cdce92\") " Sep 29 12:15:06 crc kubenswrapper[4991]: I0929 12:15:06.417653 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/920ae56f-37a4-4913-a161-d22037cdce92-config-volume" (OuterVolumeSpecName: "config-volume") pod "920ae56f-37a4-4913-a161-d22037cdce92" (UID: "920ae56f-37a4-4913-a161-d22037cdce92"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 12:15:06 crc kubenswrapper[4991]: I0929 12:15:06.424190 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/920ae56f-37a4-4913-a161-d22037cdce92-kube-api-access-5qfk8" (OuterVolumeSpecName: "kube-api-access-5qfk8") pod "920ae56f-37a4-4913-a161-d22037cdce92" (UID: "920ae56f-37a4-4913-a161-d22037cdce92"). 
InnerVolumeSpecName "kube-api-access-5qfk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 12:15:06 crc kubenswrapper[4991]: I0929 12:15:06.424795 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/920ae56f-37a4-4913-a161-d22037cdce92-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "920ae56f-37a4-4913-a161-d22037cdce92" (UID: "920ae56f-37a4-4913-a161-d22037cdce92"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 12:15:06 crc kubenswrapper[4991]: I0929 12:15:06.519232 4991 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/920ae56f-37a4-4913-a161-d22037cdce92-config-volume\") on node \"crc\" DevicePath \"\"" Sep 29 12:15:06 crc kubenswrapper[4991]: I0929 12:15:06.519266 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qfk8\" (UniqueName: \"kubernetes.io/projected/920ae56f-37a4-4913-a161-d22037cdce92-kube-api-access-5qfk8\") on node \"crc\" DevicePath \"\"" Sep 29 12:15:06 crc kubenswrapper[4991]: I0929 12:15:06.519277 4991 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/920ae56f-37a4-4913-a161-d22037cdce92-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 29 12:15:06 crc kubenswrapper[4991]: I0929 12:15:06.818684 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319135-vkqf8" event={"ID":"920ae56f-37a4-4913-a161-d22037cdce92","Type":"ContainerDied","Data":"f2d9e47cacad72d3df556d25ffe0b48366d88879d6566e4b8d0ffd0df01ced72"} Sep 29 12:15:06 crc kubenswrapper[4991]: I0929 12:15:06.818755 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2d9e47cacad72d3df556d25ffe0b48366d88879d6566e4b8d0ffd0df01ced72" Sep 29 12:15:06 crc kubenswrapper[4991]: I0929 12:15:06.818832 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319135-vkqf8" Sep 29 12:15:06 crc kubenswrapper[4991]: I0929 12:15:06.960023 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr"] Sep 29 12:15:06 crc kubenswrapper[4991]: I0929 12:15:06.979340 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319090-45cxr"] Sep 29 12:15:08 crc kubenswrapper[4991]: I0929 12:15:08.938565 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5d003c7-668f-4b1a-8b82-4fd52ee26974" path="/var/lib/kubelet/pods/e5d003c7-668f-4b1a-8b82-4fd52ee26974/volumes" Sep 29 12:15:10 crc kubenswrapper[4991]: I0929 12:15:10.186371 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x8b79" Sep 29 12:15:10 crc kubenswrapper[4991]: I0929 12:15:10.239941 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x8b79"] Sep 29 12:15:10 crc kubenswrapper[4991]: I0929 12:15:10.646552 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hx68r" Sep 29 12:15:10 crc kubenswrapper[4991]: I0929 12:15:10.646862 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hx68r" Sep 29 12:15:10 crc kubenswrapper[4991]: I0929 12:15:10.690259 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hx68r" Sep 29 12:15:10 crc kubenswrapper[4991]: I0929 12:15:10.863994 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x8b79" podUID="2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f" containerName="registry-server" containerID="cri-o://70b9712f93437c91eafddee330b9243ff59032749d079cf9829b6142d9d3e57d" gracePeriod=2 Sep 29 12:15:10 crc kubenswrapper[4991]: I0929 12:15:10.920462 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hx68r" Sep 29 12:15:11 crc kubenswrapper[4991]: I0929 12:15:11.437501 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x8b79" Sep 29 12:15:11 crc kubenswrapper[4991]: I0929 12:15:11.636158 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f-catalog-content\") pod \"2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f\" (UID: \"2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f\") " Sep 29 12:15:11 crc kubenswrapper[4991]: I0929 12:15:11.636292 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hskm2\" (UniqueName: \"kubernetes.io/projected/2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f-kube-api-access-hskm2\") pod \"2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f\" (UID: \"2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f\") " Sep 29 12:15:11 crc kubenswrapper[4991]: I0929 12:15:11.636563 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f-utilities\") pod \"2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f\" (UID: \"2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f\") " Sep 29 12:15:11 crc kubenswrapper[4991]: I0929 12:15:11.637747 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f-utilities" (OuterVolumeSpecName: "utilities") pod "2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f" (UID: "2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:15:11 crc kubenswrapper[4991]: I0929 12:15:11.650556 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f" (UID: "2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:15:11 crc kubenswrapper[4991]: I0929 12:15:11.660426 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f-kube-api-access-hskm2" (OuterVolumeSpecName: "kube-api-access-hskm2") pod "2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f" (UID: "2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f"). InnerVolumeSpecName "kube-api-access-hskm2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 12:15:11 crc kubenswrapper[4991]: I0929 12:15:11.676399 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hx68r"] Sep 29 12:15:11 crc kubenswrapper[4991]: I0929 12:15:11.740532 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 12:15:11 crc kubenswrapper[4991]: I0929 12:15:11.740601 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hskm2\" (UniqueName: \"kubernetes.io/projected/2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f-kube-api-access-hskm2\") on node \"crc\" DevicePath \"\"" Sep 29 12:15:11 crc kubenswrapper[4991]: I0929 12:15:11.740627 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 12:15:11 crc kubenswrapper[4991]: I0929 12:15:11.879484 4991 generic.go:334] "Generic (PLEG): container finished" podID="2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f" containerID="70b9712f93437c91eafddee330b9243ff59032749d079cf9829b6142d9d3e57d" exitCode=0 Sep 29 12:15:11 crc kubenswrapper[4991]: I0929 12:15:11.879546 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x8b79" event={"ID":"2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f","Type":"ContainerDied","Data":"70b9712f93437c91eafddee330b9243ff59032749d079cf9829b6142d9d3e57d"} Sep 29 12:15:11 crc kubenswrapper[4991]: I0929 12:15:11.879588 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x8b79" Sep 29 12:15:11 crc kubenswrapper[4991]: I0929 12:15:11.879617 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x8b79" event={"ID":"2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f","Type":"ContainerDied","Data":"9a39b8707a56e1c5d375a231c24aa5c848def5080f183979658243afdb37027c"} Sep 29 12:15:11 crc kubenswrapper[4991]: I0929 12:15:11.879640 4991 scope.go:117] "RemoveContainer" containerID="70b9712f93437c91eafddee330b9243ff59032749d079cf9829b6142d9d3e57d" Sep 29 12:15:11 crc kubenswrapper[4991]: I0929 12:15:11.921583 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x8b79"] Sep 29 12:15:11 crc kubenswrapper[4991]: I0929 12:15:11.923097 4991 scope.go:117] "RemoveContainer" containerID="3e735208e062219f814c4fe65076c5e7cf94c29dacedda976f6a7f985b13d677" Sep 29 12:15:11 crc kubenswrapper[4991]: I0929 12:15:11.932445 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x8b79"] Sep 29 12:15:11 crc kubenswrapper[4991]: I0929 12:15:11.954816 4991 scope.go:117] "RemoveContainer" containerID="cd9a2ee0f1c05d3aa672d4c85b53baaf7193901fd5b05c26e78208e4b8b65eff" Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.005377 4991 scope.go:117] "RemoveContainer" containerID="70b9712f93437c91eafddee330b9243ff59032749d079cf9829b6142d9d3e57d" Sep 29 12:15:12 crc kubenswrapper[4991]: E0929 12:15:12.005835 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70b9712f93437c91eafddee330b9243ff59032749d079cf9829b6142d9d3e57d\": container with ID starting with 70b9712f93437c91eafddee330b9243ff59032749d079cf9829b6142d9d3e57d not 
found: ID does not exist" containerID="70b9712f93437c91eafddee330b9243ff59032749d079cf9829b6142d9d3e57d" Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.005875 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70b9712f93437c91eafddee330b9243ff59032749d079cf9829b6142d9d3e57d"} err="failed to get container status \"70b9712f93437c91eafddee330b9243ff59032749d079cf9829b6142d9d3e57d\": rpc error: code = NotFound desc = could not find container \"70b9712f93437c91eafddee330b9243ff59032749d079cf9829b6142d9d3e57d\": container with ID starting with 70b9712f93437c91eafddee330b9243ff59032749d079cf9829b6142d9d3e57d not found: ID does not exist" Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.005900 4991 scope.go:117] "RemoveContainer" containerID="3e735208e062219f814c4fe65076c5e7cf94c29dacedda976f6a7f985b13d677" Sep 29 12:15:12 crc kubenswrapper[4991]: E0929 12:15:12.007032 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e735208e062219f814c4fe65076c5e7cf94c29dacedda976f6a7f985b13d677\": container with ID starting with 3e735208e062219f814c4fe65076c5e7cf94c29dacedda976f6a7f985b13d677 not found: ID does not exist" containerID="3e735208e062219f814c4fe65076c5e7cf94c29dacedda976f6a7f985b13d677" Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.007089 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e735208e062219f814c4fe65076c5e7cf94c29dacedda976f6a7f985b13d677"} err="failed to get container status \"3e735208e062219f814c4fe65076c5e7cf94c29dacedda976f6a7f985b13d677\": rpc error: code = NotFound desc = could not find container \"3e735208e062219f814c4fe65076c5e7cf94c29dacedda976f6a7f985b13d677\": container with ID starting with 3e735208e062219f814c4fe65076c5e7cf94c29dacedda976f6a7f985b13d677 not found: ID does not exist" Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.007129 4991 scope.go:117] "RemoveContainer" containerID="cd9a2ee0f1c05d3aa672d4c85b53baaf7193901fd5b05c26e78208e4b8b65eff" Sep 29 12:15:12 crc kubenswrapper[4991]: E0929 12:15:12.007593 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd9a2ee0f1c05d3aa672d4c85b53baaf7193901fd5b05c26e78208e4b8b65eff\": container with ID starting with cd9a2ee0f1c05d3aa672d4c85b53baaf7193901fd5b05c26e78208e4b8b65eff not found: ID does not exist" containerID="cd9a2ee0f1c05d3aa672d4c85b53baaf7193901fd5b05c26e78208e4b8b65eff" Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.007683 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd9a2ee0f1c05d3aa672d4c85b53baaf7193901fd5b05c26e78208e4b8b65eff"} err="failed to get container status \"cd9a2ee0f1c05d3aa672d4c85b53baaf7193901fd5b05c26e78208e4b8b65eff\": rpc error: code = NotFound desc = could not find container \"cd9a2ee0f1c05d3aa672d4c85b53baaf7193901fd5b05c26e78208e4b8b65eff\": container with ID starting with cd9a2ee0f1c05d3aa672d4c85b53baaf7193901fd5b05c26e78208e4b8b65eff not found: ID does not exist" Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.035430 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w72tb"] Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.035747 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w72tb" 
podUID="a5e1f548-b16d-4d2e-a8ad-0453fd76f74e" containerName="registry-server" containerID="cri-o://0626c30aff085c3f7172ce46de79f2b346b306daac23138316e8a992a8456bcd" gracePeriod=2 Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.734333 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w72tb" Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.877157 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5e1f548-b16d-4d2e-a8ad-0453fd76f74e-utilities\") pod \"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e\" (UID: \"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e\") " Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.877681 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl8zr\" (UniqueName: \"kubernetes.io/projected/a5e1f548-b16d-4d2e-a8ad-0453fd76f74e-kube-api-access-pl8zr\") pod \"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e\" (UID: \"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e\") " Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.877811 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5e1f548-b16d-4d2e-a8ad-0453fd76f74e-catalog-content\") pod \"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e\" (UID: \"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e\") " Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.877810 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5e1f548-b16d-4d2e-a8ad-0453fd76f74e-utilities" (OuterVolumeSpecName: "utilities") pod "a5e1f548-b16d-4d2e-a8ad-0453fd76f74e" (UID: "a5e1f548-b16d-4d2e-a8ad-0453fd76f74e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.878645 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5e1f548-b16d-4d2e-a8ad-0453fd76f74e-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.895008 4991 generic.go:334] "Generic (PLEG): container finished" podID="a5e1f548-b16d-4d2e-a8ad-0453fd76f74e" containerID="0626c30aff085c3f7172ce46de79f2b346b306daac23138316e8a992a8456bcd" exitCode=0 Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.896301 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w72tb" Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.896973 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w72tb" event={"ID":"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e","Type":"ContainerDied","Data":"0626c30aff085c3f7172ce46de79f2b346b306daac23138316e8a992a8456bcd"} Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.897004 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w72tb" event={"ID":"a5e1f548-b16d-4d2e-a8ad-0453fd76f74e","Type":"ContainerDied","Data":"99d33d4e462a08c272ad394214a1eba90eae75da95c42aaf7881a14284c4e913"} Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.897020 4991 scope.go:117] "RemoveContainer" containerID="0626c30aff085c3f7172ce46de79f2b346b306daac23138316e8a992a8456bcd" Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.906805 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5e1f548-b16d-4d2e-a8ad-0453fd76f74e-kube-api-access-pl8zr" (OuterVolumeSpecName: "kube-api-access-pl8zr") pod "a5e1f548-b16d-4d2e-a8ad-0453fd76f74e" (UID: "a5e1f548-b16d-4d2e-a8ad-0453fd76f74e"). InnerVolumeSpecName "kube-api-access-pl8zr". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.947687 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f" path="/var/lib/kubelet/pods/2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f/volumes" Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.968188 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5e1f548-b16d-4d2e-a8ad-0453fd76f74e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5e1f548-b16d-4d2e-a8ad-0453fd76f74e" (UID: "a5e1f548-b16d-4d2e-a8ad-0453fd76f74e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.974999 4991 scope.go:117] "RemoveContainer" containerID="31887cb6529fff7d28ff847bd5ba55bb485de69412a0395e8cc68fb85059de75" Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.981539 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl8zr\" (UniqueName: \"kubernetes.io/projected/a5e1f548-b16d-4d2e-a8ad-0453fd76f74e-kube-api-access-pl8zr\") on node \"crc\" DevicePath \"\"" Sep 29 12:15:12 crc kubenswrapper[4991]: I0929 12:15:12.981576 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5e1f548-b16d-4d2e-a8ad-0453fd76f74e-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 12:15:13 crc kubenswrapper[4991]: I0929 12:15:13.004035 4991 scope.go:117] "RemoveContainer" containerID="bc290ecde1332aa83b5fcf13256c0372c3f461fc31c660eceb59e8298db7251a" Sep 29 12:15:13 crc kubenswrapper[4991]: I0929 12:15:13.072258 4991 scope.go:117] "RemoveContainer" containerID="0626c30aff085c3f7172ce46de79f2b346b306daac23138316e8a992a8456bcd" Sep 29 12:15:13 crc kubenswrapper[4991]: E0929 12:15:13.072826 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0626c30aff085c3f7172ce46de79f2b346b306daac23138316e8a992a8456bcd\": container with ID starting with 0626c30aff085c3f7172ce46de79f2b346b306daac23138316e8a992a8456bcd not found: ID does not exist" containerID="0626c30aff085c3f7172ce46de79f2b346b306daac23138316e8a992a8456bcd" Sep 29 12:15:13 crc kubenswrapper[4991]: I0929 12:15:13.072886 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0626c30aff085c3f7172ce46de79f2b346b306daac23138316e8a992a8456bcd"} err="failed to get container status \"0626c30aff085c3f7172ce46de79f2b346b306daac23138316e8a992a8456bcd\": rpc error: code = NotFound desc = could not find container \"0626c30aff085c3f7172ce46de79f2b346b306daac23138316e8a992a8456bcd\": container with ID starting with 0626c30aff085c3f7172ce46de79f2b346b306daac23138316e8a992a8456bcd not found: ID does not exist" Sep 29 12:15:13 crc kubenswrapper[4991]: I0929 12:15:13.072919 4991 scope.go:117] "RemoveContainer" containerID="31887cb6529fff7d28ff847bd5ba55bb485de69412a0395e8cc68fb85059de75" Sep 29 12:15:13 crc kubenswrapper[4991]: E0929 12:15:13.074024 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31887cb6529fff7d28ff847bd5ba55bb485de69412a0395e8cc68fb85059de75\": container with ID starting with 31887cb6529fff7d28ff847bd5ba55bb485de69412a0395e8cc68fb85059de75 not found: ID does not exist" containerID="31887cb6529fff7d28ff847bd5ba55bb485de69412a0395e8cc68fb85059de75" Sep 29 12:15:13 crc kubenswrapper[4991]: I0929 12:15:13.074136 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31887cb6529fff7d28ff847bd5ba55bb485de69412a0395e8cc68fb85059de75"} err="failed to get container status \"31887cb6529fff7d28ff847bd5ba55bb485de69412a0395e8cc68fb85059de75\": rpc error: code = NotFound desc = could not find container \"31887cb6529fff7d28ff847bd5ba55bb485de69412a0395e8cc68fb85059de75\": container with ID starting with 31887cb6529fff7d28ff847bd5ba55bb485de69412a0395e8cc68fb85059de75 not found: ID does not exist" Sep 29 12:15:13 crc kubenswrapper[4991]: I0929 12:15:13.074218 4991 scope.go:117] "RemoveContainer" 
containerID="bc290ecde1332aa83b5fcf13256c0372c3f461fc31c660eceb59e8298db7251a" Sep 29 12:15:13 crc kubenswrapper[4991]: E0929 12:15:13.074882 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc290ecde1332aa83b5fcf13256c0372c3f461fc31c660eceb59e8298db7251a\": container with ID starting with bc290ecde1332aa83b5fcf13256c0372c3f461fc31c660eceb59e8298db7251a not found: ID does not exist" containerID="bc290ecde1332aa83b5fcf13256c0372c3f461fc31c660eceb59e8298db7251a" Sep 29 12:15:13 crc kubenswrapper[4991]: I0929 12:15:13.074941 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc290ecde1332aa83b5fcf13256c0372c3f461fc31c660eceb59e8298db7251a"} err="failed to get container status \"bc290ecde1332aa83b5fcf13256c0372c3f461fc31c660eceb59e8298db7251a\": rpc error: code = NotFound desc = could not find container \"bc290ecde1332aa83b5fcf13256c0372c3f461fc31c660eceb59e8298db7251a\": container with ID starting with bc290ecde1332aa83b5fcf13256c0372c3f461fc31c660eceb59e8298db7251a not found: ID does not exist" Sep 29 12:15:13 crc kubenswrapper[4991]: I0929 12:15:13.232620 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w72tb"] Sep 29 12:15:13 crc kubenswrapper[4991]: I0929 12:15:13.242796 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w72tb"] Sep 29 12:15:13 crc kubenswrapper[4991]: E0929 12:15:13.456751 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5e1f548_b16d_4d2e_a8ad_0453fd76f74e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5e1f548_b16d_4d2e_a8ad_0453fd76f74e.slice/crio-99d33d4e462a08c272ad394214a1eba90eae75da95c42aaf7881a14284c4e913\": RecentStats: unable to find data in memory cache]" Sep 29 12:15:14 crc kubenswrapper[4991]: I0929 12:15:14.959380 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5e1f548-b16d-4d2e-a8ad-0453fd76f74e" path="/var/lib/kubelet/pods/a5e1f548-b16d-4d2e-a8ad-0453fd76f74e/volumes" Sep 29 12:15:27 crc kubenswrapper[4991]: I0929 12:15:27.904456 4991 scope.go:117] "RemoveContainer" containerID="e9a793874cca47cab3e634d4fc10b6e1cb364b1905997706fab412233ab30cdc" Sep 29 12:16:37 crc kubenswrapper[4991]: I0929 12:16:37.946822 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 12:16:37 crc kubenswrapper[4991]: I0929 12:16:37.947370 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 12:17:07 crc kubenswrapper[4991]: I0929 12:17:07.946615 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Sep 29 12:17:07 crc kubenswrapper[4991]: I0929 12:17:07.947141 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 12:17:15 crc kubenswrapper[4991]: I0929 12:17:15.882519 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l4jwd"] Sep 29 12:17:15 crc kubenswrapper[4991]: E0929 12:17:15.884056 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e1f548-b16d-4d2e-a8ad-0453fd76f74e" containerName="extract-content" Sep 29 12:17:15 crc kubenswrapper[4991]: I0929 12:17:15.884074 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e1f548-b16d-4d2e-a8ad-0453fd76f74e" containerName="extract-content" Sep 29 12:17:15 crc kubenswrapper[4991]: E0929 12:17:15.884105 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="920ae56f-37a4-4913-a161-d22037cdce92" containerName="collect-profiles" Sep 29 12:17:15 crc kubenswrapper[4991]: I0929 12:17:15.884115 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="920ae56f-37a4-4913-a161-d22037cdce92" containerName="collect-profiles" Sep 29 12:17:15 crc kubenswrapper[4991]: E0929 12:17:15.884137 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f" containerName="extract-utilities" Sep 29 12:17:15 crc kubenswrapper[4991]: I0929 12:17:15.884145 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f" containerName="extract-utilities" Sep 29 12:17:15 crc kubenswrapper[4991]: E0929 12:17:15.884164 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e1f548-b16d-4d2e-a8ad-0453fd76f74e" containerName="registry-server" Sep 29 12:17:15 crc kubenswrapper[4991]: I0929 12:17:15.884170 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e1f548-b16d-4d2e-a8ad-0453fd76f74e" containerName="registry-server" Sep 29 12:17:15 crc kubenswrapper[4991]: E0929 12:17:15.884184 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f" containerName="extract-content" Sep 29 12:17:15 crc kubenswrapper[4991]: I0929 12:17:15.884191 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f" containerName="extract-content" Sep 29 12:17:15 crc kubenswrapper[4991]: E0929 12:17:15.884209 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f" containerName="registry-server" Sep 29 12:17:15 crc kubenswrapper[4991]: I0929 12:17:15.884217 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f" containerName="registry-server" Sep 29 12:17:15 crc kubenswrapper[4991]: E0929 12:17:15.884229 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e1f548-b16d-4d2e-a8ad-0453fd76f74e" containerName="extract-utilities" Sep 29 12:17:15 crc kubenswrapper[4991]: I0929 12:17:15.884238 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e1f548-b16d-4d2e-a8ad-0453fd76f74e" containerName="extract-utilities" Sep 29 12:17:15 crc kubenswrapper[4991]: I0929 12:17:15.884503 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5e1f548-b16d-4d2e-a8ad-0453fd76f74e" 
containerName="registry-server" Sep 29 12:17:15 crc kubenswrapper[4991]: I0929 12:17:15.884519 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e4fc7bd-c3c5-4dfd-ab21-c81137c1525f" containerName="registry-server" Sep 29 12:17:15 crc kubenswrapper[4991]: I0929 12:17:15.884542 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="920ae56f-37a4-4913-a161-d22037cdce92" containerName="collect-profiles" Sep 29 12:17:15 crc kubenswrapper[4991]: I0929 12:17:15.886476 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l4jwd" Sep 29 12:17:15 crc kubenswrapper[4991]: I0929 12:17:15.913884 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l4jwd"] Sep 29 12:17:16 crc kubenswrapper[4991]: I0929 12:17:16.042386 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0180b55-7890-46ac-9c0f-f3a44b9e8075-catalog-content\") pod \"community-operators-l4jwd\" (UID: \"d0180b55-7890-46ac-9c0f-f3a44b9e8075\") " pod="openshift-marketplace/community-operators-l4jwd" Sep 29 12:17:16 crc kubenswrapper[4991]: I0929 12:17:16.045535 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0180b55-7890-46ac-9c0f-f3a44b9e8075-utilities\") pod \"community-operators-l4jwd\" (UID: \"d0180b55-7890-46ac-9c0f-f3a44b9e8075\") " pod="openshift-marketplace/community-operators-l4jwd" Sep 29 12:17:16 crc kubenswrapper[4991]: I0929 12:17:16.045647 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxffx\" (UniqueName: \"kubernetes.io/projected/d0180b55-7890-46ac-9c0f-f3a44b9e8075-kube-api-access-qxffx\") pod \"community-operators-l4jwd\" (UID: \"d0180b55-7890-46ac-9c0f-f3a44b9e8075\") " pod="openshift-marketplace/community-operators-l4jwd" Sep 29 12:17:16 crc kubenswrapper[4991]: I0929 12:17:16.147537 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0180b55-7890-46ac-9c0f-f3a44b9e8075-catalog-content\") pod \"community-operators-l4jwd\" (UID: \"d0180b55-7890-46ac-9c0f-f3a44b9e8075\") " pod="openshift-marketplace/community-operators-l4jwd" Sep 29 12:17:16 crc kubenswrapper[4991]: I0929 12:17:16.147634 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0180b55-7890-46ac-9c0f-f3a44b9e8075-utilities\") pod \"community-operators-l4jwd\" (UID: \"d0180b55-7890-46ac-9c0f-f3a44b9e8075\") " pod="openshift-marketplace/community-operators-l4jwd" Sep 29 12:17:16 crc kubenswrapper[4991]: I0929 12:17:16.147673 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxffx\" (UniqueName: \"kubernetes.io/projected/d0180b55-7890-46ac-9c0f-f3a44b9e8075-kube-api-access-qxffx\") pod \"community-operators-l4jwd\" (UID: \"d0180b55-7890-46ac-9c0f-f3a44b9e8075\") " pod="openshift-marketplace/community-operators-l4jwd" Sep 29 12:17:16 crc kubenswrapper[4991]: I0929 12:17:16.148184 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0180b55-7890-46ac-9c0f-f3a44b9e8075-catalog-content\") pod \"community-operators-l4jwd\" (UID: 
\"d0180b55-7890-46ac-9c0f-f3a44b9e8075\") " pod="openshift-marketplace/community-operators-l4jwd" Sep 29 12:17:16 crc kubenswrapper[4991]: I0929 12:17:16.148471 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0180b55-7890-46ac-9c0f-f3a44b9e8075-utilities\") pod \"community-operators-l4jwd\" (UID: \"d0180b55-7890-46ac-9c0f-f3a44b9e8075\") " pod="openshift-marketplace/community-operators-l4jwd" Sep 29 12:17:16 crc kubenswrapper[4991]: I0929 12:17:16.188398 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxffx\" (UniqueName: \"kubernetes.io/projected/d0180b55-7890-46ac-9c0f-f3a44b9e8075-kube-api-access-qxffx\") pod \"community-operators-l4jwd\" (UID: \"d0180b55-7890-46ac-9c0f-f3a44b9e8075\") " pod="openshift-marketplace/community-operators-l4jwd" Sep 29 12:17:16 crc kubenswrapper[4991]: I0929 12:17:16.208511 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l4jwd" Sep 29 12:17:16 crc kubenswrapper[4991]: I0929 12:17:16.884115 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l4jwd"] Sep 29 12:17:17 crc kubenswrapper[4991]: I0929 12:17:17.331590 4991 generic.go:334] "Generic (PLEG): container finished" podID="d0180b55-7890-46ac-9c0f-f3a44b9e8075" containerID="cf0d54e1b7ee9f7e6fc64ffcc0c6162f7175fc3f586f078ac316a5ac98c23827" exitCode=0 Sep 29 12:17:17 crc kubenswrapper[4991]: I0929 12:17:17.331662 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4jwd" event={"ID":"d0180b55-7890-46ac-9c0f-f3a44b9e8075","Type":"ContainerDied","Data":"cf0d54e1b7ee9f7e6fc64ffcc0c6162f7175fc3f586f078ac316a5ac98c23827"} Sep 29 12:17:17 crc kubenswrapper[4991]: I0929 12:17:17.331943 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4jwd" event={"ID":"d0180b55-7890-46ac-9c0f-f3a44b9e8075","Type":"ContainerStarted","Data":"a8f2b53fae27dfbb1c4df34d293635594b6a730a44e71c0bc90e0cec81addefb"} Sep 29 12:17:19 crc kubenswrapper[4991]: I0929 12:17:19.354735 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4jwd" event={"ID":"d0180b55-7890-46ac-9c0f-f3a44b9e8075","Type":"ContainerStarted","Data":"2efacceaede55e151e79ddbfbbc986e4c4a7905cbc52687f1085c0f572adac86"} Sep 29 12:17:20 crc kubenswrapper[4991]: I0929 12:17:20.371239 4991 generic.go:334] "Generic (PLEG): container finished" podID="d0180b55-7890-46ac-9c0f-f3a44b9e8075" containerID="2efacceaede55e151e79ddbfbbc986e4c4a7905cbc52687f1085c0f572adac86" exitCode=0 Sep 29 12:17:20 crc kubenswrapper[4991]: I0929 12:17:20.371294 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4jwd" event={"ID":"d0180b55-7890-46ac-9c0f-f3a44b9e8075","Type":"ContainerDied","Data":"2efacceaede55e151e79ddbfbbc986e4c4a7905cbc52687f1085c0f572adac86"} Sep 29 12:17:21 crc kubenswrapper[4991]: I0929 12:17:21.385150 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4jwd" event={"ID":"d0180b55-7890-46ac-9c0f-f3a44b9e8075","Type":"ContainerStarted","Data":"44fc583c131bbc1f3e5946d7a0befc3b13a28322aa98b59d82262c48e6b58775"} Sep 29 12:17:21 crc kubenswrapper[4991]: I0929 12:17:21.420928 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-l4jwd" podStartSLOduration=2.947052324 podStartE2EDuration="6.420906486s" podCreationTimestamp="2025-09-29 12:17:15 +0000 UTC" firstStartedPulling="2025-09-29 12:17:17.333369744 +0000 UTC m=+9573.189297772" lastFinishedPulling="2025-09-29 12:17:20.807223866 +0000 UTC m=+9576.663151934" observedRunningTime="2025-09-29 12:17:21.403459931 +0000 UTC m=+9577.259387959" watchObservedRunningTime="2025-09-29 12:17:21.420906486 +0000 UTC m=+9577.276834514" Sep 29 12:17:26 crc kubenswrapper[4991]: I0929 12:17:26.209688 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l4jwd" Sep 29 12:17:26 crc kubenswrapper[4991]: I0929 12:17:26.210238 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l4jwd" Sep 29 12:17:26 crc kubenswrapper[4991]: I0929 12:17:26.262304 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l4jwd" Sep 29 12:17:26 crc kubenswrapper[4991]: I0929 12:17:26.518041 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l4jwd" Sep 29 12:17:26 crc kubenswrapper[4991]: I0929 12:17:26.577198 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l4jwd"] Sep 29 12:17:28 crc kubenswrapper[4991]: I0929 12:17:28.490909 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l4jwd" podUID="d0180b55-7890-46ac-9c0f-f3a44b9e8075" containerName="registry-server" containerID="cri-o://44fc583c131bbc1f3e5946d7a0befc3b13a28322aa98b59d82262c48e6b58775" gracePeriod=2 Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.081736 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l4jwd" Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.271987 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0180b55-7890-46ac-9c0f-f3a44b9e8075-utilities\") pod \"d0180b55-7890-46ac-9c0f-f3a44b9e8075\" (UID: \"d0180b55-7890-46ac-9c0f-f3a44b9e8075\") " Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.272067 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0180b55-7890-46ac-9c0f-f3a44b9e8075-catalog-content\") pod \"d0180b55-7890-46ac-9c0f-f3a44b9e8075\" (UID: \"d0180b55-7890-46ac-9c0f-f3a44b9e8075\") " Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.272116 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxffx\" (UniqueName: \"kubernetes.io/projected/d0180b55-7890-46ac-9c0f-f3a44b9e8075-kube-api-access-qxffx\") pod \"d0180b55-7890-46ac-9c0f-f3a44b9e8075\" (UID: \"d0180b55-7890-46ac-9c0f-f3a44b9e8075\") " Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.273102 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0180b55-7890-46ac-9c0f-f3a44b9e8075-utilities" (OuterVolumeSpecName: "utilities") pod "d0180b55-7890-46ac-9c0f-f3a44b9e8075" (UID: "d0180b55-7890-46ac-9c0f-f3a44b9e8075"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.278468 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0180b55-7890-46ac-9c0f-f3a44b9e8075-kube-api-access-qxffx" (OuterVolumeSpecName: "kube-api-access-qxffx") pod "d0180b55-7890-46ac-9c0f-f3a44b9e8075" (UID: "d0180b55-7890-46ac-9c0f-f3a44b9e8075"). InnerVolumeSpecName "kube-api-access-qxffx". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.323434 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0180b55-7890-46ac-9c0f-f3a44b9e8075-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0180b55-7890-46ac-9c0f-f3a44b9e8075" (UID: "d0180b55-7890-46ac-9c0f-f3a44b9e8075"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.374950 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0180b55-7890-46ac-9c0f-f3a44b9e8075-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.375255 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0180b55-7890-46ac-9c0f-f3a44b9e8075-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.375268 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxffx\" (UniqueName: \"kubernetes.io/projected/d0180b55-7890-46ac-9c0f-f3a44b9e8075-kube-api-access-qxffx\") on node \"crc\" DevicePath \"\"" Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.503940 4991 generic.go:334] "Generic (PLEG): container finished" podID="d0180b55-7890-46ac-9c0f-f3a44b9e8075" containerID="44fc583c131bbc1f3e5946d7a0befc3b13a28322aa98b59d82262c48e6b58775" exitCode=0 Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.504038 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4jwd" event={"ID":"d0180b55-7890-46ac-9c0f-f3a44b9e8075","Type":"ContainerDied","Data":"44fc583c131bbc1f3e5946d7a0befc3b13a28322aa98b59d82262c48e6b58775"} Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.504048 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l4jwd" Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.504069 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4jwd" event={"ID":"d0180b55-7890-46ac-9c0f-f3a44b9e8075","Type":"ContainerDied","Data":"a8f2b53fae27dfbb1c4df34d293635594b6a730a44e71c0bc90e0cec81addefb"} Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.504091 4991 scope.go:117] "RemoveContainer" containerID="44fc583c131bbc1f3e5946d7a0befc3b13a28322aa98b59d82262c48e6b58775" Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.535277 4991 scope.go:117] "RemoveContainer" containerID="2efacceaede55e151e79ddbfbbc986e4c4a7905cbc52687f1085c0f572adac86" Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.543154 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l4jwd"] Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.553651 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l4jwd"] Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.585745 4991 scope.go:117] "RemoveContainer" containerID="cf0d54e1b7ee9f7e6fc64ffcc0c6162f7175fc3f586f078ac316a5ac98c23827" Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.619827 4991 scope.go:117] "RemoveContainer" containerID="44fc583c131bbc1f3e5946d7a0befc3b13a28322aa98b59d82262c48e6b58775" Sep 29 12:17:29 crc kubenswrapper[4991]: E0929 12:17:29.620432 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44fc583c131bbc1f3e5946d7a0befc3b13a28322aa98b59d82262c48e6b58775\": container with ID starting with 44fc583c131bbc1f3e5946d7a0befc3b13a28322aa98b59d82262c48e6b58775 not found: ID does not exist" containerID="44fc583c131bbc1f3e5946d7a0befc3b13a28322aa98b59d82262c48e6b58775" Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.620472 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44fc583c131bbc1f3e5946d7a0befc3b13a28322aa98b59d82262c48e6b58775"} err="failed to get container status \"44fc583c131bbc1f3e5946d7a0befc3b13a28322aa98b59d82262c48e6b58775\": rpc error: code = NotFound desc = could not find container \"44fc583c131bbc1f3e5946d7a0befc3b13a28322aa98b59d82262c48e6b58775\": container with ID starting with 44fc583c131bbc1f3e5946d7a0befc3b13a28322aa98b59d82262c48e6b58775 not found: ID does not exist" Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.620497 4991 scope.go:117] "RemoveContainer" containerID="2efacceaede55e151e79ddbfbbc986e4c4a7905cbc52687f1085c0f572adac86" Sep 29 12:17:29 crc kubenswrapper[4991]: E0929 12:17:29.620888 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2efacceaede55e151e79ddbfbbc986e4c4a7905cbc52687f1085c0f572adac86\": container with ID starting with 2efacceaede55e151e79ddbfbbc986e4c4a7905cbc52687f1085c0f572adac86 not found: ID does not exist" containerID="2efacceaede55e151e79ddbfbbc986e4c4a7905cbc52687f1085c0f572adac86" Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.620908 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2efacceaede55e151e79ddbfbbc986e4c4a7905cbc52687f1085c0f572adac86"} err="failed to get container status \"2efacceaede55e151e79ddbfbbc986e4c4a7905cbc52687f1085c0f572adac86\": rpc error: code = NotFound desc = could not find 
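The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triples above (and the identical run at 12:15:13 earlier) look alarming but appear to be a benign race: the kubelet asks CRI-O about a container that the runtime has already pruned along with the pod sandbox, so the status call answers NotFound and the kubelet moves on. A sketch of the usual way that class of error is tolerated (generic gRPC status handling; this is not the kubelet's actual source):

```go
// Why "DeleteContainer returned error" is followed by normal progress:
// a NotFound from the runtime means the container is already gone, which
// is the desired end state, so it can be treated as success.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for a CRI RemoveContainer round trip
// (hypothetical helper for this sketch).
func removeContainer(id string) error {
	return status.Error(codes.NotFound, "could not find container "+id)
}

func main() {
	err := removeContainer("2efacceaede5")
	if status.Code(err) == codes.NotFound {
		fmt.Println("already removed, treating as success:", err)
		return
	}
	if err != nil {
		fmt.Println("real failure:", err)
	}
}
```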
container \"2efacceaede55e151e79ddbfbbc986e4c4a7905cbc52687f1085c0f572adac86\": container with ID starting with 2efacceaede55e151e79ddbfbbc986e4c4a7905cbc52687f1085c0f572adac86 not found: ID does not exist" Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.620920 4991 scope.go:117] "RemoveContainer" containerID="cf0d54e1b7ee9f7e6fc64ffcc0c6162f7175fc3f586f078ac316a5ac98c23827" Sep 29 12:17:29 crc kubenswrapper[4991]: E0929 12:17:29.621236 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf0d54e1b7ee9f7e6fc64ffcc0c6162f7175fc3f586f078ac316a5ac98c23827\": container with ID starting with cf0d54e1b7ee9f7e6fc64ffcc0c6162f7175fc3f586f078ac316a5ac98c23827 not found: ID does not exist" containerID="cf0d54e1b7ee9f7e6fc64ffcc0c6162f7175fc3f586f078ac316a5ac98c23827" Sep 29 12:17:29 crc kubenswrapper[4991]: I0929 12:17:29.621273 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf0d54e1b7ee9f7e6fc64ffcc0c6162f7175fc3f586f078ac316a5ac98c23827"} err="failed to get container status \"cf0d54e1b7ee9f7e6fc64ffcc0c6162f7175fc3f586f078ac316a5ac98c23827\": rpc error: code = NotFound desc = could not find container \"cf0d54e1b7ee9f7e6fc64ffcc0c6162f7175fc3f586f078ac316a5ac98c23827\": container with ID starting with cf0d54e1b7ee9f7e6fc64ffcc0c6162f7175fc3f586f078ac316a5ac98c23827 not found: ID does not exist" Sep 29 12:17:30 crc kubenswrapper[4991]: I0929 12:17:30.959103 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0180b55-7890-46ac-9c0f-f3a44b9e8075" path="/var/lib/kubelet/pods/d0180b55-7890-46ac-9c0f-f3a44b9e8075/volumes" Sep 29 12:17:37 crc kubenswrapper[4991]: I0929 12:17:37.947323 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 12:17:37 crc kubenswrapper[4991]: I0929 12:17:37.948101 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 12:17:37 crc kubenswrapper[4991]: I0929 12:17:37.948179 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 12:17:37 crc kubenswrapper[4991]: I0929 12:17:37.949695 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 12:17:37 crc kubenswrapper[4991]: I0929 12:17:37.949867 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" gracePeriod=600 Sep 29 12:17:38 crc kubenswrapper[4991]: E0929 12:17:38.100336 4991 
Sep 29 12:17:38 crc kubenswrapper[4991]: I0929 12:17:38.606918 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" exitCode=0 Sep 29 12:17:38 crc kubenswrapper[4991]: I0929 12:17:38.607045 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647"} Sep 29 12:17:38 crc kubenswrapper[4991]: I0929 12:17:38.607267 4991 scope.go:117] "RemoveContainer" containerID="7ba9b4afd115d123b9daa76f8feb8417990d516cdbf965e66553d0c134c00467" Sep 29 12:17:38 crc kubenswrapper[4991]: I0929 12:17:38.608531 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:17:38 crc kubenswrapper[4991]: E0929 12:17:38.609227 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:17:39 crc kubenswrapper[4991]: I0929 12:17:39.405386 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wq9m8"] Sep 29 12:17:39 crc kubenswrapper[4991]: E0929 12:17:39.406031 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0180b55-7890-46ac-9c0f-f3a44b9e8075" containerName="registry-server" Sep 29 12:17:39 crc kubenswrapper[4991]: I0929 12:17:39.406049 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0180b55-7890-46ac-9c0f-f3a44b9e8075" containerName="registry-server" Sep 29 12:17:39 crc kubenswrapper[4991]: E0929 12:17:39.406074 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0180b55-7890-46ac-9c0f-f3a44b9e8075" containerName="extract-content" Sep 29 12:17:39 crc kubenswrapper[4991]: I0929 12:17:39.406082 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0180b55-7890-46ac-9c0f-f3a44b9e8075" containerName="extract-content" Sep 29 12:17:39 crc kubenswrapper[4991]: E0929 12:17:39.406114 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0180b55-7890-46ac-9c0f-f3a44b9e8075" containerName="extract-utilities" Sep 29 12:17:39 crc kubenswrapper[4991]: I0929 12:17:39.406122 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0180b55-7890-46ac-9c0f-f3a44b9e8075" containerName="extract-utilities" Sep 29 12:17:39 crc kubenswrapper[4991]: I0929 12:17:39.406411 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0180b55-7890-46ac-9c0f-f3a44b9e8075" containerName="registry-server" Sep 29 12:17:39 crc kubenswrapper[4991]: I0929 12:17:39.410719 4991 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-wq9m8" Sep 29 12:17:39 crc kubenswrapper[4991]: I0929 12:17:39.441422 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wq9m8"] Sep 29 12:17:39 crc kubenswrapper[4991]: I0929 12:17:39.527009 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481ed99c-4c4e-40ec-82e0-985d7c0af580-utilities\") pod \"certified-operators-wq9m8\" (UID: \"481ed99c-4c4e-40ec-82e0-985d7c0af580\") " pod="openshift-marketplace/certified-operators-wq9m8" Sep 29 12:17:39 crc kubenswrapper[4991]: I0929 12:17:39.527445 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgr5j\" (UniqueName: \"kubernetes.io/projected/481ed99c-4c4e-40ec-82e0-985d7c0af580-kube-api-access-xgr5j\") pod \"certified-operators-wq9m8\" (UID: \"481ed99c-4c4e-40ec-82e0-985d7c0af580\") " pod="openshift-marketplace/certified-operators-wq9m8" Sep 29 12:17:39 crc kubenswrapper[4991]: I0929 12:17:39.527785 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481ed99c-4c4e-40ec-82e0-985d7c0af580-catalog-content\") pod \"certified-operators-wq9m8\" (UID: \"481ed99c-4c4e-40ec-82e0-985d7c0af580\") " pod="openshift-marketplace/certified-operators-wq9m8" Sep 29 12:17:39 crc kubenswrapper[4991]: I0929 12:17:39.630468 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481ed99c-4c4e-40ec-82e0-985d7c0af580-catalog-content\") pod \"certified-operators-wq9m8\" (UID: \"481ed99c-4c4e-40ec-82e0-985d7c0af580\") " pod="openshift-marketplace/certified-operators-wq9m8" Sep 29 12:17:39 crc kubenswrapper[4991]: I0929 12:17:39.630566 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481ed99c-4c4e-40ec-82e0-985d7c0af580-utilities\") pod \"certified-operators-wq9m8\" (UID: \"481ed99c-4c4e-40ec-82e0-985d7c0af580\") " pod="openshift-marketplace/certified-operators-wq9m8" Sep 29 12:17:39 crc kubenswrapper[4991]: I0929 12:17:39.630638 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgr5j\" (UniqueName: \"kubernetes.io/projected/481ed99c-4c4e-40ec-82e0-985d7c0af580-kube-api-access-xgr5j\") pod \"certified-operators-wq9m8\" (UID: \"481ed99c-4c4e-40ec-82e0-985d7c0af580\") " pod="openshift-marketplace/certified-operators-wq9m8" Sep 29 12:17:39 crc kubenswrapper[4991]: I0929 12:17:39.630999 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481ed99c-4c4e-40ec-82e0-985d7c0af580-catalog-content\") pod \"certified-operators-wq9m8\" (UID: \"481ed99c-4c4e-40ec-82e0-985d7c0af580\") " pod="openshift-marketplace/certified-operators-wq9m8" Sep 29 12:17:39 crc kubenswrapper[4991]: I0929 12:17:39.631417 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481ed99c-4c4e-40ec-82e0-985d7c0af580-utilities\") pod \"certified-operators-wq9m8\" (UID: \"481ed99c-4c4e-40ec-82e0-985d7c0af580\") " pod="openshift-marketplace/certified-operators-wq9m8" Sep 29 12:17:39 crc kubenswrapper[4991]: I0929 12:17:39.661199 4991 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xgr5j\" (UniqueName: \"kubernetes.io/projected/481ed99c-4c4e-40ec-82e0-985d7c0af580-kube-api-access-xgr5j\") pod \"certified-operators-wq9m8\" (UID: \"481ed99c-4c4e-40ec-82e0-985d7c0af580\") " pod="openshift-marketplace/certified-operators-wq9m8" Sep 29 12:17:39 crc kubenswrapper[4991]: I0929 12:17:39.781250 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wq9m8" Sep 29 12:17:40 crc kubenswrapper[4991]: I0929 12:17:40.353287 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wq9m8"] Sep 29 12:17:40 crc kubenswrapper[4991]: I0929 12:17:40.636411 4991 generic.go:334] "Generic (PLEG): container finished" podID="481ed99c-4c4e-40ec-82e0-985d7c0af580" containerID="d691c41022fd2dde3eae03404d141fcb233be346346c2fb962fe76a73157aa4f" exitCode=0 Sep 29 12:17:40 crc kubenswrapper[4991]: I0929 12:17:40.636495 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wq9m8" event={"ID":"481ed99c-4c4e-40ec-82e0-985d7c0af580","Type":"ContainerDied","Data":"d691c41022fd2dde3eae03404d141fcb233be346346c2fb962fe76a73157aa4f"} Sep 29 12:17:40 crc kubenswrapper[4991]: I0929 12:17:40.636716 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wq9m8" event={"ID":"481ed99c-4c4e-40ec-82e0-985d7c0af580","Type":"ContainerStarted","Data":"e3a2e1a0c9f45eca58862ae2de004b87ed0fcc2fc5168757b4a40de028aa323c"} Sep 29 12:17:42 crc kubenswrapper[4991]: I0929 12:17:42.671546 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wq9m8" event={"ID":"481ed99c-4c4e-40ec-82e0-985d7c0af580","Type":"ContainerStarted","Data":"5067762eda6fa61ba5e6fe08438556cc263a70eb1630b7d3a3971fa0c6997823"} Sep 29 12:17:43 crc kubenswrapper[4991]: I0929 12:17:43.681639 4991 generic.go:334] "Generic (PLEG): container finished" podID="481ed99c-4c4e-40ec-82e0-985d7c0af580" containerID="5067762eda6fa61ba5e6fe08438556cc263a70eb1630b7d3a3971fa0c6997823" exitCode=0 Sep 29 12:17:43 crc kubenswrapper[4991]: I0929 12:17:43.681693 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wq9m8" event={"ID":"481ed99c-4c4e-40ec-82e0-985d7c0af580","Type":"ContainerDied","Data":"5067762eda6fa61ba5e6fe08438556cc263a70eb1630b7d3a3971fa0c6997823"} Sep 29 12:17:45 crc kubenswrapper[4991]: I0929 12:17:45.711889 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wq9m8" event={"ID":"481ed99c-4c4e-40ec-82e0-985d7c0af580","Type":"ContainerStarted","Data":"8afcd636161dbf38645f2dcb99b7a10e0c3e9ddc92feb740d41ac5445ba41089"} Sep 29 12:17:45 crc kubenswrapper[4991]: I0929 12:17:45.749810 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wq9m8" podStartSLOduration=2.894567915 podStartE2EDuration="6.74978901s" podCreationTimestamp="2025-09-29 12:17:39 +0000 UTC" firstStartedPulling="2025-09-29 12:17:40.638071625 +0000 UTC m=+9596.493999653" lastFinishedPulling="2025-09-29 12:17:44.49329272 +0000 UTC m=+9600.349220748" observedRunningTime="2025-09-29 12:17:45.743593419 +0000 UTC m=+9601.599521457" watchObservedRunningTime="2025-09-29 12:17:45.74978901 +0000 UTC m=+9601.605717038" Sep 29 12:17:49 crc kubenswrapper[4991]: I0929 12:17:49.782484 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-wq9m8" Sep 29 12:17:49 crc kubenswrapper[4991]: I0929 12:17:49.783923 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wq9m8" Sep 29 12:17:49 crc kubenswrapper[4991]: I0929 12:17:49.857704 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wq9m8" Sep 29 12:17:50 crc kubenswrapper[4991]: I0929 12:17:50.855272 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wq9m8" Sep 29 12:17:50 crc kubenswrapper[4991]: I0929 12:17:50.917968 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wq9m8"] Sep 29 12:17:50 crc kubenswrapper[4991]: I0929 12:17:50.926331 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:17:50 crc kubenswrapper[4991]: E0929 12:17:50.926844 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:17:52 crc kubenswrapper[4991]: I0929 12:17:52.803943 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wq9m8" podUID="481ed99c-4c4e-40ec-82e0-985d7c0af580" containerName="registry-server" containerID="cri-o://8afcd636161dbf38645f2dcb99b7a10e0c3e9ddc92feb740d41ac5445ba41089" gracePeriod=2 Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.321217 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wq9m8" Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.460244 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481ed99c-4c4e-40ec-82e0-985d7c0af580-utilities\") pod \"481ed99c-4c4e-40ec-82e0-985d7c0af580\" (UID: \"481ed99c-4c4e-40ec-82e0-985d7c0af580\") " Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.460347 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgr5j\" (UniqueName: \"kubernetes.io/projected/481ed99c-4c4e-40ec-82e0-985d7c0af580-kube-api-access-xgr5j\") pod \"481ed99c-4c4e-40ec-82e0-985d7c0af580\" (UID: \"481ed99c-4c4e-40ec-82e0-985d7c0af580\") " Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.460417 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481ed99c-4c4e-40ec-82e0-985d7c0af580-catalog-content\") pod \"481ed99c-4c4e-40ec-82e0-985d7c0af580\" (UID: \"481ed99c-4c4e-40ec-82e0-985d7c0af580\") " Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.461521 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/481ed99c-4c4e-40ec-82e0-985d7c0af580-utilities" (OuterVolumeSpecName: "utilities") pod "481ed99c-4c4e-40ec-82e0-985d7c0af580" (UID: "481ed99c-4c4e-40ec-82e0-985d7c0af580"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.466319 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/481ed99c-4c4e-40ec-82e0-985d7c0af580-kube-api-access-xgr5j" (OuterVolumeSpecName: "kube-api-access-xgr5j") pod "481ed99c-4c4e-40ec-82e0-985d7c0af580" (UID: "481ed99c-4c4e-40ec-82e0-985d7c0af580"). InnerVolumeSpecName "kube-api-access-xgr5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.507902 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/481ed99c-4c4e-40ec-82e0-985d7c0af580-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "481ed99c-4c4e-40ec-82e0-985d7c0af580" (UID: "481ed99c-4c4e-40ec-82e0-985d7c0af580"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.563638 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481ed99c-4c4e-40ec-82e0-985d7c0af580-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.563701 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgr5j\" (UniqueName: \"kubernetes.io/projected/481ed99c-4c4e-40ec-82e0-985d7c0af580-kube-api-access-xgr5j\") on node \"crc\" DevicePath \"\"" Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.563717 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481ed99c-4c4e-40ec-82e0-985d7c0af580-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.815878 4991 generic.go:334] "Generic (PLEG): container finished" podID="481ed99c-4c4e-40ec-82e0-985d7c0af580" containerID="8afcd636161dbf38645f2dcb99b7a10e0c3e9ddc92feb740d41ac5445ba41089" exitCode=0 Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.815926 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wq9m8" event={"ID":"481ed99c-4c4e-40ec-82e0-985d7c0af580","Type":"ContainerDied","Data":"8afcd636161dbf38645f2dcb99b7a10e0c3e9ddc92feb740d41ac5445ba41089"} Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.815973 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wq9m8" event={"ID":"481ed99c-4c4e-40ec-82e0-985d7c0af580","Type":"ContainerDied","Data":"e3a2e1a0c9f45eca58862ae2de004b87ed0fcc2fc5168757b4a40de028aa323c"} Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.815995 4991 scope.go:117] "RemoveContainer" containerID="8afcd636161dbf38645f2dcb99b7a10e0c3e9ddc92feb740d41ac5445ba41089" Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.817799 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wq9m8" Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.854713 4991 scope.go:117] "RemoveContainer" containerID="5067762eda6fa61ba5e6fe08438556cc263a70eb1630b7d3a3971fa0c6997823" Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.879697 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wq9m8"] Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.890657 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wq9m8"] Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.896574 4991 scope.go:117] "RemoveContainer" containerID="d691c41022fd2dde3eae03404d141fcb233be346346c2fb962fe76a73157aa4f" Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.949263 4991 scope.go:117] "RemoveContainer" containerID="8afcd636161dbf38645f2dcb99b7a10e0c3e9ddc92feb740d41ac5445ba41089" Sep 29 12:17:53 crc kubenswrapper[4991]: E0929 12:17:53.949741 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8afcd636161dbf38645f2dcb99b7a10e0c3e9ddc92feb740d41ac5445ba41089\": container with ID starting with 8afcd636161dbf38645f2dcb99b7a10e0c3e9ddc92feb740d41ac5445ba41089 not found: ID does not exist" containerID="8afcd636161dbf38645f2dcb99b7a10e0c3e9ddc92feb740d41ac5445ba41089" Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.949794 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8afcd636161dbf38645f2dcb99b7a10e0c3e9ddc92feb740d41ac5445ba41089"} err="failed to get container status \"8afcd636161dbf38645f2dcb99b7a10e0c3e9ddc92feb740d41ac5445ba41089\": rpc error: code = NotFound desc = could not find container \"8afcd636161dbf38645f2dcb99b7a10e0c3e9ddc92feb740d41ac5445ba41089\": container with ID starting with 8afcd636161dbf38645f2dcb99b7a10e0c3e9ddc92feb740d41ac5445ba41089 not found: ID does not exist" Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.949820 4991 scope.go:117] "RemoveContainer" containerID="5067762eda6fa61ba5e6fe08438556cc263a70eb1630b7d3a3971fa0c6997823" Sep 29 12:17:53 crc kubenswrapper[4991]: E0929 12:17:53.950122 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5067762eda6fa61ba5e6fe08438556cc263a70eb1630b7d3a3971fa0c6997823\": container with ID starting with 5067762eda6fa61ba5e6fe08438556cc263a70eb1630b7d3a3971fa0c6997823 not found: ID does not exist" containerID="5067762eda6fa61ba5e6fe08438556cc263a70eb1630b7d3a3971fa0c6997823" Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.950146 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5067762eda6fa61ba5e6fe08438556cc263a70eb1630b7d3a3971fa0c6997823"} err="failed to get container status \"5067762eda6fa61ba5e6fe08438556cc263a70eb1630b7d3a3971fa0c6997823\": rpc error: code = NotFound desc = could not find container \"5067762eda6fa61ba5e6fe08438556cc263a70eb1630b7d3a3971fa0c6997823\": container with ID starting with 5067762eda6fa61ba5e6fe08438556cc263a70eb1630b7d3a3971fa0c6997823 not found: ID does not exist" Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.950164 4991 scope.go:117] "RemoveContainer" containerID="d691c41022fd2dde3eae03404d141fcb233be346346c2fb962fe76a73157aa4f" Sep 29 12:17:53 crc kubenswrapper[4991]: E0929 12:17:53.950527 4991 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d691c41022fd2dde3eae03404d141fcb233be346346c2fb962fe76a73157aa4f\": container with ID starting with d691c41022fd2dde3eae03404d141fcb233be346346c2fb962fe76a73157aa4f not found: ID does not exist" containerID="d691c41022fd2dde3eae03404d141fcb233be346346c2fb962fe76a73157aa4f" Sep 29 12:17:53 crc kubenswrapper[4991]: I0929 12:17:53.950559 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d691c41022fd2dde3eae03404d141fcb233be346346c2fb962fe76a73157aa4f"} err="failed to get container status \"d691c41022fd2dde3eae03404d141fcb233be346346c2fb962fe76a73157aa4f\": rpc error: code = NotFound desc = could not find container \"d691c41022fd2dde3eae03404d141fcb233be346346c2fb962fe76a73157aa4f\": container with ID starting with d691c41022fd2dde3eae03404d141fcb233be346346c2fb962fe76a73157aa4f not found: ID does not exist" Sep 29 12:17:54 crc kubenswrapper[4991]: I0929 12:17:54.947146 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="481ed99c-4c4e-40ec-82e0-985d7c0af580" path="/var/lib/kubelet/pods/481ed99c-4c4e-40ec-82e0-985d7c0af580/volumes" Sep 29 12:18:02 crc kubenswrapper[4991]: I0929 12:18:02.926102 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:18:02 crc kubenswrapper[4991]: E0929 12:18:02.926930 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:18:15 crc kubenswrapper[4991]: I0929 12:18:15.927366 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:18:15 crc kubenswrapper[4991]: E0929 12:18:15.928223 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:18:26 crc kubenswrapper[4991]: I0929 12:18:26.926741 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:18:26 crc kubenswrapper[4991]: E0929 12:18:26.927517 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:18:40 crc kubenswrapper[4991]: I0929 12:18:40.926158 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:18:40 crc kubenswrapper[4991]: E0929 12:18:40.927209 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:18:51 crc kubenswrapper[4991]: I0929 12:18:51.927641 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:18:51 crc kubenswrapper[4991]: E0929 12:18:51.928476 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:19:04 crc kubenswrapper[4991]: I0929 12:19:04.937599 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:19:04 crc kubenswrapper[4991]: E0929 12:19:04.938819 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:19:16 crc kubenswrapper[4991]: I0929 12:19:16.930325 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:19:16 crc kubenswrapper[4991]: E0929 12:19:16.932086 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:19:31 crc kubenswrapper[4991]: I0929 12:19:31.927783 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:19:31 crc kubenswrapper[4991]: E0929 12:19:31.928514 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:19:43 crc kubenswrapper[4991]: I0929 12:19:43.925755 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:19:43 crc kubenswrapper[4991]: E0929 12:19:43.926512 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:19:54 crc kubenswrapper[4991]: I0929 12:19:54.939852 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:19:54 crc kubenswrapper[4991]: E0929 12:19:54.940785 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:20:06 crc kubenswrapper[4991]: I0929 12:20:06.927100 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:20:06 crc kubenswrapper[4991]: E0929 12:20:06.927939 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:20:18 crc kubenswrapper[4991]: I0929 12:20:18.927241 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:20:18 crc kubenswrapper[4991]: E0929 12:20:18.928183 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:20:33 crc kubenswrapper[4991]: I0929 12:20:33.926504 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:20:33 crc kubenswrapper[4991]: E0929 12:20:33.927232 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:20:44 crc kubenswrapper[4991]: I0929 12:20:44.937892 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:20:44 crc kubenswrapper[4991]: E0929 12:20:44.938747 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" 
podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:20:57 crc kubenswrapper[4991]: I0929 12:20:57.926457 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:20:57 crc kubenswrapper[4991]: E0929 12:20:57.928424 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:21:09 crc kubenswrapper[4991]: I0929 12:21:09.927158 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:21:09 crc kubenswrapper[4991]: E0929 12:21:09.928099 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:21:21 crc kubenswrapper[4991]: I0929 12:21:21.926848 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:21:21 crc kubenswrapper[4991]: E0929 12:21:21.927662 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:21:36 crc kubenswrapper[4991]: I0929 12:21:36.926862 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:21:36 crc kubenswrapper[4991]: E0929 12:21:36.928070 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:21:48 crc kubenswrapper[4991]: I0929 12:21:48.926935 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:21:48 crc kubenswrapper[4991]: E0929 12:21:48.929973 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:22:02 crc kubenswrapper[4991]: I0929 12:22:02.926893 4991 scope.go:117] "RemoveContainer" 
containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:22:02 crc kubenswrapper[4991]: E0929 12:22:02.928714 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:22:16 crc kubenswrapper[4991]: I0929 12:22:16.927106 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:22:16 crc kubenswrapper[4991]: E0929 12:22:16.928090 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:22:28 crc kubenswrapper[4991]: I0929 12:22:28.927558 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:22:28 crc kubenswrapper[4991]: E0929 12:22:28.928483 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:22:43 crc kubenswrapper[4991]: I0929 12:22:43.926754 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:22:45 crc kubenswrapper[4991]: I0929 12:22:45.064695 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"d0610ed590b2dc9f43b36c56ae7410446103a75e1b4f396f1ae20a906757b2c5"} Sep 29 12:25:07 crc kubenswrapper[4991]: I0929 12:25:07.947010 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 12:25:07 crc kubenswrapper[4991]: I0929 12:25:07.947592 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 12:25:37 crc kubenswrapper[4991]: I0929 12:25:37.947114 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 12:25:37 crc 
kubenswrapper[4991]: I0929 12:25:37.947602 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 12:26:07 crc kubenswrapper[4991]: I0929 12:26:07.946843 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 12:26:07 crc kubenswrapper[4991]: I0929 12:26:07.947364 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 12:26:07 crc kubenswrapper[4991]: I0929 12:26:07.947411 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 12:26:07 crc kubenswrapper[4991]: I0929 12:26:07.948279 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d0610ed590b2dc9f43b36c56ae7410446103a75e1b4f396f1ae20a906757b2c5"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 12:26:07 crc kubenswrapper[4991]: I0929 12:26:07.948346 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://d0610ed590b2dc9f43b36c56ae7410446103a75e1b4f396f1ae20a906757b2c5" gracePeriod=600 Sep 29 12:26:08 crc kubenswrapper[4991]: I0929 12:26:08.363653 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="d0610ed590b2dc9f43b36c56ae7410446103a75e1b4f396f1ae20a906757b2c5" exitCode=0 Sep 29 12:26:08 crc kubenswrapper[4991]: I0929 12:26:08.363740 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"d0610ed590b2dc9f43b36c56ae7410446103a75e1b4f396f1ae20a906757b2c5"} Sep 29 12:26:08 crc kubenswrapper[4991]: I0929 12:26:08.364015 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a"} Sep 29 12:26:08 crc kubenswrapper[4991]: I0929 12:26:08.364038 4991 scope.go:117] "RemoveContainer" containerID="06b8165dcf7b870c45a9d940b687e5f26b6ef9be8d5e2afc997074f33c318647" Sep 29 12:27:21 crc kubenswrapper[4991]: I0929 12:27:21.499607 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rz7gj"] Sep 29 12:27:21 crc kubenswrapper[4991]: E0929 12:27:21.500862 4991 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="481ed99c-4c4e-40ec-82e0-985d7c0af580" containerName="extract-content" Sep 29 12:27:21 crc kubenswrapper[4991]: I0929 12:27:21.500884 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="481ed99c-4c4e-40ec-82e0-985d7c0af580" containerName="extract-content" Sep 29 12:27:21 crc kubenswrapper[4991]: E0929 12:27:21.500937 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481ed99c-4c4e-40ec-82e0-985d7c0af580" containerName="extract-utilities" Sep 29 12:27:21 crc kubenswrapper[4991]: I0929 12:27:21.500969 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="481ed99c-4c4e-40ec-82e0-985d7c0af580" containerName="extract-utilities" Sep 29 12:27:21 crc kubenswrapper[4991]: E0929 12:27:21.500997 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481ed99c-4c4e-40ec-82e0-985d7c0af580" containerName="registry-server" Sep 29 12:27:21 crc kubenswrapper[4991]: I0929 12:27:21.501006 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="481ed99c-4c4e-40ec-82e0-985d7c0af580" containerName="registry-server" Sep 29 12:27:21 crc kubenswrapper[4991]: I0929 12:27:21.501295 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="481ed99c-4c4e-40ec-82e0-985d7c0af580" containerName="registry-server" Sep 29 12:27:21 crc kubenswrapper[4991]: I0929 12:27:21.503654 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rz7gj" Sep 29 12:27:21 crc kubenswrapper[4991]: I0929 12:27:21.513576 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rz7gj"] Sep 29 12:27:21 crc kubenswrapper[4991]: I0929 12:27:21.662475 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnj5d\" (UniqueName: \"kubernetes.io/projected/eb09b866-3fb7-49e0-9adb-fa45e8550639-kube-api-access-rnj5d\") pod \"community-operators-rz7gj\" (UID: \"eb09b866-3fb7-49e0-9adb-fa45e8550639\") " pod="openshift-marketplace/community-operators-rz7gj" Sep 29 12:27:21 crc kubenswrapper[4991]: I0929 12:27:21.662581 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb09b866-3fb7-49e0-9adb-fa45e8550639-catalog-content\") pod \"community-operators-rz7gj\" (UID: \"eb09b866-3fb7-49e0-9adb-fa45e8550639\") " pod="openshift-marketplace/community-operators-rz7gj" Sep 29 12:27:21 crc kubenswrapper[4991]: I0929 12:27:21.662673 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb09b866-3fb7-49e0-9adb-fa45e8550639-utilities\") pod \"community-operators-rz7gj\" (UID: \"eb09b866-3fb7-49e0-9adb-fa45e8550639\") " pod="openshift-marketplace/community-operators-rz7gj" Sep 29 12:27:21 crc kubenswrapper[4991]: I0929 12:27:21.765290 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnj5d\" (UniqueName: \"kubernetes.io/projected/eb09b866-3fb7-49e0-9adb-fa45e8550639-kube-api-access-rnj5d\") pod \"community-operators-rz7gj\" (UID: \"eb09b866-3fb7-49e0-9adb-fa45e8550639\") " pod="openshift-marketplace/community-operators-rz7gj" Sep 29 12:27:21 crc kubenswrapper[4991]: I0929 12:27:21.765376 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/eb09b866-3fb7-49e0-9adb-fa45e8550639-catalog-content\") pod \"community-operators-rz7gj\" (UID: \"eb09b866-3fb7-49e0-9adb-fa45e8550639\") " pod="openshift-marketplace/community-operators-rz7gj" Sep 29 12:27:21 crc kubenswrapper[4991]: I0929 12:27:21.765441 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb09b866-3fb7-49e0-9adb-fa45e8550639-utilities\") pod \"community-operators-rz7gj\" (UID: \"eb09b866-3fb7-49e0-9adb-fa45e8550639\") " pod="openshift-marketplace/community-operators-rz7gj" Sep 29 12:27:21 crc kubenswrapper[4991]: I0929 12:27:21.766207 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb09b866-3fb7-49e0-9adb-fa45e8550639-utilities\") pod \"community-operators-rz7gj\" (UID: \"eb09b866-3fb7-49e0-9adb-fa45e8550639\") " pod="openshift-marketplace/community-operators-rz7gj" Sep 29 12:27:21 crc kubenswrapper[4991]: I0929 12:27:21.766212 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb09b866-3fb7-49e0-9adb-fa45e8550639-catalog-content\") pod \"community-operators-rz7gj\" (UID: \"eb09b866-3fb7-49e0-9adb-fa45e8550639\") " pod="openshift-marketplace/community-operators-rz7gj" Sep 29 12:27:22 crc kubenswrapper[4991]: I0929 12:27:22.448580 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnj5d\" (UniqueName: \"kubernetes.io/projected/eb09b866-3fb7-49e0-9adb-fa45e8550639-kube-api-access-rnj5d\") pod \"community-operators-rz7gj\" (UID: \"eb09b866-3fb7-49e0-9adb-fa45e8550639\") " pod="openshift-marketplace/community-operators-rz7gj" Sep 29 12:27:22 crc kubenswrapper[4991]: I0929 12:27:22.738649 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rz7gj" Sep 29 12:27:23 crc kubenswrapper[4991]: I0929 12:27:23.221414 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rz7gj"] Sep 29 12:27:24 crc kubenswrapper[4991]: I0929 12:27:24.171119 4991 generic.go:334] "Generic (PLEG): container finished" podID="eb09b866-3fb7-49e0-9adb-fa45e8550639" containerID="2ec1a7a7d63adca71a73874f834c9ac75f238814c10486b2e9220ade888a031b" exitCode=0 Sep 29 12:27:24 crc kubenswrapper[4991]: I0929 12:27:24.171223 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rz7gj" event={"ID":"eb09b866-3fb7-49e0-9adb-fa45e8550639","Type":"ContainerDied","Data":"2ec1a7a7d63adca71a73874f834c9ac75f238814c10486b2e9220ade888a031b"} Sep 29 12:27:24 crc kubenswrapper[4991]: I0929 12:27:24.171394 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rz7gj" event={"ID":"eb09b866-3fb7-49e0-9adb-fa45e8550639","Type":"ContainerStarted","Data":"bbe6460ccbcb5e7a50cd918e3bf27dab8212476a062e820413f61d1c2b2a0c7c"} Sep 29 12:27:24 crc kubenswrapper[4991]: I0929 12:27:24.174105 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 29 12:27:25 crc kubenswrapper[4991]: I0929 12:27:25.187234 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rz7gj" event={"ID":"eb09b866-3fb7-49e0-9adb-fa45e8550639","Type":"ContainerStarted","Data":"9fb27b3a5a60f7e41eecbb1dcf6f54e05efab4fefe6c3118ef84110e638d95ea"} Sep 29 12:27:26 crc kubenswrapper[4991]: I0929 12:27:26.199796 4991 generic.go:334] "Generic (PLEG): container finished" podID="eb09b866-3fb7-49e0-9adb-fa45e8550639" containerID="9fb27b3a5a60f7e41eecbb1dcf6f54e05efab4fefe6c3118ef84110e638d95ea" exitCode=0 Sep 29 12:27:26 crc kubenswrapper[4991]: I0929 12:27:26.199821 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rz7gj" event={"ID":"eb09b866-3fb7-49e0-9adb-fa45e8550639","Type":"ContainerDied","Data":"9fb27b3a5a60f7e41eecbb1dcf6f54e05efab4fefe6c3118ef84110e638d95ea"} Sep 29 12:27:27 crc kubenswrapper[4991]: I0929 12:27:27.214401 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rz7gj" event={"ID":"eb09b866-3fb7-49e0-9adb-fa45e8550639","Type":"ContainerStarted","Data":"c7ccc8f3e813d279a0b012423223da2867890375aead4f0b925c34881b939032"} Sep 29 12:27:27 crc kubenswrapper[4991]: I0929 12:27:27.245108 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rz7gj" podStartSLOduration=3.817664068 podStartE2EDuration="6.24508544s" podCreationTimestamp="2025-09-29 12:27:21 +0000 UTC" firstStartedPulling="2025-09-29 12:27:24.173874366 +0000 UTC m=+10180.029802394" lastFinishedPulling="2025-09-29 12:27:26.601295738 +0000 UTC m=+10182.457223766" observedRunningTime="2025-09-29 12:27:27.233093647 +0000 UTC m=+10183.089021685" watchObservedRunningTime="2025-09-29 12:27:27.24508544 +0000 UTC m=+10183.101013468" Sep 29 12:27:32 crc kubenswrapper[4991]: I0929 12:27:32.739075 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rz7gj" Sep 29 12:27:32 crc kubenswrapper[4991]: I0929 12:27:32.739676 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-rz7gj" Sep 29 12:27:32 crc kubenswrapper[4991]: I0929 12:27:32.798466 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rz7gj" Sep 29 12:27:33 crc kubenswrapper[4991]: I0929 12:27:33.323394 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rz7gj" Sep 29 12:27:33 crc kubenswrapper[4991]: I0929 12:27:33.378035 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rz7gj"] Sep 29 12:27:35 crc kubenswrapper[4991]: I0929 12:27:35.295139 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rz7gj" podUID="eb09b866-3fb7-49e0-9adb-fa45e8550639" containerName="registry-server" containerID="cri-o://c7ccc8f3e813d279a0b012423223da2867890375aead4f0b925c34881b939032" gracePeriod=2 Sep 29 12:27:35 crc kubenswrapper[4991]: I0929 12:27:35.797220 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rz7gj" Sep 29 12:27:35 crc kubenswrapper[4991]: I0929 12:27:35.913741 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb09b866-3fb7-49e0-9adb-fa45e8550639-utilities\") pod \"eb09b866-3fb7-49e0-9adb-fa45e8550639\" (UID: \"eb09b866-3fb7-49e0-9adb-fa45e8550639\") " Sep 29 12:27:35 crc kubenswrapper[4991]: I0929 12:27:35.913797 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb09b866-3fb7-49e0-9adb-fa45e8550639-catalog-content\") pod \"eb09b866-3fb7-49e0-9adb-fa45e8550639\" (UID: \"eb09b866-3fb7-49e0-9adb-fa45e8550639\") " Sep 29 12:27:35 crc kubenswrapper[4991]: I0929 12:27:35.914080 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnj5d\" (UniqueName: \"kubernetes.io/projected/eb09b866-3fb7-49e0-9adb-fa45e8550639-kube-api-access-rnj5d\") pod \"eb09b866-3fb7-49e0-9adb-fa45e8550639\" (UID: \"eb09b866-3fb7-49e0-9adb-fa45e8550639\") " Sep 29 12:27:35 crc kubenswrapper[4991]: I0929 12:27:35.916821 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb09b866-3fb7-49e0-9adb-fa45e8550639-utilities" (OuterVolumeSpecName: "utilities") pod "eb09b866-3fb7-49e0-9adb-fa45e8550639" (UID: "eb09b866-3fb7-49e0-9adb-fa45e8550639"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:27:35 crc kubenswrapper[4991]: I0929 12:27:35.921682 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb09b866-3fb7-49e0-9adb-fa45e8550639-kube-api-access-rnj5d" (OuterVolumeSpecName: "kube-api-access-rnj5d") pod "eb09b866-3fb7-49e0-9adb-fa45e8550639" (UID: "eb09b866-3fb7-49e0-9adb-fa45e8550639"). InnerVolumeSpecName "kube-api-access-rnj5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 12:27:35 crc kubenswrapper[4991]: I0929 12:27:35.966036 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb09b866-3fb7-49e0-9adb-fa45e8550639-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb09b866-3fb7-49e0-9adb-fa45e8550639" (UID: "eb09b866-3fb7-49e0-9adb-fa45e8550639"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:27:36 crc kubenswrapper[4991]: I0929 12:27:36.018230 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnj5d\" (UniqueName: \"kubernetes.io/projected/eb09b866-3fb7-49e0-9adb-fa45e8550639-kube-api-access-rnj5d\") on node \"crc\" DevicePath \"\"" Sep 29 12:27:36 crc kubenswrapper[4991]: I0929 12:27:36.018268 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb09b866-3fb7-49e0-9adb-fa45e8550639-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 12:27:36 crc kubenswrapper[4991]: I0929 12:27:36.018281 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb09b866-3fb7-49e0-9adb-fa45e8550639-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 12:27:36 crc kubenswrapper[4991]: I0929 12:27:36.309919 4991 generic.go:334] "Generic (PLEG): container finished" podID="eb09b866-3fb7-49e0-9adb-fa45e8550639" containerID="c7ccc8f3e813d279a0b012423223da2867890375aead4f0b925c34881b939032" exitCode=0 Sep 29 12:27:36 crc kubenswrapper[4991]: I0929 12:27:36.309984 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rz7gj" event={"ID":"eb09b866-3fb7-49e0-9adb-fa45e8550639","Type":"ContainerDied","Data":"c7ccc8f3e813d279a0b012423223da2867890375aead4f0b925c34881b939032"} Sep 29 12:27:36 crc kubenswrapper[4991]: I0929 12:27:36.310020 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rz7gj" event={"ID":"eb09b866-3fb7-49e0-9adb-fa45e8550639","Type":"ContainerDied","Data":"bbe6460ccbcb5e7a50cd918e3bf27dab8212476a062e820413f61d1c2b2a0c7c"} Sep 29 12:27:36 crc kubenswrapper[4991]: I0929 12:27:36.310025 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rz7gj" Sep 29 12:27:36 crc kubenswrapper[4991]: I0929 12:27:36.310042 4991 scope.go:117] "RemoveContainer" containerID="c7ccc8f3e813d279a0b012423223da2867890375aead4f0b925c34881b939032" Sep 29 12:27:36 crc kubenswrapper[4991]: I0929 12:27:36.333485 4991 scope.go:117] "RemoveContainer" containerID="9fb27b3a5a60f7e41eecbb1dcf6f54e05efab4fefe6c3118ef84110e638d95ea" Sep 29 12:27:36 crc kubenswrapper[4991]: I0929 12:27:36.363898 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rz7gj"] Sep 29 12:27:36 crc kubenswrapper[4991]: I0929 12:27:36.371518 4991 scope.go:117] "RemoveContainer" containerID="2ec1a7a7d63adca71a73874f834c9ac75f238814c10486b2e9220ade888a031b" Sep 29 12:27:36 crc kubenswrapper[4991]: I0929 12:27:36.384579 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rz7gj"] Sep 29 12:27:36 crc kubenswrapper[4991]: I0929 12:27:36.413381 4991 scope.go:117] "RemoveContainer" containerID="c7ccc8f3e813d279a0b012423223da2867890375aead4f0b925c34881b939032" Sep 29 12:27:36 crc kubenswrapper[4991]: E0929 12:27:36.413919 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7ccc8f3e813d279a0b012423223da2867890375aead4f0b925c34881b939032\": container with ID starting with c7ccc8f3e813d279a0b012423223da2867890375aead4f0b925c34881b939032 not found: ID does not exist" containerID="c7ccc8f3e813d279a0b012423223da2867890375aead4f0b925c34881b939032" Sep 29 12:27:36 crc kubenswrapper[4991]: I0929 12:27:36.414279 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7ccc8f3e813d279a0b012423223da2867890375aead4f0b925c34881b939032"} err="failed to get container status \"c7ccc8f3e813d279a0b012423223da2867890375aead4f0b925c34881b939032\": rpc error: code = NotFound desc = could not find container \"c7ccc8f3e813d279a0b012423223da2867890375aead4f0b925c34881b939032\": container with ID starting with c7ccc8f3e813d279a0b012423223da2867890375aead4f0b925c34881b939032 not found: ID does not exist" Sep 29 12:27:36 crc kubenswrapper[4991]: I0929 12:27:36.414308 4991 scope.go:117] "RemoveContainer" containerID="9fb27b3a5a60f7e41eecbb1dcf6f54e05efab4fefe6c3118ef84110e638d95ea" Sep 29 12:27:36 crc kubenswrapper[4991]: E0929 12:27:36.414687 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fb27b3a5a60f7e41eecbb1dcf6f54e05efab4fefe6c3118ef84110e638d95ea\": container with ID starting with 9fb27b3a5a60f7e41eecbb1dcf6f54e05efab4fefe6c3118ef84110e638d95ea not found: ID does not exist" containerID="9fb27b3a5a60f7e41eecbb1dcf6f54e05efab4fefe6c3118ef84110e638d95ea" Sep 29 12:27:36 crc kubenswrapper[4991]: I0929 12:27:36.414715 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fb27b3a5a60f7e41eecbb1dcf6f54e05efab4fefe6c3118ef84110e638d95ea"} err="failed to get container status \"9fb27b3a5a60f7e41eecbb1dcf6f54e05efab4fefe6c3118ef84110e638d95ea\": rpc error: code = NotFound desc = could not find container \"9fb27b3a5a60f7e41eecbb1dcf6f54e05efab4fefe6c3118ef84110e638d95ea\": container with ID starting with 9fb27b3a5a60f7e41eecbb1dcf6f54e05efab4fefe6c3118ef84110e638d95ea not found: ID does not exist" Sep 29 12:27:36 crc kubenswrapper[4991]: I0929 12:27:36.414732 4991 scope.go:117] "RemoveContainer" 
containerID="2ec1a7a7d63adca71a73874f834c9ac75f238814c10486b2e9220ade888a031b" Sep 29 12:27:36 crc kubenswrapper[4991]: E0929 12:27:36.415095 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ec1a7a7d63adca71a73874f834c9ac75f238814c10486b2e9220ade888a031b\": container with ID starting with 2ec1a7a7d63adca71a73874f834c9ac75f238814c10486b2e9220ade888a031b not found: ID does not exist" containerID="2ec1a7a7d63adca71a73874f834c9ac75f238814c10486b2e9220ade888a031b" Sep 29 12:27:36 crc kubenswrapper[4991]: I0929 12:27:36.415122 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ec1a7a7d63adca71a73874f834c9ac75f238814c10486b2e9220ade888a031b"} err="failed to get container status \"2ec1a7a7d63adca71a73874f834c9ac75f238814c10486b2e9220ade888a031b\": rpc error: code = NotFound desc = could not find container \"2ec1a7a7d63adca71a73874f834c9ac75f238814c10486b2e9220ade888a031b\": container with ID starting with 2ec1a7a7d63adca71a73874f834c9ac75f238814c10486b2e9220ade888a031b not found: ID does not exist" Sep 29 12:27:36 crc kubenswrapper[4991]: I0929 12:27:36.942619 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb09b866-3fb7-49e0-9adb-fa45e8550639" path="/var/lib/kubelet/pods/eb09b866-3fb7-49e0-9adb-fa45e8550639/volumes" Sep 29 12:28:37 crc kubenswrapper[4991]: I0929 12:28:37.947639 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 12:28:37 crc kubenswrapper[4991]: I0929 12:28:37.948227 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 12:28:41 crc kubenswrapper[4991]: I0929 12:28:41.332486 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7r85s"] Sep 29 12:28:41 crc kubenswrapper[4991]: E0929 12:28:41.334283 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb09b866-3fb7-49e0-9adb-fa45e8550639" containerName="registry-server" Sep 29 12:28:41 crc kubenswrapper[4991]: I0929 12:28:41.334309 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb09b866-3fb7-49e0-9adb-fa45e8550639" containerName="registry-server" Sep 29 12:28:41 crc kubenswrapper[4991]: E0929 12:28:41.334343 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb09b866-3fb7-49e0-9adb-fa45e8550639" containerName="extract-utilities" Sep 29 12:28:41 crc kubenswrapper[4991]: I0929 12:28:41.334357 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb09b866-3fb7-49e0-9adb-fa45e8550639" containerName="extract-utilities" Sep 29 12:28:41 crc kubenswrapper[4991]: E0929 12:28:41.334390 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb09b866-3fb7-49e0-9adb-fa45e8550639" containerName="extract-content" Sep 29 12:28:41 crc kubenswrapper[4991]: I0929 12:28:41.334401 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb09b866-3fb7-49e0-9adb-fa45e8550639" containerName="extract-content" Sep 29 12:28:41 crc kubenswrapper[4991]: I0929 
12:28:41.334855 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb09b866-3fb7-49e0-9adb-fa45e8550639" containerName="registry-server" Sep 29 12:28:41 crc kubenswrapper[4991]: I0929 12:28:41.341627 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7r85s" Sep 29 12:28:41 crc kubenswrapper[4991]: I0929 12:28:41.367021 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7r85s"] Sep 29 12:28:41 crc kubenswrapper[4991]: I0929 12:28:41.491468 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8deaa8-f4e3-4dd2-b46e-26443f5a9157-catalog-content\") pod \"certified-operators-7r85s\" (UID: \"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157\") " pod="openshift-marketplace/certified-operators-7r85s" Sep 29 12:28:41 crc kubenswrapper[4991]: I0929 12:28:41.491656 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6rc7\" (UniqueName: \"kubernetes.io/projected/3b8deaa8-f4e3-4dd2-b46e-26443f5a9157-kube-api-access-w6rc7\") pod \"certified-operators-7r85s\" (UID: \"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157\") " pod="openshift-marketplace/certified-operators-7r85s" Sep 29 12:28:41 crc kubenswrapper[4991]: I0929 12:28:41.491718 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8deaa8-f4e3-4dd2-b46e-26443f5a9157-utilities\") pod \"certified-operators-7r85s\" (UID: \"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157\") " pod="openshift-marketplace/certified-operators-7r85s" Sep 29 12:28:41 crc kubenswrapper[4991]: I0929 12:28:41.594433 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6rc7\" (UniqueName: \"kubernetes.io/projected/3b8deaa8-f4e3-4dd2-b46e-26443f5a9157-kube-api-access-w6rc7\") pod \"certified-operators-7r85s\" (UID: \"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157\") " pod="openshift-marketplace/certified-operators-7r85s" Sep 29 12:28:41 crc kubenswrapper[4991]: I0929 12:28:41.594540 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8deaa8-f4e3-4dd2-b46e-26443f5a9157-utilities\") pod \"certified-operators-7r85s\" (UID: \"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157\") " pod="openshift-marketplace/certified-operators-7r85s" Sep 29 12:28:41 crc kubenswrapper[4991]: I0929 12:28:41.594631 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8deaa8-f4e3-4dd2-b46e-26443f5a9157-catalog-content\") pod \"certified-operators-7r85s\" (UID: \"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157\") " pod="openshift-marketplace/certified-operators-7r85s" Sep 29 12:28:41 crc kubenswrapper[4991]: I0929 12:28:41.595195 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8deaa8-f4e3-4dd2-b46e-26443f5a9157-utilities\") pod \"certified-operators-7r85s\" (UID: \"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157\") " pod="openshift-marketplace/certified-operators-7r85s" Sep 29 12:28:41 crc kubenswrapper[4991]: I0929 12:28:41.595292 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3b8deaa8-f4e3-4dd2-b46e-26443f5a9157-catalog-content\") pod \"certified-operators-7r85s\" (UID: \"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157\") " pod="openshift-marketplace/certified-operators-7r85s" Sep 29 12:28:41 crc kubenswrapper[4991]: I0929 12:28:41.633035 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6rc7\" (UniqueName: \"kubernetes.io/projected/3b8deaa8-f4e3-4dd2-b46e-26443f5a9157-kube-api-access-w6rc7\") pod \"certified-operators-7r85s\" (UID: \"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157\") " pod="openshift-marketplace/certified-operators-7r85s" Sep 29 12:28:41 crc kubenswrapper[4991]: I0929 12:28:41.676996 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7r85s" Sep 29 12:28:42 crc kubenswrapper[4991]: I0929 12:28:42.231315 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7r85s"] Sep 29 12:28:43 crc kubenswrapper[4991]: I0929 12:28:43.121476 4991 generic.go:334] "Generic (PLEG): container finished" podID="3b8deaa8-f4e3-4dd2-b46e-26443f5a9157" containerID="ecc8c4670aae74c3785d679f7ddb6d0f9f2cea8fdc576b564ca132ee04d1c67a" exitCode=0 Sep 29 12:28:43 crc kubenswrapper[4991]: I0929 12:28:43.121583 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7r85s" event={"ID":"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157","Type":"ContainerDied","Data":"ecc8c4670aae74c3785d679f7ddb6d0f9f2cea8fdc576b564ca132ee04d1c67a"} Sep 29 12:28:43 crc kubenswrapper[4991]: I0929 12:28:43.122019 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7r85s" event={"ID":"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157","Type":"ContainerStarted","Data":"2b54245e5fdcef6931f8925b07607aa285bb7fe06749ae16cc18f0bf32f98a3b"} Sep 29 12:28:45 crc kubenswrapper[4991]: I0929 12:28:45.146861 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7r85s" event={"ID":"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157","Type":"ContainerStarted","Data":"4f9ba3c520fc9b331c2df5fd6b9ae7b6ce108e715286654ff9bb0af7048564cb"} Sep 29 12:28:46 crc kubenswrapper[4991]: I0929 12:28:46.159867 4991 generic.go:334] "Generic (PLEG): container finished" podID="3b8deaa8-f4e3-4dd2-b46e-26443f5a9157" containerID="4f9ba3c520fc9b331c2df5fd6b9ae7b6ce108e715286654ff9bb0af7048564cb" exitCode=0 Sep 29 12:28:46 crc kubenswrapper[4991]: I0929 12:28:46.159913 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7r85s" event={"ID":"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157","Type":"ContainerDied","Data":"4f9ba3c520fc9b331c2df5fd6b9ae7b6ce108e715286654ff9bb0af7048564cb"} Sep 29 12:28:48 crc kubenswrapper[4991]: I0929 12:28:48.188293 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7r85s" event={"ID":"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157","Type":"ContainerStarted","Data":"b6ed13481465a4ea4659aac4e705a005b9f3e2e9f3bb707fefded3d18692ba4b"} Sep 29 12:28:48 crc kubenswrapper[4991]: I0929 12:28:48.213854 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7r85s" podStartSLOduration=2.64096112 podStartE2EDuration="7.21383049s" podCreationTimestamp="2025-09-29 12:28:41 +0000 UTC" firstStartedPulling="2025-09-29 12:28:43.123687218 +0000 UTC m=+10258.979615246" lastFinishedPulling="2025-09-29 12:28:47.696556588 
+0000 UTC m=+10263.552484616" observedRunningTime="2025-09-29 12:28:48.213429709 +0000 UTC m=+10264.069357747" watchObservedRunningTime="2025-09-29 12:28:48.21383049 +0000 UTC m=+10264.069758538" Sep 29 12:28:51 crc kubenswrapper[4991]: I0929 12:28:51.119771 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ws9hg"] Sep 29 12:28:51 crc kubenswrapper[4991]: I0929 12:28:51.125075 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ws9hg" Sep 29 12:28:51 crc kubenswrapper[4991]: I0929 12:28:51.144206 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ws9hg"] Sep 29 12:28:51 crc kubenswrapper[4991]: I0929 12:28:51.225867 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa5bbced-2568-4bd0-a2b7-a3b783a933e4-catalog-content\") pod \"redhat-operators-ws9hg\" (UID: \"fa5bbced-2568-4bd0-a2b7-a3b783a933e4\") " pod="openshift-marketplace/redhat-operators-ws9hg" Sep 29 12:28:51 crc kubenswrapper[4991]: I0929 12:28:51.226258 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa5bbced-2568-4bd0-a2b7-a3b783a933e4-utilities\") pod \"redhat-operators-ws9hg\" (UID: \"fa5bbced-2568-4bd0-a2b7-a3b783a933e4\") " pod="openshift-marketplace/redhat-operators-ws9hg" Sep 29 12:28:51 crc kubenswrapper[4991]: I0929 12:28:51.226296 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whz69\" (UniqueName: \"kubernetes.io/projected/fa5bbced-2568-4bd0-a2b7-a3b783a933e4-kube-api-access-whz69\") pod \"redhat-operators-ws9hg\" (UID: \"fa5bbced-2568-4bd0-a2b7-a3b783a933e4\") " pod="openshift-marketplace/redhat-operators-ws9hg" Sep 29 12:28:51 crc kubenswrapper[4991]: I0929 12:28:51.331681 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa5bbced-2568-4bd0-a2b7-a3b783a933e4-catalog-content\") pod \"redhat-operators-ws9hg\" (UID: \"fa5bbced-2568-4bd0-a2b7-a3b783a933e4\") " pod="openshift-marketplace/redhat-operators-ws9hg" Sep 29 12:28:51 crc kubenswrapper[4991]: I0929 12:28:51.331750 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa5bbced-2568-4bd0-a2b7-a3b783a933e4-utilities\") pod \"redhat-operators-ws9hg\" (UID: \"fa5bbced-2568-4bd0-a2b7-a3b783a933e4\") " pod="openshift-marketplace/redhat-operators-ws9hg" Sep 29 12:28:51 crc kubenswrapper[4991]: I0929 12:28:51.331782 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whz69\" (UniqueName: \"kubernetes.io/projected/fa5bbced-2568-4bd0-a2b7-a3b783a933e4-kube-api-access-whz69\") pod \"redhat-operators-ws9hg\" (UID: \"fa5bbced-2568-4bd0-a2b7-a3b783a933e4\") " pod="openshift-marketplace/redhat-operators-ws9hg" Sep 29 12:28:51 crc kubenswrapper[4991]: I0929 12:28:51.332858 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa5bbced-2568-4bd0-a2b7-a3b783a933e4-catalog-content\") pod \"redhat-operators-ws9hg\" (UID: \"fa5bbced-2568-4bd0-a2b7-a3b783a933e4\") " pod="openshift-marketplace/redhat-operators-ws9hg" Sep 29 12:28:51 crc kubenswrapper[4991]: 
I0929 12:28:51.333444 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa5bbced-2568-4bd0-a2b7-a3b783a933e4-utilities\") pod \"redhat-operators-ws9hg\" (UID: \"fa5bbced-2568-4bd0-a2b7-a3b783a933e4\") " pod="openshift-marketplace/redhat-operators-ws9hg" Sep 29 12:28:51 crc kubenswrapper[4991]: I0929 12:28:51.370655 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whz69\" (UniqueName: \"kubernetes.io/projected/fa5bbced-2568-4bd0-a2b7-a3b783a933e4-kube-api-access-whz69\") pod \"redhat-operators-ws9hg\" (UID: \"fa5bbced-2568-4bd0-a2b7-a3b783a933e4\") " pod="openshift-marketplace/redhat-operators-ws9hg" Sep 29 12:28:51 crc kubenswrapper[4991]: I0929 12:28:51.463668 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ws9hg" Sep 29 12:28:51 crc kubenswrapper[4991]: I0929 12:28:51.678250 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7r85s" Sep 29 12:28:51 crc kubenswrapper[4991]: I0929 12:28:51.682802 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7r85s" Sep 29 12:28:51 crc kubenswrapper[4991]: I0929 12:28:51.749685 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7r85s" Sep 29 12:28:51 crc kubenswrapper[4991]: W0929 12:28:51.982967 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa5bbced_2568_4bd0_a2b7_a3b783a933e4.slice/crio-0f1783ecddfd1fe2bf3332e45a192858225ef39084d99efd65c865a8d64a6e21 WatchSource:0}: Error finding container 0f1783ecddfd1fe2bf3332e45a192858225ef39084d99efd65c865a8d64a6e21: Status 404 returned error can't find the container with id 0f1783ecddfd1fe2bf3332e45a192858225ef39084d99efd65c865a8d64a6e21 Sep 29 12:28:51 crc kubenswrapper[4991]: I0929 12:28:51.993574 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ws9hg"] Sep 29 12:28:52 crc kubenswrapper[4991]: I0929 12:28:52.229980 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ws9hg" event={"ID":"fa5bbced-2568-4bd0-a2b7-a3b783a933e4","Type":"ContainerStarted","Data":"c50669e7acf068df57f29aea7ca64a26c09991299bb12c9d08f9da34947da9bf"} Sep 29 12:28:52 crc kubenswrapper[4991]: I0929 12:28:52.230350 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ws9hg" event={"ID":"fa5bbced-2568-4bd0-a2b7-a3b783a933e4","Type":"ContainerStarted","Data":"0f1783ecddfd1fe2bf3332e45a192858225ef39084d99efd65c865a8d64a6e21"} Sep 29 12:28:53 crc kubenswrapper[4991]: I0929 12:28:53.241927 4991 generic.go:334] "Generic (PLEG): container finished" podID="fa5bbced-2568-4bd0-a2b7-a3b783a933e4" containerID="c50669e7acf068df57f29aea7ca64a26c09991299bb12c9d08f9da34947da9bf" exitCode=0 Sep 29 12:28:53 crc kubenswrapper[4991]: I0929 12:28:53.242066 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ws9hg" event={"ID":"fa5bbced-2568-4bd0-a2b7-a3b783a933e4","Type":"ContainerDied","Data":"c50669e7acf068df57f29aea7ca64a26c09991299bb12c9d08f9da34947da9bf"} Sep 29 12:28:53 crc kubenswrapper[4991]: I0929 12:28:53.308092 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-7r85s" Sep 29 12:28:56 crc kubenswrapper[4991]: I0929 12:28:56.277435 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ws9hg" event={"ID":"fa5bbced-2568-4bd0-a2b7-a3b783a933e4","Type":"ContainerStarted","Data":"0aeaad1d115c5a85b29cd1cb4c2e75e5123041a5862a02d8e1ad69bbef0224bc"} Sep 29 12:28:56 crc kubenswrapper[4991]: I0929 12:28:56.910258 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bnrk7"] Sep 29 12:28:56 crc kubenswrapper[4991]: I0929 12:28:56.914514 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bnrk7" Sep 29 12:28:56 crc kubenswrapper[4991]: I0929 12:28:56.942404 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bnrk7"] Sep 29 12:28:56 crc kubenswrapper[4991]: I0929 12:28:56.969420 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85a72ff7-fb58-4ac9-8557-2d19a8822359-catalog-content\") pod \"redhat-marketplace-bnrk7\" (UID: \"85a72ff7-fb58-4ac9-8557-2d19a8822359\") " pod="openshift-marketplace/redhat-marketplace-bnrk7" Sep 29 12:28:56 crc kubenswrapper[4991]: I0929 12:28:56.969771 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w8w5\" (UniqueName: \"kubernetes.io/projected/85a72ff7-fb58-4ac9-8557-2d19a8822359-kube-api-access-4w8w5\") pod \"redhat-marketplace-bnrk7\" (UID: \"85a72ff7-fb58-4ac9-8557-2d19a8822359\") " pod="openshift-marketplace/redhat-marketplace-bnrk7" Sep 29 12:28:56 crc kubenswrapper[4991]: I0929 12:28:56.969965 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85a72ff7-fb58-4ac9-8557-2d19a8822359-utilities\") pod \"redhat-marketplace-bnrk7\" (UID: \"85a72ff7-fb58-4ac9-8557-2d19a8822359\") " pod="openshift-marketplace/redhat-marketplace-bnrk7" Sep 29 12:28:57 crc kubenswrapper[4991]: I0929 12:28:57.073014 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8w5\" (UniqueName: \"kubernetes.io/projected/85a72ff7-fb58-4ac9-8557-2d19a8822359-kube-api-access-4w8w5\") pod \"redhat-marketplace-bnrk7\" (UID: \"85a72ff7-fb58-4ac9-8557-2d19a8822359\") " pod="openshift-marketplace/redhat-marketplace-bnrk7" Sep 29 12:28:57 crc kubenswrapper[4991]: I0929 12:28:57.073394 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85a72ff7-fb58-4ac9-8557-2d19a8822359-utilities\") pod \"redhat-marketplace-bnrk7\" (UID: \"85a72ff7-fb58-4ac9-8557-2d19a8822359\") " pod="openshift-marketplace/redhat-marketplace-bnrk7" Sep 29 12:28:57 crc kubenswrapper[4991]: I0929 12:28:57.073569 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85a72ff7-fb58-4ac9-8557-2d19a8822359-catalog-content\") pod \"redhat-marketplace-bnrk7\" (UID: \"85a72ff7-fb58-4ac9-8557-2d19a8822359\") " pod="openshift-marketplace/redhat-marketplace-bnrk7" Sep 29 12:28:57 crc kubenswrapper[4991]: I0929 12:28:57.074186 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/85a72ff7-fb58-4ac9-8557-2d19a8822359-catalog-content\") pod \"redhat-marketplace-bnrk7\" (UID: \"85a72ff7-fb58-4ac9-8557-2d19a8822359\") " pod="openshift-marketplace/redhat-marketplace-bnrk7" Sep 29 12:28:57 crc kubenswrapper[4991]: I0929 12:28:57.074260 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85a72ff7-fb58-4ac9-8557-2d19a8822359-utilities\") pod \"redhat-marketplace-bnrk7\" (UID: \"85a72ff7-fb58-4ac9-8557-2d19a8822359\") " pod="openshift-marketplace/redhat-marketplace-bnrk7" Sep 29 12:28:57 crc kubenswrapper[4991]: I0929 12:28:57.101591 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w8w5\" (UniqueName: \"kubernetes.io/projected/85a72ff7-fb58-4ac9-8557-2d19a8822359-kube-api-access-4w8w5\") pod \"redhat-marketplace-bnrk7\" (UID: \"85a72ff7-fb58-4ac9-8557-2d19a8822359\") " pod="openshift-marketplace/redhat-marketplace-bnrk7" Sep 29 12:28:57 crc kubenswrapper[4991]: I0929 12:28:57.243042 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bnrk7" Sep 29 12:28:57 crc kubenswrapper[4991]: I0929 12:28:57.890141 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bnrk7"] Sep 29 12:29:00 crc kubenswrapper[4991]: E0929 12:29:00.798214 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa5bbced_2568_4bd0_a2b7_a3b783a933e4.slice/crio-conmon-0aeaad1d115c5a85b29cd1cb4c2e75e5123041a5862a02d8e1ad69bbef0224bc.scope\": RecentStats: unable to find data in memory cache]" Sep 29 12:29:01 crc kubenswrapper[4991]: I0929 12:29:01.330003 4991 generic.go:334] "Generic (PLEG): container finished" podID="85a72ff7-fb58-4ac9-8557-2d19a8822359" containerID="dbc5659432c2a279b674c48052492ce091aacca24b109010e051ca294d4aa5c1" exitCode=0 Sep 29 12:29:01 crc kubenswrapper[4991]: I0929 12:29:01.330108 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bnrk7" event={"ID":"85a72ff7-fb58-4ac9-8557-2d19a8822359","Type":"ContainerDied","Data":"dbc5659432c2a279b674c48052492ce091aacca24b109010e051ca294d4aa5c1"} Sep 29 12:29:01 crc kubenswrapper[4991]: I0929 12:29:01.330367 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bnrk7" event={"ID":"85a72ff7-fb58-4ac9-8557-2d19a8822359","Type":"ContainerStarted","Data":"fd613288a78d8861eb8936ed8920116b12cacb5f7b159266bada621854bea488"} Sep 29 12:29:01 crc kubenswrapper[4991]: I0929 12:29:01.334716 4991 generic.go:334] "Generic (PLEG): container finished" podID="fa5bbced-2568-4bd0-a2b7-a3b783a933e4" containerID="0aeaad1d115c5a85b29cd1cb4c2e75e5123041a5862a02d8e1ad69bbef0224bc" exitCode=0 Sep 29 12:29:01 crc kubenswrapper[4991]: I0929 12:29:01.334768 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ws9hg" event={"ID":"fa5bbced-2568-4bd0-a2b7-a3b783a933e4","Type":"ContainerDied","Data":"0aeaad1d115c5a85b29cd1cb4c2e75e5123041a5862a02d8e1ad69bbef0224bc"} Sep 29 12:29:01 crc kubenswrapper[4991]: I0929 12:29:01.696801 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7r85s"] Sep 29 12:29:01 crc kubenswrapper[4991]: I0929 12:29:01.697316 4991 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/certified-operators-7r85s" podUID="3b8deaa8-f4e3-4dd2-b46e-26443f5a9157" containerName="registry-server" containerID="cri-o://b6ed13481465a4ea4659aac4e705a005b9f3e2e9f3bb707fefded3d18692ba4b" gracePeriod=2 Sep 29 12:29:01 crc kubenswrapper[4991]: E0929 12:29:01.722345 4991 log.go:32] "ExecSync cmd from runtime service failed" err=< Sep 29 12:29:01 crc kubenswrapper[4991]: rpc error: code = Unknown desc = command error: setns `mnt`: Bad file descriptor Sep 29 12:29:01 crc kubenswrapper[4991]: fail startup Sep 29 12:29:01 crc kubenswrapper[4991]: , stdout: , stderr: , exit code -1 Sep 29 12:29:01 crc kubenswrapper[4991]: > containerID="b6ed13481465a4ea4659aac4e705a005b9f3e2e9f3bb707fefded3d18692ba4b" cmd=["grpc_health_probe","-addr=:50051"] Sep 29 12:29:01 crc kubenswrapper[4991]: E0929 12:29:01.722875 4991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6ed13481465a4ea4659aac4e705a005b9f3e2e9f3bb707fefded3d18692ba4b is running failed: container process not found" containerID="b6ed13481465a4ea4659aac4e705a005b9f3e2e9f3bb707fefded3d18692ba4b" cmd=["grpc_health_probe","-addr=:50051"] Sep 29 12:29:01 crc kubenswrapper[4991]: E0929 12:29:01.723263 4991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6ed13481465a4ea4659aac4e705a005b9f3e2e9f3bb707fefded3d18692ba4b is running failed: container process not found" containerID="b6ed13481465a4ea4659aac4e705a005b9f3e2e9f3bb707fefded3d18692ba4b" cmd=["grpc_health_probe","-addr=:50051"] Sep 29 12:29:01 crc kubenswrapper[4991]: E0929 12:29:01.723301 4991 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6ed13481465a4ea4659aac4e705a005b9f3e2e9f3bb707fefded3d18692ba4b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-7r85s" podUID="3b8deaa8-f4e3-4dd2-b46e-26443f5a9157" containerName="registry-server" Sep 29 12:29:01 crc kubenswrapper[4991]: I0929 12:29:01.767740 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-7r85s" podUID="3b8deaa8-f4e3-4dd2-b46e-26443f5a9157" containerName="registry-server" probeResult="failure" output="" Sep 29 12:29:02 crc kubenswrapper[4991]: I0929 12:29:02.348784 4991 generic.go:334] "Generic (PLEG): container finished" podID="3b8deaa8-f4e3-4dd2-b46e-26443f5a9157" containerID="b6ed13481465a4ea4659aac4e705a005b9f3e2e9f3bb707fefded3d18692ba4b" exitCode=0 Sep 29 12:29:02 crc kubenswrapper[4991]: I0929 12:29:02.349170 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7r85s" event={"ID":"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157","Type":"ContainerDied","Data":"b6ed13481465a4ea4659aac4e705a005b9f3e2e9f3bb707fefded3d18692ba4b"} Sep 29 12:29:03 crc kubenswrapper[4991]: I0929 12:29:03.010484 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7r85s" Sep 29 12:29:03 crc kubenswrapper[4991]: I0929 12:29:03.119384 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8deaa8-f4e3-4dd2-b46e-26443f5a9157-catalog-content\") pod \"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157\" (UID: \"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157\") " Sep 29 12:29:03 crc kubenswrapper[4991]: I0929 12:29:03.119587 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6rc7\" (UniqueName: \"kubernetes.io/projected/3b8deaa8-f4e3-4dd2-b46e-26443f5a9157-kube-api-access-w6rc7\") pod \"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157\" (UID: \"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157\") " Sep 29 12:29:03 crc kubenswrapper[4991]: I0929 12:29:03.119674 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8deaa8-f4e3-4dd2-b46e-26443f5a9157-utilities\") pod \"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157\" (UID: \"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157\") " Sep 29 12:29:03 crc kubenswrapper[4991]: I0929 12:29:03.121198 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b8deaa8-f4e3-4dd2-b46e-26443f5a9157-utilities" (OuterVolumeSpecName: "utilities") pod "3b8deaa8-f4e3-4dd2-b46e-26443f5a9157" (UID: "3b8deaa8-f4e3-4dd2-b46e-26443f5a9157"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:29:03 crc kubenswrapper[4991]: I0929 12:29:03.128080 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b8deaa8-f4e3-4dd2-b46e-26443f5a9157-kube-api-access-w6rc7" (OuterVolumeSpecName: "kube-api-access-w6rc7") pod "3b8deaa8-f4e3-4dd2-b46e-26443f5a9157" (UID: "3b8deaa8-f4e3-4dd2-b46e-26443f5a9157"). InnerVolumeSpecName "kube-api-access-w6rc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 12:29:03 crc kubenswrapper[4991]: I0929 12:29:03.184890 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b8deaa8-f4e3-4dd2-b46e-26443f5a9157-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b8deaa8-f4e3-4dd2-b46e-26443f5a9157" (UID: "3b8deaa8-f4e3-4dd2-b46e-26443f5a9157"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:29:03 crc kubenswrapper[4991]: I0929 12:29:03.223149 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6rc7\" (UniqueName: \"kubernetes.io/projected/3b8deaa8-f4e3-4dd2-b46e-26443f5a9157-kube-api-access-w6rc7\") on node \"crc\" DevicePath \"\"" Sep 29 12:29:03 crc kubenswrapper[4991]: I0929 12:29:03.223197 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8deaa8-f4e3-4dd2-b46e-26443f5a9157-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 12:29:03 crc kubenswrapper[4991]: I0929 12:29:03.223212 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8deaa8-f4e3-4dd2-b46e-26443f5a9157-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 12:29:03 crc kubenswrapper[4991]: I0929 12:29:03.362184 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7r85s" event={"ID":"3b8deaa8-f4e3-4dd2-b46e-26443f5a9157","Type":"ContainerDied","Data":"2b54245e5fdcef6931f8925b07607aa285bb7fe06749ae16cc18f0bf32f98a3b"} Sep 29 12:29:03 crc kubenswrapper[4991]: I0929 12:29:03.362246 4991 scope.go:117] "RemoveContainer" containerID="b6ed13481465a4ea4659aac4e705a005b9f3e2e9f3bb707fefded3d18692ba4b" Sep 29 12:29:03 crc kubenswrapper[4991]: I0929 12:29:03.362272 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7r85s" Sep 29 12:29:03 crc kubenswrapper[4991]: I0929 12:29:03.367180 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bnrk7" event={"ID":"85a72ff7-fb58-4ac9-8557-2d19a8822359","Type":"ContainerStarted","Data":"a7288c59d93d55081a4283793c6092a6a799f33272ce8af617d9235fecffb9e7"} Sep 29 12:29:03 crc kubenswrapper[4991]: I0929 12:29:03.384337 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ws9hg" event={"ID":"fa5bbced-2568-4bd0-a2b7-a3b783a933e4","Type":"ContainerStarted","Data":"4e7fbf2c88a16d3080534b0f1f5bcc6c8deeb637c2129ae271b646e081903323"} Sep 29 12:29:03 crc kubenswrapper[4991]: I0929 12:29:03.400688 4991 scope.go:117] "RemoveContainer" containerID="4f9ba3c520fc9b331c2df5fd6b9ae7b6ce108e715286654ff9bb0af7048564cb" Sep 29 12:29:03 crc kubenswrapper[4991]: I0929 12:29:03.430145 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7r85s"] Sep 29 12:29:03 crc kubenswrapper[4991]: I0929 12:29:03.431431 4991 scope.go:117] "RemoveContainer" containerID="ecc8c4670aae74c3785d679f7ddb6d0f9f2cea8fdc576b564ca132ee04d1c67a" Sep 29 12:29:03 crc kubenswrapper[4991]: I0929 12:29:03.446107 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7r85s"] Sep 29 12:29:03 crc kubenswrapper[4991]: I0929 12:29:03.458393 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ws9hg" podStartSLOduration=2.869557042 podStartE2EDuration="12.4583749s" podCreationTimestamp="2025-09-29 12:28:51 +0000 UTC" firstStartedPulling="2025-09-29 12:28:53.244335416 +0000 UTC m=+10269.100263444" lastFinishedPulling="2025-09-29 12:29:02.833153284 +0000 UTC m=+10278.689081302" observedRunningTime="2025-09-29 12:29:03.418808298 +0000 UTC m=+10279.274736346" watchObservedRunningTime="2025-09-29 12:29:03.4583749 +0000 UTC m=+10279.314302918" Sep 29 
12:29:04 crc kubenswrapper[4991]: I0929 12:29:04.396814 4991 generic.go:334] "Generic (PLEG): container finished" podID="85a72ff7-fb58-4ac9-8557-2d19a8822359" containerID="a7288c59d93d55081a4283793c6092a6a799f33272ce8af617d9235fecffb9e7" exitCode=0 Sep 29 12:29:04 crc kubenswrapper[4991]: I0929 12:29:04.396893 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bnrk7" event={"ID":"85a72ff7-fb58-4ac9-8557-2d19a8822359","Type":"ContainerDied","Data":"a7288c59d93d55081a4283793c6092a6a799f33272ce8af617d9235fecffb9e7"} Sep 29 12:29:04 crc kubenswrapper[4991]: I0929 12:29:04.941863 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b8deaa8-f4e3-4dd2-b46e-26443f5a9157" path="/var/lib/kubelet/pods/3b8deaa8-f4e3-4dd2-b46e-26443f5a9157/volumes" Sep 29 12:29:06 crc kubenswrapper[4991]: I0929 12:29:06.423201 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bnrk7" event={"ID":"85a72ff7-fb58-4ac9-8557-2d19a8822359","Type":"ContainerStarted","Data":"49e3d4a2b07b955f9cb46ccabaaa709f26c56dea8b8807bbd464278a7ba8f886"} Sep 29 12:29:06 crc kubenswrapper[4991]: I0929 12:29:06.442178 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bnrk7" podStartSLOduration=6.482438936 podStartE2EDuration="10.442152174s" podCreationTimestamp="2025-09-29 12:28:56 +0000 UTC" firstStartedPulling="2025-09-29 12:29:01.334368212 +0000 UTC m=+10277.190296240" lastFinishedPulling="2025-09-29 12:29:05.29408146 +0000 UTC m=+10281.150009478" observedRunningTime="2025-09-29 12:29:06.437477572 +0000 UTC m=+10282.293405610" watchObservedRunningTime="2025-09-29 12:29:06.442152174 +0000 UTC m=+10282.298080202" Sep 29 12:29:07 crc kubenswrapper[4991]: I0929 12:29:07.243223 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bnrk7" Sep 29 12:29:07 crc kubenswrapper[4991]: I0929 12:29:07.243288 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bnrk7" Sep 29 12:29:07 crc kubenswrapper[4991]: I0929 12:29:07.300508 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bnrk7" Sep 29 12:29:07 crc kubenswrapper[4991]: I0929 12:29:07.947621 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 12:29:07 crc kubenswrapper[4991]: I0929 12:29:07.948133 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 12:29:11 crc kubenswrapper[4991]: I0929 12:29:11.464727 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ws9hg" Sep 29 12:29:11 crc kubenswrapper[4991]: I0929 12:29:11.465044 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ws9hg" Sep 29 12:29:12 crc kubenswrapper[4991]: I0929 12:29:12.517090 4991 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ws9hg" podUID="fa5bbced-2568-4bd0-a2b7-a3b783a933e4" containerName="registry-server" probeResult="failure" output=< Sep 29 12:29:12 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s Sep 29 12:29:12 crc kubenswrapper[4991]: > Sep 29 12:29:17 crc kubenswrapper[4991]: I0929 12:29:17.300409 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bnrk7" Sep 29 12:29:17 crc kubenswrapper[4991]: I0929 12:29:17.358663 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bnrk7"] Sep 29 12:29:17 crc kubenswrapper[4991]: I0929 12:29:17.543670 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bnrk7" podUID="85a72ff7-fb58-4ac9-8557-2d19a8822359" containerName="registry-server" containerID="cri-o://49e3d4a2b07b955f9cb46ccabaaa709f26c56dea8b8807bbd464278a7ba8f886" gracePeriod=2 Sep 29 12:29:18 crc kubenswrapper[4991]: I0929 12:29:18.562636 4991 generic.go:334] "Generic (PLEG): container finished" podID="85a72ff7-fb58-4ac9-8557-2d19a8822359" containerID="49e3d4a2b07b955f9cb46ccabaaa709f26c56dea8b8807bbd464278a7ba8f886" exitCode=0 Sep 29 12:29:18 crc kubenswrapper[4991]: I0929 12:29:18.562688 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bnrk7" event={"ID":"85a72ff7-fb58-4ac9-8557-2d19a8822359","Type":"ContainerDied","Data":"49e3d4a2b07b955f9cb46ccabaaa709f26c56dea8b8807bbd464278a7ba8f886"} Sep 29 12:29:18 crc kubenswrapper[4991]: I0929 12:29:18.722039 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bnrk7" Sep 29 12:29:18 crc kubenswrapper[4991]: I0929 12:29:18.838505 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w8w5\" (UniqueName: \"kubernetes.io/projected/85a72ff7-fb58-4ac9-8557-2d19a8822359-kube-api-access-4w8w5\") pod \"85a72ff7-fb58-4ac9-8557-2d19a8822359\" (UID: \"85a72ff7-fb58-4ac9-8557-2d19a8822359\") " Sep 29 12:29:18 crc kubenswrapper[4991]: I0929 12:29:18.839089 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85a72ff7-fb58-4ac9-8557-2d19a8822359-utilities\") pod \"85a72ff7-fb58-4ac9-8557-2d19a8822359\" (UID: \"85a72ff7-fb58-4ac9-8557-2d19a8822359\") " Sep 29 12:29:18 crc kubenswrapper[4991]: I0929 12:29:18.839279 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85a72ff7-fb58-4ac9-8557-2d19a8822359-catalog-content\") pod \"85a72ff7-fb58-4ac9-8557-2d19a8822359\" (UID: \"85a72ff7-fb58-4ac9-8557-2d19a8822359\") " Sep 29 12:29:18 crc kubenswrapper[4991]: I0929 12:29:18.839689 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85a72ff7-fb58-4ac9-8557-2d19a8822359-utilities" (OuterVolumeSpecName: "utilities") pod "85a72ff7-fb58-4ac9-8557-2d19a8822359" (UID: "85a72ff7-fb58-4ac9-8557-2d19a8822359"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:29:18 crc kubenswrapper[4991]: I0929 12:29:18.840086 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85a72ff7-fb58-4ac9-8557-2d19a8822359-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 12:29:18 crc kubenswrapper[4991]: I0929 12:29:18.844749 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85a72ff7-fb58-4ac9-8557-2d19a8822359-kube-api-access-4w8w5" (OuterVolumeSpecName: "kube-api-access-4w8w5") pod "85a72ff7-fb58-4ac9-8557-2d19a8822359" (UID: "85a72ff7-fb58-4ac9-8557-2d19a8822359"). InnerVolumeSpecName "kube-api-access-4w8w5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 12:29:18 crc kubenswrapper[4991]: I0929 12:29:18.854251 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85a72ff7-fb58-4ac9-8557-2d19a8822359-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85a72ff7-fb58-4ac9-8557-2d19a8822359" (UID: "85a72ff7-fb58-4ac9-8557-2d19a8822359"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:29:18 crc kubenswrapper[4991]: I0929 12:29:18.942205 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85a72ff7-fb58-4ac9-8557-2d19a8822359-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 12:29:18 crc kubenswrapper[4991]: I0929 12:29:18.942241 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w8w5\" (UniqueName: \"kubernetes.io/projected/85a72ff7-fb58-4ac9-8557-2d19a8822359-kube-api-access-4w8w5\") on node \"crc\" DevicePath \"\"" Sep 29 12:29:19 crc kubenswrapper[4991]: I0929 12:29:19.576014 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bnrk7" event={"ID":"85a72ff7-fb58-4ac9-8557-2d19a8822359","Type":"ContainerDied","Data":"fd613288a78d8861eb8936ed8920116b12cacb5f7b159266bada621854bea488"} Sep 29 12:29:19 crc kubenswrapper[4991]: I0929 12:29:19.576071 4991 scope.go:117] "RemoveContainer" containerID="49e3d4a2b07b955f9cb46ccabaaa709f26c56dea8b8807bbd464278a7ba8f886" Sep 29 12:29:19 crc kubenswrapper[4991]: I0929 12:29:19.576079 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bnrk7" Sep 29 12:29:19 crc kubenswrapper[4991]: I0929 12:29:19.603383 4991 scope.go:117] "RemoveContainer" containerID="a7288c59d93d55081a4283793c6092a6a799f33272ce8af617d9235fecffb9e7" Sep 29 12:29:19 crc kubenswrapper[4991]: I0929 12:29:19.617760 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bnrk7"] Sep 29 12:29:19 crc kubenswrapper[4991]: I0929 12:29:19.623762 4991 scope.go:117] "RemoveContainer" containerID="dbc5659432c2a279b674c48052492ce091aacca24b109010e051ca294d4aa5c1" Sep 29 12:29:19 crc kubenswrapper[4991]: I0929 12:29:19.627898 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bnrk7"] Sep 29 12:29:20 crc kubenswrapper[4991]: I0929 12:29:20.939123 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85a72ff7-fb58-4ac9-8557-2d19a8822359" path="/var/lib/kubelet/pods/85a72ff7-fb58-4ac9-8557-2d19a8822359/volumes" Sep 29 12:29:22 crc kubenswrapper[4991]: I0929 12:29:22.522070 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ws9hg" podUID="fa5bbced-2568-4bd0-a2b7-a3b783a933e4" containerName="registry-server" probeResult="failure" output=< Sep 29 12:29:22 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s Sep 29 12:29:22 crc kubenswrapper[4991]: > Sep 29 12:29:31 crc kubenswrapper[4991]: I0929 12:29:31.513073 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ws9hg" Sep 29 12:29:31 crc kubenswrapper[4991]: I0929 12:29:31.575956 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ws9hg" Sep 29 12:29:31 crc kubenswrapper[4991]: I0929 12:29:31.750539 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ws9hg"] Sep 29 12:29:32 crc kubenswrapper[4991]: I0929 12:29:32.711929 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ws9hg" podUID="fa5bbced-2568-4bd0-a2b7-a3b783a933e4" containerName="registry-server" containerID="cri-o://4e7fbf2c88a16d3080534b0f1f5bcc6c8deeb637c2129ae271b646e081903323" gracePeriod=2 Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.325048 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ws9hg" Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.378427 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa5bbced-2568-4bd0-a2b7-a3b783a933e4-catalog-content\") pod \"fa5bbced-2568-4bd0-a2b7-a3b783a933e4\" (UID: \"fa5bbced-2568-4bd0-a2b7-a3b783a933e4\") " Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.474032 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa5bbced-2568-4bd0-a2b7-a3b783a933e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa5bbced-2568-4bd0-a2b7-a3b783a933e4" (UID: "fa5bbced-2568-4bd0-a2b7-a3b783a933e4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.480563 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whz69\" (UniqueName: \"kubernetes.io/projected/fa5bbced-2568-4bd0-a2b7-a3b783a933e4-kube-api-access-whz69\") pod \"fa5bbced-2568-4bd0-a2b7-a3b783a933e4\" (UID: \"fa5bbced-2568-4bd0-a2b7-a3b783a933e4\") " Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.480687 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa5bbced-2568-4bd0-a2b7-a3b783a933e4-utilities\") pod \"fa5bbced-2568-4bd0-a2b7-a3b783a933e4\" (UID: \"fa5bbced-2568-4bd0-a2b7-a3b783a933e4\") " Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.481395 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa5bbced-2568-4bd0-a2b7-a3b783a933e4-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.481618 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa5bbced-2568-4bd0-a2b7-a3b783a933e4-utilities" (OuterVolumeSpecName: "utilities") pod "fa5bbced-2568-4bd0-a2b7-a3b783a933e4" (UID: "fa5bbced-2568-4bd0-a2b7-a3b783a933e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.488397 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa5bbced-2568-4bd0-a2b7-a3b783a933e4-kube-api-access-whz69" (OuterVolumeSpecName: "kube-api-access-whz69") pod "fa5bbced-2568-4bd0-a2b7-a3b783a933e4" (UID: "fa5bbced-2568-4bd0-a2b7-a3b783a933e4"). InnerVolumeSpecName "kube-api-access-whz69". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.582412 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whz69\" (UniqueName: \"kubernetes.io/projected/fa5bbced-2568-4bd0-a2b7-a3b783a933e4-kube-api-access-whz69\") on node \"crc\" DevicePath \"\"" Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.582678 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa5bbced-2568-4bd0-a2b7-a3b783a933e4-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.724475 4991 generic.go:334] "Generic (PLEG): container finished" podID="fa5bbced-2568-4bd0-a2b7-a3b783a933e4" containerID="4e7fbf2c88a16d3080534b0f1f5bcc6c8deeb637c2129ae271b646e081903323" exitCode=0 Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.724526 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ws9hg" event={"ID":"fa5bbced-2568-4bd0-a2b7-a3b783a933e4","Type":"ContainerDied","Data":"4e7fbf2c88a16d3080534b0f1f5bcc6c8deeb637c2129ae271b646e081903323"} Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.724558 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ws9hg" event={"ID":"fa5bbced-2568-4bd0-a2b7-a3b783a933e4","Type":"ContainerDied","Data":"0f1783ecddfd1fe2bf3332e45a192858225ef39084d99efd65c865a8d64a6e21"} Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.724577 4991 scope.go:117] "RemoveContainer" containerID="4e7fbf2c88a16d3080534b0f1f5bcc6c8deeb637c2129ae271b646e081903323" Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.725171 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ws9hg" Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.765164 4991 scope.go:117] "RemoveContainer" containerID="0aeaad1d115c5a85b29cd1cb4c2e75e5123041a5862a02d8e1ad69bbef0224bc" Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.771707 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ws9hg"] Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.784603 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ws9hg"] Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.790771 4991 scope.go:117] "RemoveContainer" containerID="c50669e7acf068df57f29aea7ca64a26c09991299bb12c9d08f9da34947da9bf" Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.842434 4991 scope.go:117] "RemoveContainer" containerID="4e7fbf2c88a16d3080534b0f1f5bcc6c8deeb637c2129ae271b646e081903323" Sep 29 12:29:33 crc kubenswrapper[4991]: E0929 12:29:33.842968 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e7fbf2c88a16d3080534b0f1f5bcc6c8deeb637c2129ae271b646e081903323\": container with ID starting with 4e7fbf2c88a16d3080534b0f1f5bcc6c8deeb637c2129ae271b646e081903323 not found: ID does not exist" containerID="4e7fbf2c88a16d3080534b0f1f5bcc6c8deeb637c2129ae271b646e081903323" Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.843008 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e7fbf2c88a16d3080534b0f1f5bcc6c8deeb637c2129ae271b646e081903323"} err="failed to get container status \"4e7fbf2c88a16d3080534b0f1f5bcc6c8deeb637c2129ae271b646e081903323\": rpc error: code = NotFound desc = could not find container \"4e7fbf2c88a16d3080534b0f1f5bcc6c8deeb637c2129ae271b646e081903323\": container with ID starting with 4e7fbf2c88a16d3080534b0f1f5bcc6c8deeb637c2129ae271b646e081903323 not found: ID does not exist" Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.843033 4991 scope.go:117] "RemoveContainer" containerID="0aeaad1d115c5a85b29cd1cb4c2e75e5123041a5862a02d8e1ad69bbef0224bc" Sep 29 12:29:33 crc kubenswrapper[4991]: E0929 12:29:33.843458 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aeaad1d115c5a85b29cd1cb4c2e75e5123041a5862a02d8e1ad69bbef0224bc\": container with ID starting with 0aeaad1d115c5a85b29cd1cb4c2e75e5123041a5862a02d8e1ad69bbef0224bc not found: ID does not exist" containerID="0aeaad1d115c5a85b29cd1cb4c2e75e5123041a5862a02d8e1ad69bbef0224bc" Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.843485 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aeaad1d115c5a85b29cd1cb4c2e75e5123041a5862a02d8e1ad69bbef0224bc"} err="failed to get container status \"0aeaad1d115c5a85b29cd1cb4c2e75e5123041a5862a02d8e1ad69bbef0224bc\": rpc error: code = NotFound desc = could not find container \"0aeaad1d115c5a85b29cd1cb4c2e75e5123041a5862a02d8e1ad69bbef0224bc\": container with ID starting with 0aeaad1d115c5a85b29cd1cb4c2e75e5123041a5862a02d8e1ad69bbef0224bc not found: ID does not exist" Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.843503 4991 scope.go:117] "RemoveContainer" containerID="c50669e7acf068df57f29aea7ca64a26c09991299bb12c9d08f9da34947da9bf" Sep 29 12:29:33 crc kubenswrapper[4991]: E0929 12:29:33.843906 4991 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"c50669e7acf068df57f29aea7ca64a26c09991299bb12c9d08f9da34947da9bf\": container with ID starting with c50669e7acf068df57f29aea7ca64a26c09991299bb12c9d08f9da34947da9bf not found: ID does not exist" containerID="c50669e7acf068df57f29aea7ca64a26c09991299bb12c9d08f9da34947da9bf" Sep 29 12:29:33 crc kubenswrapper[4991]: I0929 12:29:33.843937 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c50669e7acf068df57f29aea7ca64a26c09991299bb12c9d08f9da34947da9bf"} err="failed to get container status \"c50669e7acf068df57f29aea7ca64a26c09991299bb12c9d08f9da34947da9bf\": rpc error: code = NotFound desc = could not find container \"c50669e7acf068df57f29aea7ca64a26c09991299bb12c9d08f9da34947da9bf\": container with ID starting with c50669e7acf068df57f29aea7ca64a26c09991299bb12c9d08f9da34947da9bf not found: ID does not exist" Sep 29 12:29:34 crc kubenswrapper[4991]: I0929 12:29:34.951145 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa5bbced-2568-4bd0-a2b7-a3b783a933e4" path="/var/lib/kubelet/pods/fa5bbced-2568-4bd0-a2b7-a3b783a933e4/volumes" Sep 29 12:29:37 crc kubenswrapper[4991]: I0929 12:29:37.946592 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 12:29:37 crc kubenswrapper[4991]: I0929 12:29:37.946904 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 12:29:37 crc kubenswrapper[4991]: I0929 12:29:37.946966 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 12:29:37 crc kubenswrapper[4991]: I0929 12:29:37.948204 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 12:29:37 crc kubenswrapper[4991]: I0929 12:29:37.948264 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" gracePeriod=600 Sep 29 12:29:38 crc kubenswrapper[4991]: E0929 12:29:38.069638 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:29:38 crc kubenswrapper[4991]: I0929 12:29:38.778701 4991 generic.go:334] "Generic 
(PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" exitCode=0 Sep 29 12:29:38 crc kubenswrapper[4991]: I0929 12:29:38.778766 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a"} Sep 29 12:29:38 crc kubenswrapper[4991]: I0929 12:29:38.778803 4991 scope.go:117] "RemoveContainer" containerID="d0610ed590b2dc9f43b36c56ae7410446103a75e1b4f396f1ae20a906757b2c5" Sep 29 12:29:38 crc kubenswrapper[4991]: I0929 12:29:38.779802 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:29:38 crc kubenswrapper[4991]: E0929 12:29:38.780426 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:29:52 crc kubenswrapper[4991]: I0929 12:29:52.927834 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:29:52 crc kubenswrapper[4991]: E0929 12:29:52.928676 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.195659 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319150-xr7th"] Sep 29 12:30:00 crc kubenswrapper[4991]: E0929 12:30:00.196867 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8deaa8-f4e3-4dd2-b46e-26443f5a9157" containerName="extract-content" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.196887 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8deaa8-f4e3-4dd2-b46e-26443f5a9157" containerName="extract-content" Sep 29 12:30:00 crc kubenswrapper[4991]: E0929 12:30:00.196905 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa5bbced-2568-4bd0-a2b7-a3b783a933e4" containerName="extract-utilities" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.196915 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa5bbced-2568-4bd0-a2b7-a3b783a933e4" containerName="extract-utilities" Sep 29 12:30:00 crc kubenswrapper[4991]: E0929 12:30:00.196963 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85a72ff7-fb58-4ac9-8557-2d19a8822359" containerName="extract-utilities" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.196973 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="85a72ff7-fb58-4ac9-8557-2d19a8822359" containerName="extract-utilities" Sep 29 12:30:00 crc kubenswrapper[4991]: E0929 12:30:00.196994 4991 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fa5bbced-2568-4bd0-a2b7-a3b783a933e4" containerName="registry-server" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.197003 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa5bbced-2568-4bd0-a2b7-a3b783a933e4" containerName="registry-server" Sep 29 12:30:00 crc kubenswrapper[4991]: E0929 12:30:00.197039 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85a72ff7-fb58-4ac9-8557-2d19a8822359" containerName="registry-server" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.197050 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="85a72ff7-fb58-4ac9-8557-2d19a8822359" containerName="registry-server" Sep 29 12:30:00 crc kubenswrapper[4991]: E0929 12:30:00.197068 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8deaa8-f4e3-4dd2-b46e-26443f5a9157" containerName="registry-server" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.197075 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8deaa8-f4e3-4dd2-b46e-26443f5a9157" containerName="registry-server" Sep 29 12:30:00 crc kubenswrapper[4991]: E0929 12:30:00.197091 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85a72ff7-fb58-4ac9-8557-2d19a8822359" containerName="extract-content" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.197099 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="85a72ff7-fb58-4ac9-8557-2d19a8822359" containerName="extract-content" Sep 29 12:30:00 crc kubenswrapper[4991]: E0929 12:30:00.197123 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa5bbced-2568-4bd0-a2b7-a3b783a933e4" containerName="extract-content" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.197131 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa5bbced-2568-4bd0-a2b7-a3b783a933e4" containerName="extract-content" Sep 29 12:30:00 crc kubenswrapper[4991]: E0929 12:30:00.197152 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8deaa8-f4e3-4dd2-b46e-26443f5a9157" containerName="extract-utilities" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.197161 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8deaa8-f4e3-4dd2-b46e-26443f5a9157" containerName="extract-utilities" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.197450 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa5bbced-2568-4bd0-a2b7-a3b783a933e4" containerName="registry-server" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.197481 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="85a72ff7-fb58-4ac9-8557-2d19a8822359" containerName="registry-server" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.197500 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b8deaa8-f4e3-4dd2-b46e-26443f5a9157" containerName="registry-server" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.198552 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319150-xr7th" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.203095 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.205864 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.210404 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319150-xr7th"] Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.292716 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57d29\" (UniqueName: \"kubernetes.io/projected/ffd9db2a-dfa2-4693-b8df-427fa052568a-kube-api-access-57d29\") pod \"collect-profiles-29319150-xr7th\" (UID: \"ffd9db2a-dfa2-4693-b8df-427fa052568a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319150-xr7th" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.292778 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffd9db2a-dfa2-4693-b8df-427fa052568a-secret-volume\") pod \"collect-profiles-29319150-xr7th\" (UID: \"ffd9db2a-dfa2-4693-b8df-427fa052568a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319150-xr7th" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.292903 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffd9db2a-dfa2-4693-b8df-427fa052568a-config-volume\") pod \"collect-profiles-29319150-xr7th\" (UID: \"ffd9db2a-dfa2-4693-b8df-427fa052568a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319150-xr7th" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.395057 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57d29\" (UniqueName: \"kubernetes.io/projected/ffd9db2a-dfa2-4693-b8df-427fa052568a-kube-api-access-57d29\") pod \"collect-profiles-29319150-xr7th\" (UID: \"ffd9db2a-dfa2-4693-b8df-427fa052568a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319150-xr7th" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.395181 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffd9db2a-dfa2-4693-b8df-427fa052568a-secret-volume\") pod \"collect-profiles-29319150-xr7th\" (UID: \"ffd9db2a-dfa2-4693-b8df-427fa052568a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319150-xr7th" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.395331 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffd9db2a-dfa2-4693-b8df-427fa052568a-config-volume\") pod \"collect-profiles-29319150-xr7th\" (UID: \"ffd9db2a-dfa2-4693-b8df-427fa052568a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319150-xr7th" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.396602 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffd9db2a-dfa2-4693-b8df-427fa052568a-config-volume\") pod 
\"collect-profiles-29319150-xr7th\" (UID: \"ffd9db2a-dfa2-4693-b8df-427fa052568a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319150-xr7th" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.401871 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffd9db2a-dfa2-4693-b8df-427fa052568a-secret-volume\") pod \"collect-profiles-29319150-xr7th\" (UID: \"ffd9db2a-dfa2-4693-b8df-427fa052568a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319150-xr7th" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.412381 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57d29\" (UniqueName: \"kubernetes.io/projected/ffd9db2a-dfa2-4693-b8df-427fa052568a-kube-api-access-57d29\") pod \"collect-profiles-29319150-xr7th\" (UID: \"ffd9db2a-dfa2-4693-b8df-427fa052568a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319150-xr7th" Sep 29 12:30:00 crc kubenswrapper[4991]: I0929 12:30:00.522808 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319150-xr7th" Sep 29 12:30:01 crc kubenswrapper[4991]: I0929 12:30:01.042823 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319150-xr7th"] Sep 29 12:30:01 crc kubenswrapper[4991]: W0929 12:30:01.051394 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffd9db2a_dfa2_4693_b8df_427fa052568a.slice/crio-6be6e129426d50bf6eca19405cec866bb22d829ed58bf0c14cca249064b9ed78 WatchSource:0}: Error finding container 6be6e129426d50bf6eca19405cec866bb22d829ed58bf0c14cca249064b9ed78: Status 404 returned error can't find the container with id 6be6e129426d50bf6eca19405cec866bb22d829ed58bf0c14cca249064b9ed78 Sep 29 12:30:01 crc kubenswrapper[4991]: E0929 12:30:01.863261 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffd9db2a_dfa2_4693_b8df_427fa052568a.slice/crio-4eb8daf2bc6b6d95f7deb2f5eb9712bd5d809ac1d824b44307ba96e4861cffe3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffd9db2a_dfa2_4693_b8df_427fa052568a.slice/crio-conmon-4eb8daf2bc6b6d95f7deb2f5eb9712bd5d809ac1d824b44307ba96e4861cffe3.scope\": RecentStats: unable to find data in memory cache]" Sep 29 12:30:02 crc kubenswrapper[4991]: I0929 12:30:02.031082 4991 generic.go:334] "Generic (PLEG): container finished" podID="ffd9db2a-dfa2-4693-b8df-427fa052568a" containerID="4eb8daf2bc6b6d95f7deb2f5eb9712bd5d809ac1d824b44307ba96e4861cffe3" exitCode=0 Sep 29 12:30:02 crc kubenswrapper[4991]: I0929 12:30:02.031123 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319150-xr7th" event={"ID":"ffd9db2a-dfa2-4693-b8df-427fa052568a","Type":"ContainerDied","Data":"4eb8daf2bc6b6d95f7deb2f5eb9712bd5d809ac1d824b44307ba96e4861cffe3"} Sep 29 12:30:02 crc kubenswrapper[4991]: I0929 12:30:02.031149 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319150-xr7th" event={"ID":"ffd9db2a-dfa2-4693-b8df-427fa052568a","Type":"ContainerStarted","Data":"6be6e129426d50bf6eca19405cec866bb22d829ed58bf0c14cca249064b9ed78"} Sep 29 
12:30:03 crc kubenswrapper[4991]: I0929 12:30:03.463901 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319150-xr7th" Sep 29 12:30:03 crc kubenswrapper[4991]: I0929 12:30:03.585115 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffd9db2a-dfa2-4693-b8df-427fa052568a-secret-volume\") pod \"ffd9db2a-dfa2-4693-b8df-427fa052568a\" (UID: \"ffd9db2a-dfa2-4693-b8df-427fa052568a\") " Sep 29 12:30:03 crc kubenswrapper[4991]: I0929 12:30:03.585169 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffd9db2a-dfa2-4693-b8df-427fa052568a-config-volume\") pod \"ffd9db2a-dfa2-4693-b8df-427fa052568a\" (UID: \"ffd9db2a-dfa2-4693-b8df-427fa052568a\") " Sep 29 12:30:03 crc kubenswrapper[4991]: I0929 12:30:03.585297 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57d29\" (UniqueName: \"kubernetes.io/projected/ffd9db2a-dfa2-4693-b8df-427fa052568a-kube-api-access-57d29\") pod \"ffd9db2a-dfa2-4693-b8df-427fa052568a\" (UID: \"ffd9db2a-dfa2-4693-b8df-427fa052568a\") " Sep 29 12:30:03 crc kubenswrapper[4991]: I0929 12:30:03.586273 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffd9db2a-dfa2-4693-b8df-427fa052568a-config-volume" (OuterVolumeSpecName: "config-volume") pod "ffd9db2a-dfa2-4693-b8df-427fa052568a" (UID: "ffd9db2a-dfa2-4693-b8df-427fa052568a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 12:30:03 crc kubenswrapper[4991]: I0929 12:30:03.591654 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffd9db2a-dfa2-4693-b8df-427fa052568a-kube-api-access-57d29" (OuterVolumeSpecName: "kube-api-access-57d29") pod "ffd9db2a-dfa2-4693-b8df-427fa052568a" (UID: "ffd9db2a-dfa2-4693-b8df-427fa052568a"). InnerVolumeSpecName "kube-api-access-57d29". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 12:30:03 crc kubenswrapper[4991]: I0929 12:30:03.596610 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffd9db2a-dfa2-4693-b8df-427fa052568a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ffd9db2a-dfa2-4693-b8df-427fa052568a" (UID: "ffd9db2a-dfa2-4693-b8df-427fa052568a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 12:30:03 crc kubenswrapper[4991]: I0929 12:30:03.689530 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57d29\" (UniqueName: \"kubernetes.io/projected/ffd9db2a-dfa2-4693-b8df-427fa052568a-kube-api-access-57d29\") on node \"crc\" DevicePath \"\"" Sep 29 12:30:03 crc kubenswrapper[4991]: I0929 12:30:03.689570 4991 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffd9db2a-dfa2-4693-b8df-427fa052568a-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 29 12:30:03 crc kubenswrapper[4991]: I0929 12:30:03.689584 4991 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffd9db2a-dfa2-4693-b8df-427fa052568a-config-volume\") on node \"crc\" DevicePath \"\"" Sep 29 12:30:04 crc kubenswrapper[4991]: I0929 12:30:04.061843 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319150-xr7th" event={"ID":"ffd9db2a-dfa2-4693-b8df-427fa052568a","Type":"ContainerDied","Data":"6be6e129426d50bf6eca19405cec866bb22d829ed58bf0c14cca249064b9ed78"} Sep 29 12:30:04 crc kubenswrapper[4991]: I0929 12:30:04.062149 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6be6e129426d50bf6eca19405cec866bb22d829ed58bf0c14cca249064b9ed78" Sep 29 12:30:04 crc kubenswrapper[4991]: I0929 12:30:04.061934 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319150-xr7th" Sep 29 12:30:04 crc kubenswrapper[4991]: I0929 12:30:04.543379 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319105-qw4sq"] Sep 29 12:30:04 crc kubenswrapper[4991]: I0929 12:30:04.552635 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319105-qw4sq"] Sep 29 12:30:04 crc kubenswrapper[4991]: I0929 12:30:04.942140 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3da2ced-b1b2-46eb-813d-d6edcab246d8" path="/var/lib/kubelet/pods/d3da2ced-b1b2-46eb-813d-d6edcab246d8/volumes" Sep 29 12:30:05 crc kubenswrapper[4991]: I0929 12:30:05.926174 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:30:05 crc kubenswrapper[4991]: E0929 12:30:05.926749 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:30:16 crc kubenswrapper[4991]: I0929 12:30:16.926987 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:30:16 crc kubenswrapper[4991]: E0929 12:30:16.928029 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:30:28 crc kubenswrapper[4991]: I0929 12:30:28.405195 4991 scope.go:117] "RemoveContainer" containerID="45d07fe9ad5a6c220243536a0851bbe2214d0d1a19880e205204380afc4067e5" Sep 29 12:30:30 crc kubenswrapper[4991]: I0929 12:30:30.926521 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:30:30 crc kubenswrapper[4991]: E0929 12:30:30.927270 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:30:41 crc kubenswrapper[4991]: I0929 12:30:41.927543 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:30:41 crc kubenswrapper[4991]: E0929 12:30:41.928462 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:30:55 crc kubenswrapper[4991]: I0929 12:30:55.926499 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:30:55 crc kubenswrapper[4991]: E0929 12:30:55.927623 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:31:08 crc kubenswrapper[4991]: I0929 12:31:08.926427 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:31:08 crc kubenswrapper[4991]: E0929 12:31:08.927130 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:31:20 crc kubenswrapper[4991]: I0929 12:31:20.926209 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:31:20 crc kubenswrapper[4991]: E0929 12:31:20.927172 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:31:31 crc kubenswrapper[4991]: I0929 12:31:31.926935 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:31:31 crc kubenswrapper[4991]: E0929 12:31:31.929445 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:31:42 crc kubenswrapper[4991]: I0929 12:31:42.926484 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:31:42 crc kubenswrapper[4991]: E0929 12:31:42.927612 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:31:53 crc kubenswrapper[4991]: I0929 12:31:53.926969 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:31:53 crc kubenswrapper[4991]: E0929 12:31:53.928751 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:32:08 crc kubenswrapper[4991]: I0929 12:32:08.927181 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:32:08 crc kubenswrapper[4991]: E0929 12:32:08.928025 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:32:21 crc kubenswrapper[4991]: I0929 12:32:21.926322 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:32:21 crc kubenswrapper[4991]: E0929 12:32:21.927303 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:32:35 crc kubenswrapper[4991]: I0929 12:32:35.927450 4991 
scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:32:35 crc kubenswrapper[4991]: E0929 12:32:35.928718 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:32:50 crc kubenswrapper[4991]: I0929 12:32:50.926685 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:32:50 crc kubenswrapper[4991]: E0929 12:32:50.927585 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:33:01 crc kubenswrapper[4991]: I0929 12:33:01.927447 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:33:01 crc kubenswrapper[4991]: E0929 12:33:01.928328 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:33:13 crc kubenswrapper[4991]: I0929 12:33:13.927250 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:33:13 crc kubenswrapper[4991]: E0929 12:33:13.928119 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:33:25 crc kubenswrapper[4991]: I0929 12:33:25.926507 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:33:25 crc kubenswrapper[4991]: E0929 12:33:25.927355 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:33:39 crc kubenswrapper[4991]: I0929 12:33:39.927017 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:33:39 crc kubenswrapper[4991]: E0929 12:33:39.927984 4991 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:33:51 crc kubenswrapper[4991]: I0929 12:33:51.926626 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:33:51 crc kubenswrapper[4991]: E0929 12:33:51.927484 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:34:05 crc kubenswrapper[4991]: I0929 12:34:05.927311 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:34:05 crc kubenswrapper[4991]: E0929 12:34:05.928163 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:34:17 crc kubenswrapper[4991]: I0929 12:34:17.926979 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:34:17 crc kubenswrapper[4991]: E0929 12:34:17.927656 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:34:30 crc kubenswrapper[4991]: I0929 12:34:30.927458 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:34:30 crc kubenswrapper[4991]: E0929 12:34:30.928634 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:34:42 crc kubenswrapper[4991]: I0929 12:34:42.926636 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:34:44 crc kubenswrapper[4991]: I0929 12:34:44.144524 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" 
event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"059732d7d1de85caf24435f3569260503e6e0038cc69bd6a8ac8da046b88d1ef"} Sep 29 12:34:48 crc kubenswrapper[4991]: I0929 12:34:48.925018 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5br8n/must-gather-dfgzx"] Sep 29 12:34:48 crc kubenswrapper[4991]: E0929 12:34:48.925887 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffd9db2a-dfa2-4693-b8df-427fa052568a" containerName="collect-profiles" Sep 29 12:34:48 crc kubenswrapper[4991]: I0929 12:34:48.925900 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffd9db2a-dfa2-4693-b8df-427fa052568a" containerName="collect-profiles" Sep 29 12:34:48 crc kubenswrapper[4991]: I0929 12:34:48.926162 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffd9db2a-dfa2-4693-b8df-427fa052568a" containerName="collect-profiles" Sep 29 12:34:48 crc kubenswrapper[4991]: I0929 12:34:48.927476 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5br8n/must-gather-dfgzx" Sep 29 12:34:48 crc kubenswrapper[4991]: I0929 12:34:48.940133 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5br8n/must-gather-dfgzx"] Sep 29 12:34:48 crc kubenswrapper[4991]: I0929 12:34:48.943336 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5br8n"/"openshift-service-ca.crt" Sep 29 12:34:48 crc kubenswrapper[4991]: I0929 12:34:48.943926 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5br8n"/"kube-root-ca.crt" Sep 29 12:34:48 crc kubenswrapper[4991]: I0929 12:34:48.944155 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5br8n"/"default-dockercfg-7drr2" Sep 29 12:34:48 crc kubenswrapper[4991]: I0929 12:34:48.975325 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/949f778d-df38-426a-9ed6-86f2b6e9976c-must-gather-output\") pod \"must-gather-dfgzx\" (UID: \"949f778d-df38-426a-9ed6-86f2b6e9976c\") " pod="openshift-must-gather-5br8n/must-gather-dfgzx" Sep 29 12:34:48 crc kubenswrapper[4991]: I0929 12:34:48.975437 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2p86\" (UniqueName: \"kubernetes.io/projected/949f778d-df38-426a-9ed6-86f2b6e9976c-kube-api-access-d2p86\") pod \"must-gather-dfgzx\" (UID: \"949f778d-df38-426a-9ed6-86f2b6e9976c\") " pod="openshift-must-gather-5br8n/must-gather-dfgzx" Sep 29 12:34:49 crc kubenswrapper[4991]: I0929 12:34:49.076639 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/949f778d-df38-426a-9ed6-86f2b6e9976c-must-gather-output\") pod \"must-gather-dfgzx\" (UID: \"949f778d-df38-426a-9ed6-86f2b6e9976c\") " pod="openshift-must-gather-5br8n/must-gather-dfgzx" Sep 29 12:34:49 crc kubenswrapper[4991]: I0929 12:34:49.076709 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2p86\" (UniqueName: \"kubernetes.io/projected/949f778d-df38-426a-9ed6-86f2b6e9976c-kube-api-access-d2p86\") pod \"must-gather-dfgzx\" (UID: \"949f778d-df38-426a-9ed6-86f2b6e9976c\") " pod="openshift-must-gather-5br8n/must-gather-dfgzx" Sep 29 12:34:49 crc kubenswrapper[4991]: I0929 12:34:49.077356 4991 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/949f778d-df38-426a-9ed6-86f2b6e9976c-must-gather-output\") pod \"must-gather-dfgzx\" (UID: \"949f778d-df38-426a-9ed6-86f2b6e9976c\") " pod="openshift-must-gather-5br8n/must-gather-dfgzx" Sep 29 12:34:49 crc kubenswrapper[4991]: I0929 12:34:49.100632 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2p86\" (UniqueName: \"kubernetes.io/projected/949f778d-df38-426a-9ed6-86f2b6e9976c-kube-api-access-d2p86\") pod \"must-gather-dfgzx\" (UID: \"949f778d-df38-426a-9ed6-86f2b6e9976c\") " pod="openshift-must-gather-5br8n/must-gather-dfgzx" Sep 29 12:34:49 crc kubenswrapper[4991]: I0929 12:34:49.258103 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5br8n/must-gather-dfgzx" Sep 29 12:34:49 crc kubenswrapper[4991]: I0929 12:34:49.794844 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5br8n/must-gather-dfgzx"] Sep 29 12:34:49 crc kubenswrapper[4991]: I0929 12:34:49.799543 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 29 12:34:50 crc kubenswrapper[4991]: I0929 12:34:50.207699 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5br8n/must-gather-dfgzx" event={"ID":"949f778d-df38-426a-9ed6-86f2b6e9976c","Type":"ContainerStarted","Data":"d21f5ae00e29048c531242d79385060fbabc5660e1e77c6821b99bd7a11ea615"} Sep 29 12:34:55 crc kubenswrapper[4991]: I0929 12:34:55.291156 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5br8n/must-gather-dfgzx" event={"ID":"949f778d-df38-426a-9ed6-86f2b6e9976c","Type":"ContainerStarted","Data":"ddcbd6060e2250a8fc8e7991d564c49d2f8d0df201ded0777c5c6dbd5857d1a3"} Sep 29 12:34:55 crc kubenswrapper[4991]: I0929 12:34:55.291698 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5br8n/must-gather-dfgzx" event={"ID":"949f778d-df38-426a-9ed6-86f2b6e9976c","Type":"ContainerStarted","Data":"bffb9a0973a1856738fedd03ffb5f8b5dfc5e4019d464da1a779f40762156e7b"} Sep 29 12:34:55 crc kubenswrapper[4991]: I0929 12:34:55.323541 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5br8n/must-gather-dfgzx" podStartSLOduration=2.873540631 podStartE2EDuration="7.323513586s" podCreationTimestamp="2025-09-29 12:34:48 +0000 UTC" firstStartedPulling="2025-09-29 12:34:49.799230341 +0000 UTC m=+10625.655158369" lastFinishedPulling="2025-09-29 12:34:54.249203296 +0000 UTC m=+10630.105131324" observedRunningTime="2025-09-29 12:34:55.310794784 +0000 UTC m=+10631.166722812" watchObservedRunningTime="2025-09-29 12:34:55.323513586 +0000 UTC m=+10631.179441634" Sep 29 12:34:58 crc kubenswrapper[4991]: I0929 12:34:58.980375 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5br8n/crc-debug-825vc"] Sep 29 12:34:58 crc kubenswrapper[4991]: I0929 12:34:58.982284 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5br8n/crc-debug-825vc" Sep 29 12:34:59 crc kubenswrapper[4991]: I0929 12:34:59.143654 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27v8r\" (UniqueName: \"kubernetes.io/projected/d51887d4-be7d-4824-8931-cb4c6cf2592c-kube-api-access-27v8r\") pod \"crc-debug-825vc\" (UID: \"d51887d4-be7d-4824-8931-cb4c6cf2592c\") " pod="openshift-must-gather-5br8n/crc-debug-825vc" Sep 29 12:34:59 crc kubenswrapper[4991]: I0929 12:34:59.143744 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d51887d4-be7d-4824-8931-cb4c6cf2592c-host\") pod \"crc-debug-825vc\" (UID: \"d51887d4-be7d-4824-8931-cb4c6cf2592c\") " pod="openshift-must-gather-5br8n/crc-debug-825vc" Sep 29 12:34:59 crc kubenswrapper[4991]: I0929 12:34:59.246405 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27v8r\" (UniqueName: \"kubernetes.io/projected/d51887d4-be7d-4824-8931-cb4c6cf2592c-kube-api-access-27v8r\") pod \"crc-debug-825vc\" (UID: \"d51887d4-be7d-4824-8931-cb4c6cf2592c\") " pod="openshift-must-gather-5br8n/crc-debug-825vc" Sep 29 12:34:59 crc kubenswrapper[4991]: I0929 12:34:59.246526 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d51887d4-be7d-4824-8931-cb4c6cf2592c-host\") pod \"crc-debug-825vc\" (UID: \"d51887d4-be7d-4824-8931-cb4c6cf2592c\") " pod="openshift-must-gather-5br8n/crc-debug-825vc" Sep 29 12:34:59 crc kubenswrapper[4991]: I0929 12:34:59.246657 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d51887d4-be7d-4824-8931-cb4c6cf2592c-host\") pod \"crc-debug-825vc\" (UID: \"d51887d4-be7d-4824-8931-cb4c6cf2592c\") " pod="openshift-must-gather-5br8n/crc-debug-825vc" Sep 29 12:34:59 crc kubenswrapper[4991]: I0929 12:34:59.296825 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27v8r\" (UniqueName: \"kubernetes.io/projected/d51887d4-be7d-4824-8931-cb4c6cf2592c-kube-api-access-27v8r\") pod \"crc-debug-825vc\" (UID: \"d51887d4-be7d-4824-8931-cb4c6cf2592c\") " pod="openshift-must-gather-5br8n/crc-debug-825vc" Sep 29 12:34:59 crc kubenswrapper[4991]: I0929 12:34:59.316793 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5br8n/crc-debug-825vc" Sep 29 12:34:59 crc kubenswrapper[4991]: W0929 12:34:59.373228 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd51887d4_be7d_4824_8931_cb4c6cf2592c.slice/crio-1df9595c3e980a3f160910622e0cdb6ac5fae68be4fe78130a8f2b2c7712fe92 WatchSource:0}: Error finding container 1df9595c3e980a3f160910622e0cdb6ac5fae68be4fe78130a8f2b2c7712fe92: Status 404 returned error can't find the container with id 1df9595c3e980a3f160910622e0cdb6ac5fae68be4fe78130a8f2b2c7712fe92 Sep 29 12:35:00 crc kubenswrapper[4991]: I0929 12:35:00.366226 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5br8n/crc-debug-825vc" event={"ID":"d51887d4-be7d-4824-8931-cb4c6cf2592c","Type":"ContainerStarted","Data":"1df9595c3e980a3f160910622e0cdb6ac5fae68be4fe78130a8f2b2c7712fe92"} Sep 29 12:35:15 crc kubenswrapper[4991]: E0929 12:35:15.015249 4991 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Sep 29 12:35:15 crc kubenswrapper[4991]: E0929 12:35:15.017180 4991 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27v8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
crc-debug-825vc_openshift-must-gather-5br8n(d51887d4-be7d-4824-8931-cb4c6cf2592c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 29 12:35:15 crc kubenswrapper[4991]: E0929 12:35:15.018406 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-5br8n/crc-debug-825vc" podUID="d51887d4-be7d-4824-8931-cb4c6cf2592c" Sep 29 12:35:15 crc kubenswrapper[4991]: E0929 12:35:15.566157 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-5br8n/crc-debug-825vc" podUID="d51887d4-be7d-4824-8931-cb4c6cf2592c" Sep 29 12:35:31 crc kubenswrapper[4991]: I0929 12:35:31.764220 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5br8n/crc-debug-825vc" event={"ID":"d51887d4-be7d-4824-8931-cb4c6cf2592c","Type":"ContainerStarted","Data":"a1f778170d6821c5fa8908786d64eb7f5edb66af607f74fe2b040d149dc7b5cd"} Sep 29 12:35:31 crc kubenswrapper[4991]: I0929 12:35:31.792335 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5br8n/crc-debug-825vc" podStartSLOduration=2.367255489 podStartE2EDuration="33.792312633s" podCreationTimestamp="2025-09-29 12:34:58 +0000 UTC" firstStartedPulling="2025-09-29 12:34:59.37735333 +0000 UTC m=+10635.233281358" lastFinishedPulling="2025-09-29 12:35:30.802410474 +0000 UTC m=+10666.658338502" observedRunningTime="2025-09-29 12:35:31.778691257 +0000 UTC m=+10667.634619295" watchObservedRunningTime="2025-09-29 12:35:31.792312633 +0000 UTC m=+10667.648240671" Sep 29 12:36:33 crc kubenswrapper[4991]: I0929 12:36:33.239112 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_4e7c2905-fcf9-4e39-9610-63641cffb33f/aodh-api/0.log" Sep 29 12:36:33 crc kubenswrapper[4991]: I0929 12:36:33.283697 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_4e7c2905-fcf9-4e39-9610-63641cffb33f/aodh-evaluator/0.log" Sep 29 12:36:33 crc kubenswrapper[4991]: I0929 12:36:33.458656 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_4e7c2905-fcf9-4e39-9610-63641cffb33f/aodh-listener/0.log" Sep 29 12:36:33 crc kubenswrapper[4991]: I0929 12:36:33.475086 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_4e7c2905-fcf9-4e39-9610-63641cffb33f/aodh-notifier/0.log" Sep 29 12:36:33 crc kubenswrapper[4991]: I0929 12:36:33.673470 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7695fcf79b-8n9gp_f631ddcf-022d-461f-8f4b-421ffb478b6f/barbican-api/0.log" Sep 29 12:36:33 crc kubenswrapper[4991]: I0929 12:36:33.704815 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7695fcf79b-8n9gp_f631ddcf-022d-461f-8f4b-421ffb478b6f/barbican-api-log/0.log" Sep 29 12:36:33 crc kubenswrapper[4991]: I0929 12:36:33.870383 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5f48988744-sgtlp_244c0192-f2fc-4987-a2f5-2070d703c35c/barbican-keystone-listener/0.log" Sep 29 12:36:33 crc kubenswrapper[4991]: I0929 12:36:33.963515 4991 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5f48988744-sgtlp_244c0192-f2fc-4987-a2f5-2070d703c35c/barbican-keystone-listener-log/0.log" Sep 29 12:36:34 crc kubenswrapper[4991]: I0929 12:36:34.133345 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6c569f5685-dqm5h_c4924ede-bd54-46eb-ae9f-90f018709c3c/barbican-worker/0.log" Sep 29 12:36:34 crc kubenswrapper[4991]: I0929 12:36:34.201513 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6c569f5685-dqm5h_c4924ede-bd54-46eb-ae9f-90f018709c3c/barbican-worker-log/0.log" Sep 29 12:36:34 crc kubenswrapper[4991]: I0929 12:36:34.421513 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-fb9tj_4f367761-a0c4-4bc8-8d44-86dc07e3d495/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Sep 29 12:36:34 crc kubenswrapper[4991]: I0929 12:36:34.690461 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b194689f-6d7e-47aa-b796-7f0e959ce6b1/ceilometer-notification-agent/0.log" Sep 29 12:36:34 crc kubenswrapper[4991]: I0929 12:36:34.715194 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b194689f-6d7e-47aa-b796-7f0e959ce6b1/ceilometer-central-agent/0.log" Sep 29 12:36:34 crc kubenswrapper[4991]: I0929 12:36:34.829429 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b194689f-6d7e-47aa-b796-7f0e959ce6b1/proxy-httpd/0.log" Sep 29 12:36:34 crc kubenswrapper[4991]: I0929 12:36:34.984798 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b194689f-6d7e-47aa-b796-7f0e959ce6b1/sg-core/0.log" Sep 29 12:36:35 crc kubenswrapper[4991]: I0929 12:36:35.248904 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e010d53d-2afc-49b5-ad9a-09054a8855a1/cinder-api/0.log" Sep 29 12:36:35 crc kubenswrapper[4991]: I0929 12:36:35.291186 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e010d53d-2afc-49b5-ad9a-09054a8855a1/cinder-api-log/0.log" Sep 29 12:36:35 crc kubenswrapper[4991]: I0929 12:36:35.570184 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e52c4ee5-d723-4579-8ed4-b49d783c3f9f/cinder-scheduler/0.log" Sep 29 12:36:35 crc kubenswrapper[4991]: I0929 12:36:35.679424 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e52c4ee5-d723-4579-8ed4-b49d783c3f9f/probe/0.log" Sep 29 12:36:35 crc kubenswrapper[4991]: I0929 12:36:35.810618 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-bfwgj_0c180725-3745-4c34-b5a2-2aa954097b80/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Sep 29 12:36:35 crc kubenswrapper[4991]: I0929 12:36:35.858875 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-tcwnp_09d897dc-cbe0-442f-8a74-b557e900c2c9/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 29 12:36:36 crc kubenswrapper[4991]: I0929 12:36:36.048213 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-hrbsx_ab8e114e-a9e2-4847-a068-fcb601984824/init/0.log" Sep 29 12:36:36 crc kubenswrapper[4991]: I0929 12:36:36.214271 4991 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-hrbsx_ab8e114e-a9e2-4847-a068-fcb601984824/init/0.log" Sep 29 12:36:36 crc kubenswrapper[4991]: I0929 12:36:36.316616 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-hrbsx_ab8e114e-a9e2-4847-a068-fcb601984824/dnsmasq-dns/0.log" Sep 29 12:36:36 crc kubenswrapper[4991]: I0929 12:36:36.340121 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-75tdc_2658581c-8b2e-435e-8c5a-8506dc7b2134/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Sep 29 12:36:36 crc kubenswrapper[4991]: I0929 12:36:36.563700 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f5dbbf03-2962-4e7c-9e79-516a9222cabc/glance-httpd/0.log" Sep 29 12:36:36 crc kubenswrapper[4991]: I0929 12:36:36.614207 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f5dbbf03-2962-4e7c-9e79-516a9222cabc/glance-log/0.log" Sep 29 12:36:36 crc kubenswrapper[4991]: I0929 12:36:36.823975 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_2c1d0fd8-5e0c-4dde-b023-b288e5fc6318/glance-httpd/0.log" Sep 29 12:36:36 crc kubenswrapper[4991]: I0929 12:36:36.860449 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_2c1d0fd8-5e0c-4dde-b023-b288e5fc6318/glance-log/0.log" Sep 29 12:36:37 crc kubenswrapper[4991]: I0929 12:36:37.436823 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-64f8dfd87f-r5cr8_e7a76469-fffa-42c3-9d5d-3202e335a666/heat-engine/0.log" Sep 29 12:36:37 crc kubenswrapper[4991]: I0929 12:36:37.707681 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-bdwtw_5580bad6-d202-4da7-95d0-806deb62544c/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Sep 29 12:36:37 crc kubenswrapper[4991]: I0929 12:36:37.966354 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-72d6p_c6e6b0f0-8b72-4494-97cf-307a33dc67ca/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 29 12:36:38 crc kubenswrapper[4991]: I0929 12:36:38.931420 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-6548597c77-4fgjb_539b6968-90d6-45cc-ab26-cf0ab8ed31f6/heat-api/0.log" Sep 29 12:36:39 crc kubenswrapper[4991]: I0929 12:36:39.017277 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-74c8bd59d-2nxsz_2ff464af-ef5c-461b-aded-788ef2061aa6/keystone-api/0.log" Sep 29 12:36:39 crc kubenswrapper[4991]: I0929 12:36:39.102542 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-65b855f985-v4s5j_a4195301-45cb-494f-82a8-756730309e65/heat-cfnapi/0.log" Sep 29 12:36:39 crc kubenswrapper[4991]: I0929 12:36:39.155382 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29319001-8xkxg_02ed8488-a798-4998-8759-3f5346451268/keystone-cron/0.log" Sep 29 12:36:39 crc kubenswrapper[4991]: I0929 12:36:39.320441 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29319061-swvvf_b44ec6ab-d0b6-42c5-9c64-48e245f4241c/keystone-cron/0.log" Sep 29 12:36:39 crc kubenswrapper[4991]: I0929 12:36:39.411124 4991 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29319121-9z692_b975c3b8-401b-49cd-be70-93912c9a61da/keystone-cron/0.log" Sep 29 12:36:39 crc kubenswrapper[4991]: I0929 12:36:39.675383 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_8c49db59-0819-4c3d-922d-04d10e579618/kube-state-metrics/0.log" Sep 29 12:36:39 crc kubenswrapper[4991]: I0929 12:36:39.706091 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-s5dnz_e3d94097-7104-4678-87b7-28f90003a13f/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Sep 29 12:36:40 crc kubenswrapper[4991]: I0929 12:36:40.123657 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-l8gw6_f4ad28ea-5728-404a-95fb-38793b064167/logging-edpm-deployment-openstack-edpm-ipam/0.log" Sep 29 12:36:40 crc kubenswrapper[4991]: I0929 12:36:40.376117 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_13623a3a-f409-4086-acc7-af839f51a45b/mysqld-exporter/0.log" Sep 29 12:36:40 crc kubenswrapper[4991]: I0929 12:36:40.665923 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-797678c8fc-75zg9_cf4d4464-ee15-4786-beaf-b4fa9d320834/neutron-api/0.log" Sep 29 12:36:41 crc kubenswrapper[4991]: I0929 12:36:41.582431 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-jlh5d_f84e8053-85fb-4841-b585-ce6d6ecb8e45/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Sep 29 12:36:41 crc kubenswrapper[4991]: I0929 12:36:41.651531 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-797678c8fc-75zg9_cf4d4464-ee15-4786-beaf-b4fa9d320834/neutron-httpd/0.log" Sep 29 12:36:42 crc kubenswrapper[4991]: I0929 12:36:42.332233 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_103ec465-3307-403d-ad9a-1a69ebc64398/nova-api-log/0.log" Sep 29 12:36:42 crc kubenswrapper[4991]: I0929 12:36:42.472049 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_28e75f85-bdef-4ae6-9eca-a4bdc403c679/nova-cell0-conductor-conductor/0.log" Sep 29 12:36:42 crc kubenswrapper[4991]: I0929 12:36:42.909538 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_83ac6e99-3bdd-44a3-8012-20dc4d70740e/nova-cell1-conductor-conductor/0.log" Sep 29 12:36:43 crc kubenswrapper[4991]: I0929 12:36:43.266019 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_b4e6fdce-98a4-4477-b438-b93af1ab5e5b/nova-cell1-novncproxy-novncproxy/0.log" Sep 29 12:36:43 crc kubenswrapper[4991]: I0929 12:36:43.469187 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_103ec465-3307-403d-ad9a-1a69ebc64398/nova-api-api/0.log" Sep 29 12:36:43 crc kubenswrapper[4991]: I0929 12:36:43.515893 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-bm9jv_cc36ac87-f748-4f5e-8510-c91a26bafce9/nova-edpm-deployment-openstack-edpm-ipam/0.log" Sep 29 12:36:43 crc kubenswrapper[4991]: I0929 12:36:43.780094 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690/nova-metadata-log/0.log" Sep 29 12:36:44 crc kubenswrapper[4991]: I0929 12:36:44.240123 4991 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_593ca329-b7a2-400c-8910-7f37bd321c6c/nova-scheduler-scheduler/0.log" Sep 29 12:36:44 crc kubenswrapper[4991]: I0929 12:36:44.404718 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_6920a7a6-0725-4512-8b3c-dcf7ba2c8587/mysql-bootstrap/0.log" Sep 29 12:36:44 crc kubenswrapper[4991]: I0929 12:36:44.637707 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_6920a7a6-0725-4512-8b3c-dcf7ba2c8587/mysql-bootstrap/0.log" Sep 29 12:36:44 crc kubenswrapper[4991]: I0929 12:36:44.685200 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_6920a7a6-0725-4512-8b3c-dcf7ba2c8587/galera/0.log" Sep 29 12:36:44 crc kubenswrapper[4991]: I0929 12:36:44.976454 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c4a8f2f9-4e24-4728-bf11-a0b6f50094d1/mysql-bootstrap/0.log" Sep 29 12:36:45 crc kubenswrapper[4991]: I0929 12:36:45.258404 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c4a8f2f9-4e24-4728-bf11-a0b6f50094d1/mysql-bootstrap/0.log" Sep 29 12:36:45 crc kubenswrapper[4991]: I0929 12:36:45.324350 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c4a8f2f9-4e24-4728-bf11-a0b6f50094d1/galera/0.log" Sep 29 12:36:45 crc kubenswrapper[4991]: I0929 12:36:45.540850 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_fa48bcb8-1683-4a76-b721-59149bc4e240/openstackclient/0.log" Sep 29 12:36:45 crc kubenswrapper[4991]: I0929 12:36:45.763139 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-798np_1abeed70-62ab-4d1b-811b-330c2554c1d9/openstack-network-exporter/0.log" Sep 29 12:36:46 crc kubenswrapper[4991]: I0929 12:36:46.140496 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rr8td_11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44/ovsdb-server-init/0.log" Sep 29 12:36:46 crc kubenswrapper[4991]: I0929 12:36:46.369516 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rr8td_11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44/ovs-vswitchd/0.log" Sep 29 12:36:46 crc kubenswrapper[4991]: I0929 12:36:46.462867 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rr8td_11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44/ovsdb-server-init/0.log" Sep 29 12:36:46 crc kubenswrapper[4991]: I0929 12:36:46.624341 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rr8td_11d4b3a7-bf2d-419e-92fb-0e8cd0af0e44/ovsdb-server/0.log" Sep 29 12:36:47 crc kubenswrapper[4991]: I0929 12:36:47.161215 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-zmljr_d70600eb-2ef3-4db2-a1e6-050e76f25e79/ovn-controller/0.log" Sep 29 12:36:47 crc kubenswrapper[4991]: I0929 12:36:47.480045 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-vl5z5_ad453fb9-39ab-44af-99e0-4b5f416fd015/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Sep 29 12:36:47 crc kubenswrapper[4991]: I0929 12:36:47.687819 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9906e62d-6d99-4043-9fce-d198974e73bc/openstack-network-exporter/0.log" Sep 29 12:36:47 crc kubenswrapper[4991]: I0929 12:36:47.777074 4991 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovn-northd-0_9906e62d-6d99-4043-9fce-d198974e73bc/ovn-northd/0.log" Sep 29 12:36:48 crc kubenswrapper[4991]: I0929 12:36:48.030885 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b4834ad3-a9f2-47e2-b3ec-6cac39b88fef/openstack-network-exporter/0.log" Sep 29 12:36:48 crc kubenswrapper[4991]: I0929 12:36:48.300795 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b4834ad3-a9f2-47e2-b3ec-6cac39b88fef/ovsdbserver-nb/0.log" Sep 29 12:36:49 crc kubenswrapper[4991]: I0929 12:36:49.053653 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_463a171c-5755-4898-a973-f72db6a319f0/openstack-network-exporter/0.log" Sep 29 12:36:49 crc kubenswrapper[4991]: I0929 12:36:49.111354 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_463a171c-5755-4898-a973-f72db6a319f0/ovsdbserver-sb/0.log" Sep 29 12:36:49 crc kubenswrapper[4991]: I0929 12:36:49.296292 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d8c860b5-f1a5-4e1d-a0e7-d203a9bcf690/nova-metadata-metadata/0.log" Sep 29 12:36:49 crc kubenswrapper[4991]: I0929 12:36:49.625130 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-78c57c9496-sh67d_4f154fb6-f887-4766-8374-5956d39b267c/placement-api/0.log" Sep 29 12:36:50 crc kubenswrapper[4991]: I0929 12:36:50.062032 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c003699b-006e-4a8b-9de5-2a153984ed1a/init-config-reloader/0.log" Sep 29 12:36:50 crc kubenswrapper[4991]: I0929 12:36:50.104663 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-78c57c9496-sh67d_4f154fb6-f887-4766-8374-5956d39b267c/placement-log/0.log" Sep 29 12:36:50 crc kubenswrapper[4991]: I0929 12:36:50.236713 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c003699b-006e-4a8b-9de5-2a153984ed1a/init-config-reloader/0.log" Sep 29 12:36:50 crc kubenswrapper[4991]: I0929 12:36:50.315578 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c003699b-006e-4a8b-9de5-2a153984ed1a/config-reloader/0.log" Sep 29 12:36:50 crc kubenswrapper[4991]: I0929 12:36:50.401120 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c003699b-006e-4a8b-9de5-2a153984ed1a/prometheus/0.log" Sep 29 12:36:51 crc kubenswrapper[4991]: I0929 12:36:51.409808 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c003699b-006e-4a8b-9de5-2a153984ed1a/thanos-sidecar/0.log" Sep 29 12:36:51 crc kubenswrapper[4991]: I0929 12:36:51.603293 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0/setup-container/0.log" Sep 29 12:36:51 crc kubenswrapper[4991]: I0929 12:36:51.822318 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0/setup-container/0.log" Sep 29 12:36:51 crc kubenswrapper[4991]: I0929 12:36:51.976861 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e731a49f-4ae8-4f37-b8c6-e7ab6af02cf0/rabbitmq/0.log" Sep 29 12:36:52 crc kubenswrapper[4991]: I0929 12:36:52.032887 4991 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_bf158d99-d08f-4ce7-b60d-2c685d55a6f7/setup-container/0.log" Sep 29 12:36:52 crc kubenswrapper[4991]: I0929 12:36:52.312277 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bf158d99-d08f-4ce7-b60d-2c685d55a6f7/setup-container/0.log" Sep 29 12:36:52 crc kubenswrapper[4991]: I0929 12:36:52.472747 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bf158d99-d08f-4ce7-b60d-2c685d55a6f7/rabbitmq/0.log" Sep 29 12:36:52 crc kubenswrapper[4991]: I0929 12:36:52.606872 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-vzgcz_2f5c32cd-1b0e-49af-a986-5dbb4d56db1d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 29 12:36:52 crc kubenswrapper[4991]: I0929 12:36:52.756616 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-r9wsd_b8f23d5d-c47d-481f-a28e-941d77c414c5/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Sep 29 12:36:52 crc kubenswrapper[4991]: I0929 12:36:52.914468 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-kgp7r_e5729608-f9e1-4a51-8fa4-682be8932037/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Sep 29 12:36:53 crc kubenswrapper[4991]: I0929 12:36:53.172171 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-zl8lz_67b7b9fe-c0f0-473c-b67d-dcfbea22c02a/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 29 12:36:53 crc kubenswrapper[4991]: I0929 12:36:53.241874 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-vp6db_c9f586c6-bbb9-4d95-96e9-e284bf421bc5/ssh-known-hosts-edpm-deployment/0.log" Sep 29 12:36:53 crc kubenswrapper[4991]: I0929 12:36:53.547145 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-67c9dd4f47-ndrxn_ce4f4bfc-55ef-4360-905e-df84e5d932b2/proxy-server/0.log" Sep 29 12:36:53 crc kubenswrapper[4991]: I0929 12:36:53.789842 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-zftlr_cf1a0436-f3cf-4582-9634-58f354d9badf/swift-ring-rebalance/0.log" Sep 29 12:36:54 crc kubenswrapper[4991]: I0929 12:36:54.013403 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d702cb-530c-441d-b686-f205337a2aea/account-auditor/0.log" Sep 29 12:36:54 crc kubenswrapper[4991]: I0929 12:36:54.056976 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-67c9dd4f47-ndrxn_ce4f4bfc-55ef-4360-905e-df84e5d932b2/proxy-httpd/0.log" Sep 29 12:36:54 crc kubenswrapper[4991]: I0929 12:36:54.192370 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d702cb-530c-441d-b686-f205337a2aea/account-reaper/0.log" Sep 29 12:36:54 crc kubenswrapper[4991]: I0929 12:36:54.320361 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d702cb-530c-441d-b686-f205337a2aea/account-server/0.log" Sep 29 12:36:54 crc kubenswrapper[4991]: I0929 12:36:54.362303 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d702cb-530c-441d-b686-f205337a2aea/account-replicator/0.log" Sep 29 12:36:54 crc kubenswrapper[4991]: I0929 12:36:54.393107 4991 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_81d702cb-530c-441d-b686-f205337a2aea/container-auditor/0.log" Sep 29 12:36:54 crc kubenswrapper[4991]: I0929 12:36:54.600034 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d702cb-530c-441d-b686-f205337a2aea/container-server/0.log" Sep 29 12:36:54 crc kubenswrapper[4991]: I0929 12:36:54.624925 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d702cb-530c-441d-b686-f205337a2aea/container-updater/0.log" Sep 29 12:36:54 crc kubenswrapper[4991]: I0929 12:36:54.711247 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d702cb-530c-441d-b686-f205337a2aea/container-replicator/0.log" Sep 29 12:36:54 crc kubenswrapper[4991]: I0929 12:36:54.910425 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d702cb-530c-441d-b686-f205337a2aea/object-auditor/0.log" Sep 29 12:36:54 crc kubenswrapper[4991]: I0929 12:36:54.912141 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d702cb-530c-441d-b686-f205337a2aea/object-expirer/0.log" Sep 29 12:36:55 crc kubenswrapper[4991]: I0929 12:36:55.098165 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d702cb-530c-441d-b686-f205337a2aea/object-updater/0.log" Sep 29 12:36:55 crc kubenswrapper[4991]: I0929 12:36:55.112414 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d702cb-530c-441d-b686-f205337a2aea/object-replicator/0.log" Sep 29 12:36:55 crc kubenswrapper[4991]: I0929 12:36:55.140311 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d702cb-530c-441d-b686-f205337a2aea/object-server/0.log" Sep 29 12:36:55 crc kubenswrapper[4991]: I0929 12:36:55.304648 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d702cb-530c-441d-b686-f205337a2aea/swift-recon-cron/0.log" Sep 29 12:36:55 crc kubenswrapper[4991]: I0929 12:36:55.306480 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_81d702cb-530c-441d-b686-f205337a2aea/rsync/0.log" Sep 29 12:36:55 crc kubenswrapper[4991]: I0929 12:36:55.623860 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-wxw4c_ab589228-3979-4001-886d-8c94abef0c13/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Sep 29 12:36:55 crc kubenswrapper[4991]: I0929 12:36:55.690164 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-hwsm5_e5fc16c9-bdf7-4816-8ae0-0d3c2b4a082b/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Sep 29 12:36:55 crc kubenswrapper[4991]: I0929 12:36:55.863255 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-5wsbh_e4f55ea7-928c-4334-82da-f530b8a0d8a8/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Sep 29 12:37:06 crc kubenswrapper[4991]: I0929 12:37:06.901734 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_789431f4-5a0e-4baf-a88e-de5af2905c04/memcached/0.log" Sep 29 12:37:07 crc kubenswrapper[4991]: I0929 12:37:07.946919 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 12:37:07 crc kubenswrapper[4991]: I0929 12:37:07.947284 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 12:37:37 crc kubenswrapper[4991]: I0929 12:37:37.946831 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 12:37:37 crc kubenswrapper[4991]: I0929 12:37:37.947441 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 12:38:07 crc kubenswrapper[4991]: I0929 12:38:07.947048 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 29 12:38:07 crc kubenswrapper[4991]: I0929 12:38:07.947611 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 29 12:38:07 crc kubenswrapper[4991]: I0929 12:38:07.947665 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" Sep 29 12:38:07 crc kubenswrapper[4991]: I0929 12:38:07.948793 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"059732d7d1de85caf24435f3569260503e6e0038cc69bd6a8ac8da046b88d1ef"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 29 12:38:07 crc kubenswrapper[4991]: I0929 12:38:07.948849 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://059732d7d1de85caf24435f3569260503e6e0038cc69bd6a8ac8da046b88d1ef" gracePeriod=600 Sep 29 12:38:08 crc kubenswrapper[4991]: I0929 12:38:08.801571 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="059732d7d1de85caf24435f3569260503e6e0038cc69bd6a8ac8da046b88d1ef" exitCode=0 Sep 29 12:38:08 crc kubenswrapper[4991]: I0929 12:38:08.801664 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" 
event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"059732d7d1de85caf24435f3569260503e6e0038cc69bd6a8ac8da046b88d1ef"} Sep 29 12:38:08 crc kubenswrapper[4991]: I0929 12:38:08.802126 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerStarted","Data":"fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450"} Sep 29 12:38:08 crc kubenswrapper[4991]: I0929 12:38:08.802149 4991 scope.go:117] "RemoveContainer" containerID="0c09d59e9983824944c391177b464d06150fd7d72c028403cbe56412f4daa20a" Sep 29 12:38:34 crc kubenswrapper[4991]: I0929 12:38:34.117747 4991 generic.go:334] "Generic (PLEG): container finished" podID="d51887d4-be7d-4824-8931-cb4c6cf2592c" containerID="a1f778170d6821c5fa8908786d64eb7f5edb66af607f74fe2b040d149dc7b5cd" exitCode=0 Sep 29 12:38:34 crc kubenswrapper[4991]: I0929 12:38:34.117838 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5br8n/crc-debug-825vc" event={"ID":"d51887d4-be7d-4824-8931-cb4c6cf2592c","Type":"ContainerDied","Data":"a1f778170d6821c5fa8908786d64eb7f5edb66af607f74fe2b040d149dc7b5cd"} Sep 29 12:38:35 crc kubenswrapper[4991]: I0929 12:38:35.277592 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5br8n/crc-debug-825vc" Sep 29 12:38:35 crc kubenswrapper[4991]: I0929 12:38:35.315866 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5br8n/crc-debug-825vc"] Sep 29 12:38:35 crc kubenswrapper[4991]: I0929 12:38:35.327002 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5br8n/crc-debug-825vc"] Sep 29 12:38:35 crc kubenswrapper[4991]: I0929 12:38:35.406710 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27v8r\" (UniqueName: \"kubernetes.io/projected/d51887d4-be7d-4824-8931-cb4c6cf2592c-kube-api-access-27v8r\") pod \"d51887d4-be7d-4824-8931-cb4c6cf2592c\" (UID: \"d51887d4-be7d-4824-8931-cb4c6cf2592c\") " Sep 29 12:38:35 crc kubenswrapper[4991]: I0929 12:38:35.406998 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d51887d4-be7d-4824-8931-cb4c6cf2592c-host\") pod \"d51887d4-be7d-4824-8931-cb4c6cf2592c\" (UID: \"d51887d4-be7d-4824-8931-cb4c6cf2592c\") " Sep 29 12:38:35 crc kubenswrapper[4991]: I0929 12:38:35.407187 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d51887d4-be7d-4824-8931-cb4c6cf2592c-host" (OuterVolumeSpecName: "host") pod "d51887d4-be7d-4824-8931-cb4c6cf2592c" (UID: "d51887d4-be7d-4824-8931-cb4c6cf2592c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 12:38:35 crc kubenswrapper[4991]: I0929 12:38:35.407582 4991 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d51887d4-be7d-4824-8931-cb4c6cf2592c-host\") on node \"crc\" DevicePath \"\"" Sep 29 12:38:35 crc kubenswrapper[4991]: I0929 12:38:35.413526 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d51887d4-be7d-4824-8931-cb4c6cf2592c-kube-api-access-27v8r" (OuterVolumeSpecName: "kube-api-access-27v8r") pod "d51887d4-be7d-4824-8931-cb4c6cf2592c" (UID: "d51887d4-be7d-4824-8931-cb4c6cf2592c"). InnerVolumeSpecName "kube-api-access-27v8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 12:38:35 crc kubenswrapper[4991]: I0929 12:38:35.510300 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27v8r\" (UniqueName: \"kubernetes.io/projected/d51887d4-be7d-4824-8931-cb4c6cf2592c-kube-api-access-27v8r\") on node \"crc\" DevicePath \"\"" Sep 29 12:38:36 crc kubenswrapper[4991]: I0929 12:38:36.142603 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1df9595c3e980a3f160910622e0cdb6ac5fae68be4fe78130a8f2b2c7712fe92" Sep 29 12:38:36 crc kubenswrapper[4991]: I0929 12:38:36.142703 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5br8n/crc-debug-825vc" Sep 29 12:38:36 crc kubenswrapper[4991]: I0929 12:38:36.488113 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5br8n/crc-debug-vj4l7"] Sep 29 12:38:36 crc kubenswrapper[4991]: E0929 12:38:36.489885 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51887d4-be7d-4824-8931-cb4c6cf2592c" containerName="container-00" Sep 29 12:38:36 crc kubenswrapper[4991]: I0929 12:38:36.490047 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51887d4-be7d-4824-8931-cb4c6cf2592c" containerName="container-00" Sep 29 12:38:36 crc kubenswrapper[4991]: I0929 12:38:36.490529 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="d51887d4-be7d-4824-8931-cb4c6cf2592c" containerName="container-00" Sep 29 12:38:36 crc kubenswrapper[4991]: I0929 12:38:36.491927 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5br8n/crc-debug-vj4l7" Sep 29 12:38:36 crc kubenswrapper[4991]: I0929 12:38:36.636139 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hzhc\" (UniqueName: \"kubernetes.io/projected/8e66a87a-93ca-4786-a7b5-841ec84e706f-kube-api-access-4hzhc\") pod \"crc-debug-vj4l7\" (UID: \"8e66a87a-93ca-4786-a7b5-841ec84e706f\") " pod="openshift-must-gather-5br8n/crc-debug-vj4l7" Sep 29 12:38:36 crc kubenswrapper[4991]: I0929 12:38:36.636405 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e66a87a-93ca-4786-a7b5-841ec84e706f-host\") pod \"crc-debug-vj4l7\" (UID: \"8e66a87a-93ca-4786-a7b5-841ec84e706f\") " pod="openshift-must-gather-5br8n/crc-debug-vj4l7" Sep 29 12:38:36 crc kubenswrapper[4991]: I0929 12:38:36.739088 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hzhc\" (UniqueName: \"kubernetes.io/projected/8e66a87a-93ca-4786-a7b5-841ec84e706f-kube-api-access-4hzhc\") pod \"crc-debug-vj4l7\" (UID: \"8e66a87a-93ca-4786-a7b5-841ec84e706f\") " pod="openshift-must-gather-5br8n/crc-debug-vj4l7" Sep 29 12:38:36 crc kubenswrapper[4991]: I0929 12:38:36.739206 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e66a87a-93ca-4786-a7b5-841ec84e706f-host\") pod \"crc-debug-vj4l7\" (UID: \"8e66a87a-93ca-4786-a7b5-841ec84e706f\") " pod="openshift-must-gather-5br8n/crc-debug-vj4l7" Sep 29 12:38:36 crc kubenswrapper[4991]: I0929 12:38:36.739525 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e66a87a-93ca-4786-a7b5-841ec84e706f-host\") pod \"crc-debug-vj4l7\" (UID: \"8e66a87a-93ca-4786-a7b5-841ec84e706f\") " 
pod="openshift-must-gather-5br8n/crc-debug-vj4l7" Sep 29 12:38:36 crc kubenswrapper[4991]: I0929 12:38:36.762681 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hzhc\" (UniqueName: \"kubernetes.io/projected/8e66a87a-93ca-4786-a7b5-841ec84e706f-kube-api-access-4hzhc\") pod \"crc-debug-vj4l7\" (UID: \"8e66a87a-93ca-4786-a7b5-841ec84e706f\") " pod="openshift-must-gather-5br8n/crc-debug-vj4l7" Sep 29 12:38:36 crc kubenswrapper[4991]: I0929 12:38:36.811043 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5br8n/crc-debug-vj4l7" Sep 29 12:38:36 crc kubenswrapper[4991]: I0929 12:38:36.939007 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d51887d4-be7d-4824-8931-cb4c6cf2592c" path="/var/lib/kubelet/pods/d51887d4-be7d-4824-8931-cb4c6cf2592c/volumes" Sep 29 12:38:37 crc kubenswrapper[4991]: I0929 12:38:37.156494 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5br8n/crc-debug-vj4l7" event={"ID":"8e66a87a-93ca-4786-a7b5-841ec84e706f","Type":"ContainerStarted","Data":"e2d76711d72dd80abb9c34d462850c919c0aa473b14700af182b41e1642be5f6"} Sep 29 12:38:37 crc kubenswrapper[4991]: I0929 12:38:37.156559 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5br8n/crc-debug-vj4l7" event={"ID":"8e66a87a-93ca-4786-a7b5-841ec84e706f","Type":"ContainerStarted","Data":"f32889a1db5cca2a4831cc82cee412bd0fe580a1811b2aeafb8eb00f3376431f"} Sep 29 12:38:37 crc kubenswrapper[4991]: I0929 12:38:37.187039 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5br8n/crc-debug-vj4l7" podStartSLOduration=1.187017115 podStartE2EDuration="1.187017115s" podCreationTimestamp="2025-09-29 12:38:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 12:38:37.171577843 +0000 UTC m=+10853.027505871" watchObservedRunningTime="2025-09-29 12:38:37.187017115 +0000 UTC m=+10853.042945143" Sep 29 12:38:38 crc kubenswrapper[4991]: I0929 12:38:38.169423 4991 generic.go:334] "Generic (PLEG): container finished" podID="8e66a87a-93ca-4786-a7b5-841ec84e706f" containerID="e2d76711d72dd80abb9c34d462850c919c0aa473b14700af182b41e1642be5f6" exitCode=0 Sep 29 12:38:38 crc kubenswrapper[4991]: I0929 12:38:38.169758 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5br8n/crc-debug-vj4l7" event={"ID":"8e66a87a-93ca-4786-a7b5-841ec84e706f","Type":"ContainerDied","Data":"e2d76711d72dd80abb9c34d462850c919c0aa473b14700af182b41e1642be5f6"} Sep 29 12:38:39 crc kubenswrapper[4991]: I0929 12:38:39.307280 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5br8n/crc-debug-vj4l7" Sep 29 12:38:39 crc kubenswrapper[4991]: I0929 12:38:39.409313 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hzhc\" (UniqueName: \"kubernetes.io/projected/8e66a87a-93ca-4786-a7b5-841ec84e706f-kube-api-access-4hzhc\") pod \"8e66a87a-93ca-4786-a7b5-841ec84e706f\" (UID: \"8e66a87a-93ca-4786-a7b5-841ec84e706f\") " Sep 29 12:38:39 crc kubenswrapper[4991]: I0929 12:38:39.409593 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e66a87a-93ca-4786-a7b5-841ec84e706f-host\") pod \"8e66a87a-93ca-4786-a7b5-841ec84e706f\" (UID: \"8e66a87a-93ca-4786-a7b5-841ec84e706f\") " Sep 29 12:38:39 crc kubenswrapper[4991]: I0929 12:38:39.409812 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e66a87a-93ca-4786-a7b5-841ec84e706f-host" (OuterVolumeSpecName: "host") pod "8e66a87a-93ca-4786-a7b5-841ec84e706f" (UID: "8e66a87a-93ca-4786-a7b5-841ec84e706f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 12:38:39 crc kubenswrapper[4991]: I0929 12:38:39.410352 4991 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e66a87a-93ca-4786-a7b5-841ec84e706f-host\") on node \"crc\" DevicePath \"\"" Sep 29 12:38:39 crc kubenswrapper[4991]: I0929 12:38:39.421946 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e66a87a-93ca-4786-a7b5-841ec84e706f-kube-api-access-4hzhc" (OuterVolumeSpecName: "kube-api-access-4hzhc") pod "8e66a87a-93ca-4786-a7b5-841ec84e706f" (UID: "8e66a87a-93ca-4786-a7b5-841ec84e706f"). InnerVolumeSpecName "kube-api-access-4hzhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 12:38:39 crc kubenswrapper[4991]: I0929 12:38:39.511995 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hzhc\" (UniqueName: \"kubernetes.io/projected/8e66a87a-93ca-4786-a7b5-841ec84e706f-kube-api-access-4hzhc\") on node \"crc\" DevicePath \"\"" Sep 29 12:38:40 crc kubenswrapper[4991]: I0929 12:38:40.195561 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5br8n/crc-debug-vj4l7" event={"ID":"8e66a87a-93ca-4786-a7b5-841ec84e706f","Type":"ContainerDied","Data":"f32889a1db5cca2a4831cc82cee412bd0fe580a1811b2aeafb8eb00f3376431f"} Sep 29 12:38:40 crc kubenswrapper[4991]: I0929 12:38:40.195610 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f32889a1db5cca2a4831cc82cee412bd0fe580a1811b2aeafb8eb00f3376431f" Sep 29 12:38:40 crc kubenswrapper[4991]: I0929 12:38:40.195673 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5br8n/crc-debug-vj4l7" Sep 29 12:38:46 crc kubenswrapper[4991]: I0929 12:38:46.320756 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5br8n/crc-debug-vj4l7"] Sep 29 12:38:46 crc kubenswrapper[4991]: I0929 12:38:46.330483 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5br8n/crc-debug-vj4l7"] Sep 29 12:38:46 crc kubenswrapper[4991]: I0929 12:38:46.945642 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e66a87a-93ca-4786-a7b5-841ec84e706f" path="/var/lib/kubelet/pods/8e66a87a-93ca-4786-a7b5-841ec84e706f/volumes" Sep 29 12:38:47 crc kubenswrapper[4991]: I0929 12:38:47.588866 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5br8n/crc-debug-cjqnp"] Sep 29 12:38:47 crc kubenswrapper[4991]: E0929 12:38:47.589393 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e66a87a-93ca-4786-a7b5-841ec84e706f" containerName="container-00" Sep 29 12:38:47 crc kubenswrapper[4991]: I0929 12:38:47.589406 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e66a87a-93ca-4786-a7b5-841ec84e706f" containerName="container-00" Sep 29 12:38:47 crc kubenswrapper[4991]: I0929 12:38:47.589636 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e66a87a-93ca-4786-a7b5-841ec84e706f" containerName="container-00" Sep 29 12:38:47 crc kubenswrapper[4991]: I0929 12:38:47.590640 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5br8n/crc-debug-cjqnp" Sep 29 12:38:47 crc kubenswrapper[4991]: I0929 12:38:47.700518 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5hv4\" (UniqueName: \"kubernetes.io/projected/3a58982f-27d0-41c0-a7e7-ac3b855ff971-kube-api-access-z5hv4\") pod \"crc-debug-cjqnp\" (UID: \"3a58982f-27d0-41c0-a7e7-ac3b855ff971\") " pod="openshift-must-gather-5br8n/crc-debug-cjqnp" Sep 29 12:38:47 crc kubenswrapper[4991]: I0929 12:38:47.700749 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a58982f-27d0-41c0-a7e7-ac3b855ff971-host\") pod \"crc-debug-cjqnp\" (UID: \"3a58982f-27d0-41c0-a7e7-ac3b855ff971\") " pod="openshift-must-gather-5br8n/crc-debug-cjqnp" Sep 29 12:38:47 crc kubenswrapper[4991]: I0929 12:38:47.802986 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5hv4\" (UniqueName: \"kubernetes.io/projected/3a58982f-27d0-41c0-a7e7-ac3b855ff971-kube-api-access-z5hv4\") pod \"crc-debug-cjqnp\" (UID: \"3a58982f-27d0-41c0-a7e7-ac3b855ff971\") " pod="openshift-must-gather-5br8n/crc-debug-cjqnp" Sep 29 12:38:47 crc kubenswrapper[4991]: I0929 12:38:47.803104 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a58982f-27d0-41c0-a7e7-ac3b855ff971-host\") pod \"crc-debug-cjqnp\" (UID: \"3a58982f-27d0-41c0-a7e7-ac3b855ff971\") " pod="openshift-must-gather-5br8n/crc-debug-cjqnp" Sep 29 12:38:47 crc kubenswrapper[4991]: I0929 12:38:47.803320 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a58982f-27d0-41c0-a7e7-ac3b855ff971-host\") pod \"crc-debug-cjqnp\" (UID: \"3a58982f-27d0-41c0-a7e7-ac3b855ff971\") " pod="openshift-must-gather-5br8n/crc-debug-cjqnp" Sep 29 12:38:47 crc kubenswrapper[4991]: 
I0929 12:38:47.822234 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5hv4\" (UniqueName: \"kubernetes.io/projected/3a58982f-27d0-41c0-a7e7-ac3b855ff971-kube-api-access-z5hv4\") pod \"crc-debug-cjqnp\" (UID: \"3a58982f-27d0-41c0-a7e7-ac3b855ff971\") " pod="openshift-must-gather-5br8n/crc-debug-cjqnp" Sep 29 12:38:47 crc kubenswrapper[4991]: I0929 12:38:47.916684 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5br8n/crc-debug-cjqnp" Sep 29 12:38:48 crc kubenswrapper[4991]: I0929 12:38:48.288621 4991 generic.go:334] "Generic (PLEG): container finished" podID="3a58982f-27d0-41c0-a7e7-ac3b855ff971" containerID="81bbdf51f2b3679f1e95bfafe7d6ec79c8cc429719892fa2eae04ac4d3346fb8" exitCode=0 Sep 29 12:38:48 crc kubenswrapper[4991]: I0929 12:38:48.288697 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5br8n/crc-debug-cjqnp" event={"ID":"3a58982f-27d0-41c0-a7e7-ac3b855ff971","Type":"ContainerDied","Data":"81bbdf51f2b3679f1e95bfafe7d6ec79c8cc429719892fa2eae04ac4d3346fb8"} Sep 29 12:38:48 crc kubenswrapper[4991]: I0929 12:38:48.289009 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5br8n/crc-debug-cjqnp" event={"ID":"3a58982f-27d0-41c0-a7e7-ac3b855ff971","Type":"ContainerStarted","Data":"44e55548aef13d3b5ad06d3f0cbebad8c4e3cee4583552338b7d214bd8461b74"} Sep 29 12:38:48 crc kubenswrapper[4991]: I0929 12:38:48.330741 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5br8n/crc-debug-cjqnp"] Sep 29 12:38:48 crc kubenswrapper[4991]: I0929 12:38:48.341973 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5br8n/crc-debug-cjqnp"] Sep 29 12:38:50 crc kubenswrapper[4991]: I0929 12:38:50.080815 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5br8n/crc-debug-cjqnp" Sep 29 12:38:50 crc kubenswrapper[4991]: I0929 12:38:50.176538 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a58982f-27d0-41c0-a7e7-ac3b855ff971-host\") pod \"3a58982f-27d0-41c0-a7e7-ac3b855ff971\" (UID: \"3a58982f-27d0-41c0-a7e7-ac3b855ff971\") " Sep 29 12:38:50 crc kubenswrapper[4991]: I0929 12:38:50.176642 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a58982f-27d0-41c0-a7e7-ac3b855ff971-host" (OuterVolumeSpecName: "host") pod "3a58982f-27d0-41c0-a7e7-ac3b855ff971" (UID: "3a58982f-27d0-41c0-a7e7-ac3b855ff971"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 29 12:38:50 crc kubenswrapper[4991]: I0929 12:38:50.176879 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5hv4\" (UniqueName: \"kubernetes.io/projected/3a58982f-27d0-41c0-a7e7-ac3b855ff971-kube-api-access-z5hv4\") pod \"3a58982f-27d0-41c0-a7e7-ac3b855ff971\" (UID: \"3a58982f-27d0-41c0-a7e7-ac3b855ff971\") " Sep 29 12:38:50 crc kubenswrapper[4991]: I0929 12:38:50.177798 4991 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a58982f-27d0-41c0-a7e7-ac3b855ff971-host\") on node \"crc\" DevicePath \"\"" Sep 29 12:38:50 crc kubenswrapper[4991]: I0929 12:38:50.185142 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a58982f-27d0-41c0-a7e7-ac3b855ff971-kube-api-access-z5hv4" (OuterVolumeSpecName: "kube-api-access-z5hv4") pod "3a58982f-27d0-41c0-a7e7-ac3b855ff971" (UID: "3a58982f-27d0-41c0-a7e7-ac3b855ff971"). InnerVolumeSpecName "kube-api-access-z5hv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 12:38:50 crc kubenswrapper[4991]: I0929 12:38:50.280408 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5hv4\" (UniqueName: \"kubernetes.io/projected/3a58982f-27d0-41c0-a7e7-ac3b855ff971-kube-api-access-z5hv4\") on node \"crc\" DevicePath \"\"" Sep 29 12:38:50 crc kubenswrapper[4991]: I0929 12:38:50.313647 4991 scope.go:117] "RemoveContainer" containerID="81bbdf51f2b3679f1e95bfafe7d6ec79c8cc429719892fa2eae04ac4d3346fb8" Sep 29 12:38:50 crc kubenswrapper[4991]: I0929 12:38:50.313798 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5br8n/crc-debug-cjqnp" Sep 29 12:38:50 crc kubenswrapper[4991]: I0929 12:38:50.941285 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a58982f-27d0-41c0-a7e7-ac3b855ff971" path="/var/lib/kubelet/pods/3a58982f-27d0-41c0-a7e7-ac3b855ff971/volumes" Sep 29 12:38:51 crc kubenswrapper[4991]: I0929 12:38:51.080997 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb_164835c5-ecea-452d-9652-f252751a6b25/util/0.log" Sep 29 12:38:51 crc kubenswrapper[4991]: I0929 12:38:51.316815 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb_164835c5-ecea-452d-9652-f252751a6b25/util/0.log" Sep 29 12:38:51 crc kubenswrapper[4991]: I0929 12:38:51.400913 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb_164835c5-ecea-452d-9652-f252751a6b25/pull/0.log" Sep 29 12:38:51 crc kubenswrapper[4991]: I0929 12:38:51.555697 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb_164835c5-ecea-452d-9652-f252751a6b25/pull/0.log" Sep 29 12:38:51 crc kubenswrapper[4991]: I0929 12:38:51.763176 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb_164835c5-ecea-452d-9652-f252751a6b25/util/0.log" Sep 29 12:38:51 crc kubenswrapper[4991]: I0929 12:38:51.809672 4991 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb_164835c5-ecea-452d-9652-f252751a6b25/extract/0.log" Sep 29 12:38:51 crc kubenswrapper[4991]: I0929 12:38:51.818686 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6ffdd59b3bf1bb4a85b794e3f63a99a17badc2afeec929ab245b042e37bgbrb_164835c5-ecea-452d-9652-f252751a6b25/pull/0.log" Sep 29 12:38:52 crc kubenswrapper[4991]: I0929 12:38:52.019429 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6495d75b5-dpcb9_f0b63357-310d-4b63-9ba9-212c0f3c6dd4/kube-rbac-proxy/0.log" Sep 29 12:38:52 crc kubenswrapper[4991]: I0929 12:38:52.079173 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6495d75b5-dpcb9_f0b63357-310d-4b63-9ba9-212c0f3c6dd4/manager/0.log" Sep 29 12:38:52 crc kubenswrapper[4991]: I0929 12:38:52.135475 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-748c574d75-bjlxb_64c94e01-ecee-47b4-ab4b-182085a9dce5/kube-rbac-proxy/0.log" Sep 29 12:38:52 crc kubenswrapper[4991]: I0929 12:38:52.298469 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-748c574d75-bjlxb_64c94e01-ecee-47b4-ab4b-182085a9dce5/manager/0.log" Sep 29 12:38:52 crc kubenswrapper[4991]: I0929 12:38:52.421924 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d74f4d695-c74zk_861ab488-585c-407a-bfe6-97e8d01f20e6/kube-rbac-proxy/0.log" Sep 29 12:38:52 crc kubenswrapper[4991]: I0929 12:38:52.422937 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d74f4d695-c74zk_861ab488-585c-407a-bfe6-97e8d01f20e6/manager/0.log" Sep 29 12:38:52 crc kubenswrapper[4991]: I0929 12:38:52.596691 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67b5d44b7f-8969b_55a2358d-e838-4581-906b-ec7f1a3117bf/kube-rbac-proxy/0.log" Sep 29 12:38:52 crc kubenswrapper[4991]: I0929 12:38:52.743460 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67b5d44b7f-8969b_55a2358d-e838-4581-906b-ec7f1a3117bf/manager/0.log" Sep 29 12:38:52 crc kubenswrapper[4991]: I0929 12:38:52.759897 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-8ff95898-2svcm_663f9d51-4014-4b2d-b44d-96dca010a1f4/kube-rbac-proxy/0.log" Sep 29 12:38:52 crc kubenswrapper[4991]: I0929 12:38:52.953778 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-8ff95898-2svcm_663f9d51-4014-4b2d-b44d-96dca010a1f4/manager/0.log" Sep 29 12:38:52 crc kubenswrapper[4991]: I0929 12:38:52.971090 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-695847bc78-v4kx7_9cdd1ad5-293a-456e-8313-e23a3140f8f5/kube-rbac-proxy/0.log" Sep 29 12:38:53 crc kubenswrapper[4991]: I0929 12:38:53.018272 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-695847bc78-v4kx7_9cdd1ad5-293a-456e-8313-e23a3140f8f5/manager/0.log" Sep 29 12:38:53 crc kubenswrapper[4991]: I0929 12:38:53.244957 4991 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-858cd69f49-lktsl_248ebc15-872d-49f9-82e3-25814d7cc483/kube-rbac-proxy/0.log" Sep 29 12:38:53 crc kubenswrapper[4991]: I0929 12:38:53.382725 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-858cd69f49-lktsl_248ebc15-872d-49f9-82e3-25814d7cc483/manager/0.log" Sep 29 12:38:53 crc kubenswrapper[4991]: I0929 12:38:53.418384 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-9fc8d5567-6kksz_2e6f46c0-8a81-4203-946a-9cc5d0217b02/kube-rbac-proxy/0.log" Sep 29 12:38:53 crc kubenswrapper[4991]: I0929 12:38:53.461759 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-9fc8d5567-6kksz_2e6f46c0-8a81-4203-946a-9cc5d0217b02/manager/0.log" Sep 29 12:38:53 crc kubenswrapper[4991]: I0929 12:38:53.647790 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7bf498966c-bjf5h_76b136ef-8bd7-4695-807d-be77c22c87bd/kube-rbac-proxy/0.log" Sep 29 12:38:53 crc kubenswrapper[4991]: I0929 12:38:53.670251 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7bf498966c-bjf5h_76b136ef-8bd7-4695-807d-be77c22c87bd/manager/0.log" Sep 29 12:38:53 crc kubenswrapper[4991]: I0929 12:38:53.850072 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-56cf9c6b99-vvlp2_ffa64e52-6eef-46e8-a535-d9797274440b/kube-rbac-proxy/0.log" Sep 29 12:38:53 crc kubenswrapper[4991]: I0929 12:38:53.928229 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-56cf9c6b99-vvlp2_ffa64e52-6eef-46e8-a535-d9797274440b/manager/0.log" Sep 29 12:38:54 crc kubenswrapper[4991]: I0929 12:38:54.092035 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-687b9cf756-tldms_269a0f66-05b9-4bda-9125-e13b1a9264dc/kube-rbac-proxy/0.log" Sep 29 12:38:54 crc kubenswrapper[4991]: I0929 12:38:54.102696 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-687b9cf756-tldms_269a0f66-05b9-4bda-9125-e13b1a9264dc/manager/0.log" Sep 29 12:38:54 crc kubenswrapper[4991]: I0929 12:38:54.152198 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54d766c9f9-57tlx_9a454961-e67f-46ce-a5f0-f5bb3cff6b67/kube-rbac-proxy/0.log" Sep 29 12:38:54 crc kubenswrapper[4991]: I0929 12:38:54.248235 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54d766c9f9-57tlx_9a454961-e67f-46ce-a5f0-f5bb3cff6b67/manager/0.log" Sep 29 12:38:54 crc kubenswrapper[4991]: I0929 12:38:54.307771 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-c7c776c96-gxh2d_9bac430d-9a3c-42e4-8aba-d175875a29ac/kube-rbac-proxy/0.log" Sep 29 12:38:54 crc kubenswrapper[4991]: I0929 12:38:54.480114 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-c7c776c96-gxh2d_9bac430d-9a3c-42e4-8aba-d175875a29ac/manager/0.log" Sep 29 12:38:54 crc kubenswrapper[4991]: I0929 
12:38:54.523612 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-76fcc6dc7c-2bddq_43697432-9291-473b-add7-576d1db29307/kube-rbac-proxy/0.log" Sep 29 12:38:54 crc kubenswrapper[4991]: I0929 12:38:54.546137 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-76fcc6dc7c-2bddq_43697432-9291-473b-add7-576d1db29307/manager/0.log" Sep 29 12:38:54 crc kubenswrapper[4991]: I0929 12:38:54.704025 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6d776955-s7cf4_5406253f-041d-4ec4-a710-809c3d267e52/kube-rbac-proxy/0.log" Sep 29 12:38:54 crc kubenswrapper[4991]: I0929 12:38:54.715178 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6d776955-s7cf4_5406253f-041d-4ec4-a710-809c3d267e52/manager/0.log" Sep 29 12:38:54 crc kubenswrapper[4991]: I0929 12:38:54.761755 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-98b5bb4f8-zkjlm_1a937ee6-10b5-419c-8bab-ca067ab45efa/kube-rbac-proxy/0.log" Sep 29 12:38:54 crc kubenswrapper[4991]: I0929 12:38:54.973281 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-78bbdfbc57-k9wlt_28ffec7f-9b5d-45ce-9ef4-8a45a9dd0b5d/kube-rbac-proxy/0.log" Sep 29 12:38:55 crc kubenswrapper[4991]: I0929 12:38:55.209594 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-9vwmv_509dcced-c7e6-454f-a198-aa95e335527b/registry-server/0.log" Sep 29 12:38:55 crc kubenswrapper[4991]: I0929 12:38:55.276428 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-78bbdfbc57-k9wlt_28ffec7f-9b5d-45ce-9ef4-8a45a9dd0b5d/operator/0.log" Sep 29 12:38:55 crc kubenswrapper[4991]: I0929 12:38:55.332251 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5f95c46c78-54kjl_40de8749-3b71-4a0d-8483-ef2512644475/kube-rbac-proxy/0.log" Sep 29 12:38:55 crc kubenswrapper[4991]: I0929 12:38:55.504888 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5f95c46c78-54kjl_40de8749-3b71-4a0d-8483-ef2512644475/manager/0.log" Sep 29 12:38:55 crc kubenswrapper[4991]: I0929 12:38:55.550639 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-774b97b48-bxq6l_b33b3e90-d2bc-4c24-bc6e-8398c15f597d/kube-rbac-proxy/0.log" Sep 29 12:38:55 crc kubenswrapper[4991]: I0929 12:38:55.684382 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-774b97b48-bxq6l_b33b3e90-d2bc-4c24-bc6e-8398c15f597d/manager/0.log" Sep 29 12:38:55 crc kubenswrapper[4991]: I0929 12:38:55.895570 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-79d8469568-gsgkm_794146c2-be69-4a63-a485-7435c27e9f14/operator/0.log" Sep 29 12:38:56 crc kubenswrapper[4991]: I0929 12:38:56.080135 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bc7dc7bd9-k2q5h_36d942bf-7f08-474b-9386-41b1a4d32e01/kube-rbac-proxy/0.log" Sep 29 
12:38:56 crc kubenswrapper[4991]: I0929 12:38:56.241666 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bc7dc7bd9-k2q5h_36d942bf-7f08-474b-9386-41b1a4d32e01/manager/0.log" Sep 29 12:38:56 crc kubenswrapper[4991]: I0929 12:38:56.328729 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-98b5bb4f8-zkjlm_1a937ee6-10b5-419c-8bab-ca067ab45efa/manager/0.log" Sep 29 12:38:56 crc kubenswrapper[4991]: I0929 12:38:56.381082 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-889c789f9-9lbgt_927b3fac-f2e5-4009-9991-14615d5d0cc7/kube-rbac-proxy/0.log" Sep 29 12:38:56 crc kubenswrapper[4991]: I0929 12:38:56.583164 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-f66b554c6-f94sw_51c758a6-817d-41d2-958b-2945bda7e082/kube-rbac-proxy/0.log" Sep 29 12:38:56 crc kubenswrapper[4991]: I0929 12:38:56.758594 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-889c789f9-9lbgt_927b3fac-f2e5-4009-9991-14615d5d0cc7/manager/0.log" Sep 29 12:38:56 crc kubenswrapper[4991]: I0929 12:38:56.859231 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-f66b554c6-f94sw_51c758a6-817d-41d2-958b-2945bda7e082/manager/0.log" Sep 29 12:38:57 crc kubenswrapper[4991]: I0929 12:38:57.037995 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-76669f99c-chqdw_26d1e560-b3f8-43ed-af68-4834bf60e6e3/kube-rbac-proxy/0.log" Sep 29 12:38:57 crc kubenswrapper[4991]: I0929 12:38:57.072237 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-76669f99c-chqdw_26d1e560-b3f8-43ed-af68-4834bf60e6e3/manager/0.log" Sep 29 12:38:58 crc kubenswrapper[4991]: I0929 12:38:58.116326 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kdrdb"] Sep 29 12:38:58 crc kubenswrapper[4991]: E0929 12:38:58.117144 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a58982f-27d0-41c0-a7e7-ac3b855ff971" containerName="container-00" Sep 29 12:38:58 crc kubenswrapper[4991]: I0929 12:38:58.117161 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a58982f-27d0-41c0-a7e7-ac3b855ff971" containerName="container-00" Sep 29 12:38:58 crc kubenswrapper[4991]: I0929 12:38:58.117401 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a58982f-27d0-41c0-a7e7-ac3b855ff971" containerName="container-00" Sep 29 12:38:58 crc kubenswrapper[4991]: I0929 12:38:58.120061 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kdrdb" Sep 29 12:38:58 crc kubenswrapper[4991]: I0929 12:38:58.194847 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kdrdb"] Sep 29 12:38:58 crc kubenswrapper[4991]: I0929 12:38:58.315758 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a585eb-bab5-41ea-a622-67e39cccdf33-catalog-content\") pod \"certified-operators-kdrdb\" (UID: \"c4a585eb-bab5-41ea-a622-67e39cccdf33\") " pod="openshift-marketplace/certified-operators-kdrdb" Sep 29 12:38:58 crc kubenswrapper[4991]: I0929 12:38:58.316090 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a585eb-bab5-41ea-a622-67e39cccdf33-utilities\") pod \"certified-operators-kdrdb\" (UID: \"c4a585eb-bab5-41ea-a622-67e39cccdf33\") " pod="openshift-marketplace/certified-operators-kdrdb" Sep 29 12:38:58 crc kubenswrapper[4991]: I0929 12:38:58.316263 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhz57\" (UniqueName: \"kubernetes.io/projected/c4a585eb-bab5-41ea-a622-67e39cccdf33-kube-api-access-jhz57\") pod \"certified-operators-kdrdb\" (UID: \"c4a585eb-bab5-41ea-a622-67e39cccdf33\") " pod="openshift-marketplace/certified-operators-kdrdb" Sep 29 12:38:58 crc kubenswrapper[4991]: I0929 12:38:58.418342 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhz57\" (UniqueName: \"kubernetes.io/projected/c4a585eb-bab5-41ea-a622-67e39cccdf33-kube-api-access-jhz57\") pod \"certified-operators-kdrdb\" (UID: \"c4a585eb-bab5-41ea-a622-67e39cccdf33\") " pod="openshift-marketplace/certified-operators-kdrdb" Sep 29 12:38:58 crc kubenswrapper[4991]: I0929 12:38:58.418611 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a585eb-bab5-41ea-a622-67e39cccdf33-catalog-content\") pod \"certified-operators-kdrdb\" (UID: \"c4a585eb-bab5-41ea-a622-67e39cccdf33\") " pod="openshift-marketplace/certified-operators-kdrdb" Sep 29 12:38:58 crc kubenswrapper[4991]: I0929 12:38:58.418763 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a585eb-bab5-41ea-a622-67e39cccdf33-utilities\") pod \"certified-operators-kdrdb\" (UID: \"c4a585eb-bab5-41ea-a622-67e39cccdf33\") " pod="openshift-marketplace/certified-operators-kdrdb" Sep 29 12:38:58 crc kubenswrapper[4991]: I0929 12:38:58.419162 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a585eb-bab5-41ea-a622-67e39cccdf33-catalog-content\") pod \"certified-operators-kdrdb\" (UID: \"c4a585eb-bab5-41ea-a622-67e39cccdf33\") " pod="openshift-marketplace/certified-operators-kdrdb" Sep 29 12:38:58 crc kubenswrapper[4991]: I0929 12:38:58.419250 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a585eb-bab5-41ea-a622-67e39cccdf33-utilities\") pod \"certified-operators-kdrdb\" (UID: \"c4a585eb-bab5-41ea-a622-67e39cccdf33\") " pod="openshift-marketplace/certified-operators-kdrdb" Sep 29 12:38:58 crc kubenswrapper[4991]: I0929 12:38:58.451910 4991 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jhz57\" (UniqueName: \"kubernetes.io/projected/c4a585eb-bab5-41ea-a622-67e39cccdf33-kube-api-access-jhz57\") pod \"certified-operators-kdrdb\" (UID: \"c4a585eb-bab5-41ea-a622-67e39cccdf33\") " pod="openshift-marketplace/certified-operators-kdrdb" Sep 29 12:38:58 crc kubenswrapper[4991]: I0929 12:38:58.749270 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kdrdb" Sep 29 12:39:00 crc kubenswrapper[4991]: I0929 12:39:00.039165 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kdrdb"] Sep 29 12:39:01 crc kubenswrapper[4991]: I0929 12:39:01.454914 4991 generic.go:334] "Generic (PLEG): container finished" podID="c4a585eb-bab5-41ea-a622-67e39cccdf33" containerID="c16b803086b2185c8139530f28e6808f4d4dc55ce7790a744934ff9635d27fd7" exitCode=0 Sep 29 12:39:01 crc kubenswrapper[4991]: I0929 12:39:01.455119 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdrdb" event={"ID":"c4a585eb-bab5-41ea-a622-67e39cccdf33","Type":"ContainerDied","Data":"c16b803086b2185c8139530f28e6808f4d4dc55ce7790a744934ff9635d27fd7"} Sep 29 12:39:01 crc kubenswrapper[4991]: I0929 12:39:01.455424 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdrdb" event={"ID":"c4a585eb-bab5-41ea-a622-67e39cccdf33","Type":"ContainerStarted","Data":"a5874355afd63a70ab3d9f0a9776e791325ffffa91745d9fc265920b2d2f13ed"} Sep 29 12:39:02 crc kubenswrapper[4991]: I0929 12:39:02.471830 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdrdb" event={"ID":"c4a585eb-bab5-41ea-a622-67e39cccdf33","Type":"ContainerStarted","Data":"e09423a5c7066f1704a408ad378bd22f73ab3bbc22c86c89eeb9fd071b4825ad"} Sep 29 12:39:04 crc kubenswrapper[4991]: I0929 12:39:04.493187 4991 generic.go:334] "Generic (PLEG): container finished" podID="c4a585eb-bab5-41ea-a622-67e39cccdf33" containerID="e09423a5c7066f1704a408ad378bd22f73ab3bbc22c86c89eeb9fd071b4825ad" exitCode=0 Sep 29 12:39:04 crc kubenswrapper[4991]: I0929 12:39:04.493284 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdrdb" event={"ID":"c4a585eb-bab5-41ea-a622-67e39cccdf33","Type":"ContainerDied","Data":"e09423a5c7066f1704a408ad378bd22f73ab3bbc22c86c89eeb9fd071b4825ad"} Sep 29 12:39:06 crc kubenswrapper[4991]: I0929 12:39:06.518333 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdrdb" event={"ID":"c4a585eb-bab5-41ea-a622-67e39cccdf33","Type":"ContainerStarted","Data":"b70b5c470f90147bd17f1696346a023855b64ff56475bfc0d11c67e3945bf241"} Sep 29 12:39:06 crc kubenswrapper[4991]: I0929 12:39:06.552088 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kdrdb" podStartSLOduration=5.061939521 podStartE2EDuration="8.552065101s" podCreationTimestamp="2025-09-29 12:38:58 +0000 UTC" firstStartedPulling="2025-09-29 12:39:01.458329155 +0000 UTC m=+10877.314257183" lastFinishedPulling="2025-09-29 12:39:04.948454735 +0000 UTC m=+10880.804382763" observedRunningTime="2025-09-29 12:39:06.551809285 +0000 UTC m=+10882.407737323" watchObservedRunningTime="2025-09-29 12:39:06.552065101 +0000 UTC m=+10882.407993129" Sep 29 12:39:08 crc kubenswrapper[4991]: I0929 12:39:08.751896 4991 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kdrdb" Sep 29 12:39:08 crc kubenswrapper[4991]: I0929 12:39:08.752501 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kdrdb" Sep 29 12:39:08 crc kubenswrapper[4991]: I0929 12:39:08.822266 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kdrdb" Sep 29 12:39:15 crc kubenswrapper[4991]: I0929 12:39:15.372662 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pg6w5"] Sep 29 12:39:15 crc kubenswrapper[4991]: I0929 12:39:15.377209 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pg6w5" Sep 29 12:39:15 crc kubenswrapper[4991]: I0929 12:39:15.390894 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pg6w5"] Sep 29 12:39:15 crc kubenswrapper[4991]: I0929 12:39:15.508108 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73d187b-acdb-42b6-bff8-f2a9d84e23dd-utilities\") pod \"redhat-operators-pg6w5\" (UID: \"f73d187b-acdb-42b6-bff8-f2a9d84e23dd\") " pod="openshift-marketplace/redhat-operators-pg6w5" Sep 29 12:39:15 crc kubenswrapper[4991]: I0929 12:39:15.508158 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73d187b-acdb-42b6-bff8-f2a9d84e23dd-catalog-content\") pod \"redhat-operators-pg6w5\" (UID: \"f73d187b-acdb-42b6-bff8-f2a9d84e23dd\") " pod="openshift-marketplace/redhat-operators-pg6w5" Sep 29 12:39:15 crc kubenswrapper[4991]: I0929 12:39:15.508418 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stc7l\" (UniqueName: \"kubernetes.io/projected/f73d187b-acdb-42b6-bff8-f2a9d84e23dd-kube-api-access-stc7l\") pod \"redhat-operators-pg6w5\" (UID: \"f73d187b-acdb-42b6-bff8-f2a9d84e23dd\") " pod="openshift-marketplace/redhat-operators-pg6w5" Sep 29 12:39:15 crc kubenswrapper[4991]: I0929 12:39:15.612434 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73d187b-acdb-42b6-bff8-f2a9d84e23dd-utilities\") pod \"redhat-operators-pg6w5\" (UID: \"f73d187b-acdb-42b6-bff8-f2a9d84e23dd\") " pod="openshift-marketplace/redhat-operators-pg6w5" Sep 29 12:39:15 crc kubenswrapper[4991]: I0929 12:39:15.612497 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73d187b-acdb-42b6-bff8-f2a9d84e23dd-catalog-content\") pod \"redhat-operators-pg6w5\" (UID: \"f73d187b-acdb-42b6-bff8-f2a9d84e23dd\") " pod="openshift-marketplace/redhat-operators-pg6w5" Sep 29 12:39:15 crc kubenswrapper[4991]: I0929 12:39:15.612574 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stc7l\" (UniqueName: \"kubernetes.io/projected/f73d187b-acdb-42b6-bff8-f2a9d84e23dd-kube-api-access-stc7l\") pod \"redhat-operators-pg6w5\" (UID: \"f73d187b-acdb-42b6-bff8-f2a9d84e23dd\") " pod="openshift-marketplace/redhat-operators-pg6w5" Sep 29 12:39:15 crc kubenswrapper[4991]: I0929 12:39:15.613153 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/f73d187b-acdb-42b6-bff8-f2a9d84e23dd-catalog-content\") pod \"redhat-operators-pg6w5\" (UID: \"f73d187b-acdb-42b6-bff8-f2a9d84e23dd\") " pod="openshift-marketplace/redhat-operators-pg6w5" Sep 29 12:39:15 crc kubenswrapper[4991]: I0929 12:39:15.613153 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73d187b-acdb-42b6-bff8-f2a9d84e23dd-utilities\") pod \"redhat-operators-pg6w5\" (UID: \"f73d187b-acdb-42b6-bff8-f2a9d84e23dd\") " pod="openshift-marketplace/redhat-operators-pg6w5" Sep 29 12:39:15 crc kubenswrapper[4991]: I0929 12:39:15.637790 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stc7l\" (UniqueName: \"kubernetes.io/projected/f73d187b-acdb-42b6-bff8-f2a9d84e23dd-kube-api-access-stc7l\") pod \"redhat-operators-pg6w5\" (UID: \"f73d187b-acdb-42b6-bff8-f2a9d84e23dd\") " pod="openshift-marketplace/redhat-operators-pg6w5" Sep 29 12:39:15 crc kubenswrapper[4991]: I0929 12:39:15.707999 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pg6w5" Sep 29 12:39:16 crc kubenswrapper[4991]: I0929 12:39:16.407522 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pg6w5"] Sep 29 12:39:16 crc kubenswrapper[4991]: W0929 12:39:16.411880 4991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf73d187b_acdb_42b6_bff8_f2a9d84e23dd.slice/crio-9b491d256e81d9319788f286abeae8acdf0b2ccf410275a07512e6d752a50f17 WatchSource:0}: Error finding container 9b491d256e81d9319788f286abeae8acdf0b2ccf410275a07512e6d752a50f17: Status 404 returned error can't find the container with id 9b491d256e81d9319788f286abeae8acdf0b2ccf410275a07512e6d752a50f17 Sep 29 12:39:16 crc kubenswrapper[4991]: I0929 12:39:16.630963 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg6w5" event={"ID":"f73d187b-acdb-42b6-bff8-f2a9d84e23dd","Type":"ContainerStarted","Data":"9b491d256e81d9319788f286abeae8acdf0b2ccf410275a07512e6d752a50f17"} Sep 29 12:39:17 crc kubenswrapper[4991]: I0929 12:39:17.644288 4991 generic.go:334] "Generic (PLEG): container finished" podID="f73d187b-acdb-42b6-bff8-f2a9d84e23dd" containerID="494fcfa82d90421aa534a56a02a768d6310dd6f0601c5577b1580e7340065102" exitCode=0 Sep 29 12:39:17 crc kubenswrapper[4991]: I0929 12:39:17.644368 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg6w5" event={"ID":"f73d187b-acdb-42b6-bff8-f2a9d84e23dd","Type":"ContainerDied","Data":"494fcfa82d90421aa534a56a02a768d6310dd6f0601c5577b1580e7340065102"} Sep 29 12:39:18 crc kubenswrapper[4991]: I0929 12:39:18.816346 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kdrdb" Sep 29 12:39:19 crc kubenswrapper[4991]: I0929 12:39:19.503146 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-ftbbj_4462bce4-6cf5-4d65-890e-4d1c8b0523fa/control-plane-machine-set-operator/0.log" Sep 29 12:39:19 crc kubenswrapper[4991]: I0929 12:39:19.679311 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg6w5" 
event={"ID":"f73d187b-acdb-42b6-bff8-f2a9d84e23dd","Type":"ContainerStarted","Data":"474a2834c63001be5d0ca8df346e468433e4db8610874cb6c3d36318760fee4e"} Sep 29 12:39:19 crc kubenswrapper[4991]: I0929 12:39:19.788592 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kdrdb"] Sep 29 12:39:19 crc kubenswrapper[4991]: I0929 12:39:19.789191 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kdrdb" podUID="c4a585eb-bab5-41ea-a622-67e39cccdf33" containerName="registry-server" containerID="cri-o://b70b5c470f90147bd17f1696346a023855b64ff56475bfc0d11c67e3945bf241" gracePeriod=2 Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.296332 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-r4x2n_34f20ab2-1d2b-4a17-923b-ad9151c86dcf/kube-rbac-proxy/0.log" Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.358790 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-r4x2n_34f20ab2-1d2b-4a17-923b-ad9151c86dcf/machine-api-operator/0.log" Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.554589 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kdrdb" Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.640114 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a585eb-bab5-41ea-a622-67e39cccdf33-utilities\") pod \"c4a585eb-bab5-41ea-a622-67e39cccdf33\" (UID: \"c4a585eb-bab5-41ea-a622-67e39cccdf33\") " Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.640812 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a585eb-bab5-41ea-a622-67e39cccdf33-catalog-content\") pod \"c4a585eb-bab5-41ea-a622-67e39cccdf33\" (UID: \"c4a585eb-bab5-41ea-a622-67e39cccdf33\") " Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.640964 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhz57\" (UniqueName: \"kubernetes.io/projected/c4a585eb-bab5-41ea-a622-67e39cccdf33-kube-api-access-jhz57\") pod \"c4a585eb-bab5-41ea-a622-67e39cccdf33\" (UID: \"c4a585eb-bab5-41ea-a622-67e39cccdf33\") " Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.641234 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4a585eb-bab5-41ea-a622-67e39cccdf33-utilities" (OuterVolumeSpecName: "utilities") pod "c4a585eb-bab5-41ea-a622-67e39cccdf33" (UID: "c4a585eb-bab5-41ea-a622-67e39cccdf33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.641850 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a585eb-bab5-41ea-a622-67e39cccdf33-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.670075 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4a585eb-bab5-41ea-a622-67e39cccdf33-kube-api-access-jhz57" (OuterVolumeSpecName: "kube-api-access-jhz57") pod "c4a585eb-bab5-41ea-a622-67e39cccdf33" (UID: "c4a585eb-bab5-41ea-a622-67e39cccdf33"). InnerVolumeSpecName "kube-api-access-jhz57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.717837 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4a585eb-bab5-41ea-a622-67e39cccdf33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4a585eb-bab5-41ea-a622-67e39cccdf33" (UID: "c4a585eb-bab5-41ea-a622-67e39cccdf33"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.732889 4991 generic.go:334] "Generic (PLEG): container finished" podID="c4a585eb-bab5-41ea-a622-67e39cccdf33" containerID="b70b5c470f90147bd17f1696346a023855b64ff56475bfc0d11c67e3945bf241" exitCode=0 Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.733672 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kdrdb" Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.733677 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdrdb" event={"ID":"c4a585eb-bab5-41ea-a622-67e39cccdf33","Type":"ContainerDied","Data":"b70b5c470f90147bd17f1696346a023855b64ff56475bfc0d11c67e3945bf241"} Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.733746 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdrdb" event={"ID":"c4a585eb-bab5-41ea-a622-67e39cccdf33","Type":"ContainerDied","Data":"a5874355afd63a70ab3d9f0a9776e791325ffffa91745d9fc265920b2d2f13ed"} Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.733765 4991 scope.go:117] "RemoveContainer" containerID="b70b5c470f90147bd17f1696346a023855b64ff56475bfc0d11c67e3945bf241" Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.744397 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhz57\" (UniqueName: \"kubernetes.io/projected/c4a585eb-bab5-41ea-a622-67e39cccdf33-kube-api-access-jhz57\") on node \"crc\" DevicePath \"\"" Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.744434 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a585eb-bab5-41ea-a622-67e39cccdf33-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.774918 4991 scope.go:117] "RemoveContainer" containerID="e09423a5c7066f1704a408ad378bd22f73ab3bbc22c86c89eeb9fd071b4825ad" Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.775183 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kdrdb"] Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.786408 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kdrdb"] Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.814869 4991 scope.go:117] "RemoveContainer" containerID="c16b803086b2185c8139530f28e6808f4d4dc55ce7790a744934ff9635d27fd7" Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.875357 4991 scope.go:117] "RemoveContainer" containerID="b70b5c470f90147bd17f1696346a023855b64ff56475bfc0d11c67e3945bf241" Sep 29 12:39:20 crc kubenswrapper[4991]: E0929 12:39:20.876242 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b70b5c470f90147bd17f1696346a023855b64ff56475bfc0d11c67e3945bf241\": container with ID starting with 
b70b5c470f90147bd17f1696346a023855b64ff56475bfc0d11c67e3945bf241 not found: ID does not exist" containerID="b70b5c470f90147bd17f1696346a023855b64ff56475bfc0d11c67e3945bf241" Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.876288 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b70b5c470f90147bd17f1696346a023855b64ff56475bfc0d11c67e3945bf241"} err="failed to get container status \"b70b5c470f90147bd17f1696346a023855b64ff56475bfc0d11c67e3945bf241\": rpc error: code = NotFound desc = could not find container \"b70b5c470f90147bd17f1696346a023855b64ff56475bfc0d11c67e3945bf241\": container with ID starting with b70b5c470f90147bd17f1696346a023855b64ff56475bfc0d11c67e3945bf241 not found: ID does not exist" Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.876316 4991 scope.go:117] "RemoveContainer" containerID="e09423a5c7066f1704a408ad378bd22f73ab3bbc22c86c89eeb9fd071b4825ad" Sep 29 12:39:20 crc kubenswrapper[4991]: E0929 12:39:20.876719 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e09423a5c7066f1704a408ad378bd22f73ab3bbc22c86c89eeb9fd071b4825ad\": container with ID starting with e09423a5c7066f1704a408ad378bd22f73ab3bbc22c86c89eeb9fd071b4825ad not found: ID does not exist" containerID="e09423a5c7066f1704a408ad378bd22f73ab3bbc22c86c89eeb9fd071b4825ad" Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.876772 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e09423a5c7066f1704a408ad378bd22f73ab3bbc22c86c89eeb9fd071b4825ad"} err="failed to get container status \"e09423a5c7066f1704a408ad378bd22f73ab3bbc22c86c89eeb9fd071b4825ad\": rpc error: code = NotFound desc = could not find container \"e09423a5c7066f1704a408ad378bd22f73ab3bbc22c86c89eeb9fd071b4825ad\": container with ID starting with e09423a5c7066f1704a408ad378bd22f73ab3bbc22c86c89eeb9fd071b4825ad not found: ID does not exist" Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.876830 4991 scope.go:117] "RemoveContainer" containerID="c16b803086b2185c8139530f28e6808f4d4dc55ce7790a744934ff9635d27fd7" Sep 29 12:39:20 crc kubenswrapper[4991]: E0929 12:39:20.877505 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c16b803086b2185c8139530f28e6808f4d4dc55ce7790a744934ff9635d27fd7\": container with ID starting with c16b803086b2185c8139530f28e6808f4d4dc55ce7790a744934ff9635d27fd7 not found: ID does not exist" containerID="c16b803086b2185c8139530f28e6808f4d4dc55ce7790a744934ff9635d27fd7" Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.877540 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c16b803086b2185c8139530f28e6808f4d4dc55ce7790a744934ff9635d27fd7"} err="failed to get container status \"c16b803086b2185c8139530f28e6808f4d4dc55ce7790a744934ff9635d27fd7\": rpc error: code = NotFound desc = could not find container \"c16b803086b2185c8139530f28e6808f4d4dc55ce7790a744934ff9635d27fd7\": container with ID starting with c16b803086b2185c8139530f28e6808f4d4dc55ce7790a744934ff9635d27fd7 not found: ID does not exist" Sep 29 12:39:20 crc kubenswrapper[4991]: I0929 12:39:20.942641 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4a585eb-bab5-41ea-a622-67e39cccdf33" path="/var/lib/kubelet/pods/c4a585eb-bab5-41ea-a622-67e39cccdf33/volumes" Sep 29 12:39:24 crc kubenswrapper[4991]: I0929 12:39:24.784032 
4991 generic.go:334] "Generic (PLEG): container finished" podID="f73d187b-acdb-42b6-bff8-f2a9d84e23dd" containerID="474a2834c63001be5d0ca8df346e468433e4db8610874cb6c3d36318760fee4e" exitCode=0 Sep 29 12:39:24 crc kubenswrapper[4991]: I0929 12:39:24.784093 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg6w5" event={"ID":"f73d187b-acdb-42b6-bff8-f2a9d84e23dd","Type":"ContainerDied","Data":"474a2834c63001be5d0ca8df346e468433e4db8610874cb6c3d36318760fee4e"} Sep 29 12:39:25 crc kubenswrapper[4991]: I0929 12:39:25.801422 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg6w5" event={"ID":"f73d187b-acdb-42b6-bff8-f2a9d84e23dd","Type":"ContainerStarted","Data":"cadea8f17bd78cc3d1462a73d647f36690c1b149bb3d7826fef4f45d432a6be8"} Sep 29 12:39:25 crc kubenswrapper[4991]: I0929 12:39:25.836707 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pg6w5" podStartSLOduration=3.22820426 podStartE2EDuration="10.836684516s" podCreationTimestamp="2025-09-29 12:39:15 +0000 UTC" firstStartedPulling="2025-09-29 12:39:17.646878509 +0000 UTC m=+10893.502806537" lastFinishedPulling="2025-09-29 12:39:25.255358765 +0000 UTC m=+10901.111286793" observedRunningTime="2025-09-29 12:39:25.823468331 +0000 UTC m=+10901.679396369" watchObservedRunningTime="2025-09-29 12:39:25.836684516 +0000 UTC m=+10901.692612544" Sep 29 12:39:35 crc kubenswrapper[4991]: I0929 12:39:35.708314 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pg6w5" Sep 29 12:39:35 crc kubenswrapper[4991]: I0929 12:39:35.708892 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pg6w5" Sep 29 12:39:35 crc kubenswrapper[4991]: I0929 12:39:35.983463 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-jkfxh_851cddb0-2bb3-4344-aaf9-e94e095a72a5/cert-manager-cainjector/0.log" Sep 29 12:39:35 crc kubenswrapper[4991]: I0929 12:39:35.998501 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-th64x_cf2e698b-49b8-438e-aa92-318a0605f78f/cert-manager-controller/0.log" Sep 29 12:39:36 crc kubenswrapper[4991]: I0929 12:39:36.198653 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-stx6r_b075dfdf-7089-442c-99eb-c93b2d5f3d6c/cert-manager-webhook/0.log" Sep 29 12:39:36 crc kubenswrapper[4991]: I0929 12:39:36.765689 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pg6w5" podUID="f73d187b-acdb-42b6-bff8-f2a9d84e23dd" containerName="registry-server" probeResult="failure" output=< Sep 29 12:39:36 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s Sep 29 12:39:36 crc kubenswrapper[4991]: > Sep 29 12:39:42 crc kubenswrapper[4991]: I0929 12:39:42.325040 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qdwp9"] Sep 29 12:39:42 crc kubenswrapper[4991]: E0929 12:39:42.326086 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a585eb-bab5-41ea-a622-67e39cccdf33" containerName="extract-content" Sep 29 12:39:42 crc kubenswrapper[4991]: I0929 12:39:42.326100 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a585eb-bab5-41ea-a622-67e39cccdf33" containerName="extract-content" 
Sep 29 12:39:42 crc kubenswrapper[4991]: E0929 12:39:42.326114 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a585eb-bab5-41ea-a622-67e39cccdf33" containerName="extract-utilities"
Sep 29 12:39:42 crc kubenswrapper[4991]: I0929 12:39:42.326120 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a585eb-bab5-41ea-a622-67e39cccdf33" containerName="extract-utilities"
Sep 29 12:39:42 crc kubenswrapper[4991]: E0929 12:39:42.326152 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a585eb-bab5-41ea-a622-67e39cccdf33" containerName="registry-server"
Sep 29 12:39:42 crc kubenswrapper[4991]: I0929 12:39:42.326158 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a585eb-bab5-41ea-a622-67e39cccdf33" containerName="registry-server"
Sep 29 12:39:42 crc kubenswrapper[4991]: I0929 12:39:42.326400 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4a585eb-bab5-41ea-a622-67e39cccdf33" containerName="registry-server"
Sep 29 12:39:42 crc kubenswrapper[4991]: I0929 12:39:42.328587 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdwp9"
Sep 29 12:39:42 crc kubenswrapper[4991]: I0929 12:39:42.346534 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdwp9"]
Sep 29 12:39:42 crc kubenswrapper[4991]: I0929 12:39:42.427590 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4cab51-c4b3-44a5-a322-80f8d06b4dee-utilities\") pod \"redhat-marketplace-qdwp9\" (UID: \"5b4cab51-c4b3-44a5-a322-80f8d06b4dee\") " pod="openshift-marketplace/redhat-marketplace-qdwp9"
Sep 29 12:39:42 crc kubenswrapper[4991]: I0929 12:39:42.427672 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsprc\" (UniqueName: \"kubernetes.io/projected/5b4cab51-c4b3-44a5-a322-80f8d06b4dee-kube-api-access-gsprc\") pod \"redhat-marketplace-qdwp9\" (UID: \"5b4cab51-c4b3-44a5-a322-80f8d06b4dee\") " pod="openshift-marketplace/redhat-marketplace-qdwp9"
Sep 29 12:39:42 crc kubenswrapper[4991]: I0929 12:39:42.428033 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4cab51-c4b3-44a5-a322-80f8d06b4dee-catalog-content\") pod \"redhat-marketplace-qdwp9\" (UID: \"5b4cab51-c4b3-44a5-a322-80f8d06b4dee\") " pod="openshift-marketplace/redhat-marketplace-qdwp9"
Sep 29 12:39:42 crc kubenswrapper[4991]: I0929 12:39:42.530861 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsprc\" (UniqueName: \"kubernetes.io/projected/5b4cab51-c4b3-44a5-a322-80f8d06b4dee-kube-api-access-gsprc\") pod \"redhat-marketplace-qdwp9\" (UID: \"5b4cab51-c4b3-44a5-a322-80f8d06b4dee\") " pod="openshift-marketplace/redhat-marketplace-qdwp9"
Sep 29 12:39:42 crc kubenswrapper[4991]: I0929 12:39:42.531321 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4cab51-c4b3-44a5-a322-80f8d06b4dee-catalog-content\") pod \"redhat-marketplace-qdwp9\" (UID: \"5b4cab51-c4b3-44a5-a322-80f8d06b4dee\") " pod="openshift-marketplace/redhat-marketplace-qdwp9"
Sep 29 12:39:42 crc kubenswrapper[4991]: I0929 12:39:42.531622 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4cab51-c4b3-44a5-a322-80f8d06b4dee-utilities\") pod \"redhat-marketplace-qdwp9\" (UID: \"5b4cab51-c4b3-44a5-a322-80f8d06b4dee\") " pod="openshift-marketplace/redhat-marketplace-qdwp9"
Sep 29 12:39:42 crc kubenswrapper[4991]: I0929 12:39:42.531924 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4cab51-c4b3-44a5-a322-80f8d06b4dee-catalog-content\") pod \"redhat-marketplace-qdwp9\" (UID: \"5b4cab51-c4b3-44a5-a322-80f8d06b4dee\") " pod="openshift-marketplace/redhat-marketplace-qdwp9"
Sep 29 12:39:42 crc kubenswrapper[4991]: I0929 12:39:42.532106 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4cab51-c4b3-44a5-a322-80f8d06b4dee-utilities\") pod \"redhat-marketplace-qdwp9\" (UID: \"5b4cab51-c4b3-44a5-a322-80f8d06b4dee\") " pod="openshift-marketplace/redhat-marketplace-qdwp9"
Sep 29 12:39:42 crc kubenswrapper[4991]: I0929 12:39:42.558647 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsprc\" (UniqueName: \"kubernetes.io/projected/5b4cab51-c4b3-44a5-a322-80f8d06b4dee-kube-api-access-gsprc\") pod \"redhat-marketplace-qdwp9\" (UID: \"5b4cab51-c4b3-44a5-a322-80f8d06b4dee\") " pod="openshift-marketplace/redhat-marketplace-qdwp9"
Sep 29 12:39:42 crc kubenswrapper[4991]: I0929 12:39:42.659708 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdwp9"
Sep 29 12:39:43 crc kubenswrapper[4991]: I0929 12:39:43.239876 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdwp9"]
Sep 29 12:39:44 crc kubenswrapper[4991]: I0929 12:39:44.008274 4991 generic.go:334] "Generic (PLEG): container finished" podID="5b4cab51-c4b3-44a5-a322-80f8d06b4dee" containerID="deda7ae0200312bea955c75700ea9d37c0565e9e743521b9ce8854ab0c8a1746" exitCode=0
Sep 29 12:39:44 crc kubenswrapper[4991]: I0929 12:39:44.008461 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdwp9" event={"ID":"5b4cab51-c4b3-44a5-a322-80f8d06b4dee","Type":"ContainerDied","Data":"deda7ae0200312bea955c75700ea9d37c0565e9e743521b9ce8854ab0c8a1746"}
Sep 29 12:39:44 crc kubenswrapper[4991]: I0929 12:39:44.008526 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdwp9" event={"ID":"5b4cab51-c4b3-44a5-a322-80f8d06b4dee","Type":"ContainerStarted","Data":"875ba90bab258c6dcb9fd946fee19f0796f8bbbb004d3ec5b499b137d84dd9db"}
Sep 29 12:39:45 crc kubenswrapper[4991]: I0929 12:39:45.022497 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdwp9" event={"ID":"5b4cab51-c4b3-44a5-a322-80f8d06b4dee","Type":"ContainerStarted","Data":"6c64377451f2b9f0b3f0c433f8561baddc2580686b76aab08776a7d78780888d"}
Sep 29 12:39:46 crc kubenswrapper[4991]: I0929 12:39:46.034807 4991 generic.go:334] "Generic (PLEG): container finished" podID="5b4cab51-c4b3-44a5-a322-80f8d06b4dee" containerID="6c64377451f2b9f0b3f0c433f8561baddc2580686b76aab08776a7d78780888d" exitCode=0
Sep 29 12:39:46 crc kubenswrapper[4991]: I0929 12:39:46.034904 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdwp9" event={"ID":"5b4cab51-c4b3-44a5-a322-80f8d06b4dee","Type":"ContainerDied","Data":"6c64377451f2b9f0b3f0c433f8561baddc2580686b76aab08776a7d78780888d"}
Sep 29 12:39:46 crc kubenswrapper[4991]: I0929 12:39:46.756565 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pg6w5" podUID="f73d187b-acdb-42b6-bff8-f2a9d84e23dd" containerName="registry-server" probeResult="failure" output=<
Sep 29 12:39:46 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s
Sep 29 12:39:46 crc kubenswrapper[4991]: >
Sep 29 12:39:47 crc kubenswrapper[4991]: I0929 12:39:47.046708 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdwp9" event={"ID":"5b4cab51-c4b3-44a5-a322-80f8d06b4dee","Type":"ContainerStarted","Data":"fcec4356da51a98b5bb0ed8e9e4ded0cb8340f6c2aef8121357e4c3394d6a2e6"}
Sep 29 12:39:47 crc kubenswrapper[4991]: I0929 12:39:47.066454 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qdwp9" podStartSLOduration=2.595380214 podStartE2EDuration="5.066435305s" podCreationTimestamp="2025-09-29 12:39:42 +0000 UTC" firstStartedPulling="2025-09-29 12:39:44.013362694 +0000 UTC m=+10919.869290722" lastFinishedPulling="2025-09-29 12:39:46.484417785 +0000 UTC m=+10922.340345813" observedRunningTime="2025-09-29 12:39:47.062160444 +0000 UTC m=+10922.918088492" watchObservedRunningTime="2025-09-29 12:39:47.066435305 +0000 UTC m=+10922.922363333"
Sep 29 12:39:49 crc kubenswrapper[4991]: I0929 12:39:49.905894 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-864bb6dfb5-j2v2f_0c4e65d5-1c12-4fdc-942b-7408fe9f1f74/nmstate-console-plugin/0.log"
Sep 29 12:39:50 crc kubenswrapper[4991]: I0929 12:39:50.094011 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-27kdc_1084227f-9c42-4e89-aa18-bd2b4fcd9c93/nmstate-handler/0.log"
Sep 29 12:39:50 crc kubenswrapper[4991]: I0929 12:39:50.155105 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58fcddf996-f4kf5_5128d12b-b704-4b3b-b9db-06abfa2dc3cc/kube-rbac-proxy/0.log"
Sep 29 12:39:50 crc kubenswrapper[4991]: I0929 12:39:50.195990 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58fcddf996-f4kf5_5128d12b-b704-4b3b-b9db-06abfa2dc3cc/nmstate-metrics/0.log"
Sep 29 12:39:50 crc kubenswrapper[4991]: I0929 12:39:50.415716 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5d6f6cfd66-xmqmk_7f50af83-028a-45af-9b59-31b333747526/nmstate-operator/0.log"
Sep 29 12:39:50 crc kubenswrapper[4991]: I0929 12:39:50.513437 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6d689559c5-r8qrg_5956fec2-ffc3-4fba-8ba0-bfea0bc31abd/nmstate-webhook/0.log"
Sep 29 12:39:52 crc kubenswrapper[4991]: I0929 12:39:52.660146 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qdwp9"
Sep 29 12:39:52 crc kubenswrapper[4991]: I0929 12:39:52.660707 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qdwp9"
Sep 29 12:39:52 crc kubenswrapper[4991]: I0929 12:39:52.714836 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qdwp9"
Sep 29 12:39:53 crc kubenswrapper[4991]: I0929 12:39:53.184907 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qdwp9"
Sep 29 12:39:53 crc kubenswrapper[4991]: I0929 12:39:53.244202 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdwp9"]
Sep 29 12:39:55 crc kubenswrapper[4991]: I0929 12:39:55.137503 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qdwp9" podUID="5b4cab51-c4b3-44a5-a322-80f8d06b4dee" containerName="registry-server" containerID="cri-o://fcec4356da51a98b5bb0ed8e9e4ded0cb8340f6c2aef8121357e4c3394d6a2e6" gracePeriod=2
Sep 29 12:39:55 crc kubenswrapper[4991]: E0929 12:39:55.396798 4991 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b4cab51_c4b3_44a5_a322_80f8d06b4dee.slice/crio-fcec4356da51a98b5bb0ed8e9e4ded0cb8340f6c2aef8121357e4c3394d6a2e6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b4cab51_c4b3_44a5_a322_80f8d06b4dee.slice/crio-conmon-fcec4356da51a98b5bb0ed8e9e4ded0cb8340f6c2aef8121357e4c3394d6a2e6.scope\": RecentStats: unable to find data in memory cache]"
Sep 29 12:39:55 crc kubenswrapper[4991]: I0929 12:39:55.719239 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdwp9"
Sep 29 12:39:55 crc kubenswrapper[4991]: I0929 12:39:55.773593 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4cab51-c4b3-44a5-a322-80f8d06b4dee-catalog-content\") pod \"5b4cab51-c4b3-44a5-a322-80f8d06b4dee\" (UID: \"5b4cab51-c4b3-44a5-a322-80f8d06b4dee\") "
Sep 29 12:39:55 crc kubenswrapper[4991]: I0929 12:39:55.773830 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4cab51-c4b3-44a5-a322-80f8d06b4dee-utilities\") pod \"5b4cab51-c4b3-44a5-a322-80f8d06b4dee\" (UID: \"5b4cab51-c4b3-44a5-a322-80f8d06b4dee\") "
Sep 29 12:39:55 crc kubenswrapper[4991]: I0929 12:39:55.773867 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsprc\" (UniqueName: \"kubernetes.io/projected/5b4cab51-c4b3-44a5-a322-80f8d06b4dee-kube-api-access-gsprc\") pod \"5b4cab51-c4b3-44a5-a322-80f8d06b4dee\" (UID: \"5b4cab51-c4b3-44a5-a322-80f8d06b4dee\") "
Sep 29 12:39:55 crc kubenswrapper[4991]: I0929 12:39:55.774775 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b4cab51-c4b3-44a5-a322-80f8d06b4dee-utilities" (OuterVolumeSpecName: "utilities") pod "5b4cab51-c4b3-44a5-a322-80f8d06b4dee" (UID: "5b4cab51-c4b3-44a5-a322-80f8d06b4dee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 12:39:55 crc kubenswrapper[4991]: I0929 12:39:55.786862 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b4cab51-c4b3-44a5-a322-80f8d06b4dee-kube-api-access-gsprc" (OuterVolumeSpecName: "kube-api-access-gsprc") pod "5b4cab51-c4b3-44a5-a322-80f8d06b4dee" (UID: "5b4cab51-c4b3-44a5-a322-80f8d06b4dee"). InnerVolumeSpecName "kube-api-access-gsprc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 12:39:55 crc kubenswrapper[4991]: I0929 12:39:55.790890 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b4cab51-c4b3-44a5-a322-80f8d06b4dee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b4cab51-c4b3-44a5-a322-80f8d06b4dee" (UID: "5b4cab51-c4b3-44a5-a322-80f8d06b4dee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 12:39:55 crc kubenswrapper[4991]: I0929 12:39:55.887437 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4cab51-c4b3-44a5-a322-80f8d06b4dee-utilities\") on node \"crc\" DevicePath \"\""
Sep 29 12:39:55 crc kubenswrapper[4991]: I0929 12:39:55.887494 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsprc\" (UniqueName: \"kubernetes.io/projected/5b4cab51-c4b3-44a5-a322-80f8d06b4dee-kube-api-access-gsprc\") on node \"crc\" DevicePath \"\""
Sep 29 12:39:55 crc kubenswrapper[4991]: I0929 12:39:55.887509 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4cab51-c4b3-44a5-a322-80f8d06b4dee-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 29 12:39:56 crc kubenswrapper[4991]: I0929 12:39:56.160187 4991 generic.go:334] "Generic (PLEG): container finished" podID="5b4cab51-c4b3-44a5-a322-80f8d06b4dee" containerID="fcec4356da51a98b5bb0ed8e9e4ded0cb8340f6c2aef8121357e4c3394d6a2e6" exitCode=0
Sep 29 12:39:56 crc kubenswrapper[4991]: I0929 12:39:56.160239 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdwp9" event={"ID":"5b4cab51-c4b3-44a5-a322-80f8d06b4dee","Type":"ContainerDied","Data":"fcec4356da51a98b5bb0ed8e9e4ded0cb8340f6c2aef8121357e4c3394d6a2e6"}
Sep 29 12:39:56 crc kubenswrapper[4991]: I0929 12:39:56.160296 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdwp9" event={"ID":"5b4cab51-c4b3-44a5-a322-80f8d06b4dee","Type":"ContainerDied","Data":"875ba90bab258c6dcb9fd946fee19f0796f8bbbb004d3ec5b499b137d84dd9db"}
Sep 29 12:39:56 crc kubenswrapper[4991]: I0929 12:39:56.160318 4991 scope.go:117] "RemoveContainer" containerID="fcec4356da51a98b5bb0ed8e9e4ded0cb8340f6c2aef8121357e4c3394d6a2e6"
Sep 29 12:39:56 crc kubenswrapper[4991]: I0929 12:39:56.160267 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdwp9"
Sep 29 12:39:56 crc kubenswrapper[4991]: I0929 12:39:56.200372 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdwp9"]
Sep 29 12:39:56 crc kubenswrapper[4991]: I0929 12:39:56.202318 4991 scope.go:117] "RemoveContainer" containerID="6c64377451f2b9f0b3f0c433f8561baddc2580686b76aab08776a7d78780888d"
Sep 29 12:39:56 crc kubenswrapper[4991]: I0929 12:39:56.213063 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdwp9"]
Sep 29 12:39:56 crc kubenswrapper[4991]: I0929 12:39:56.227354 4991 scope.go:117] "RemoveContainer" containerID="deda7ae0200312bea955c75700ea9d37c0565e9e743521b9ce8854ab0c8a1746"
Sep 29 12:39:56 crc kubenswrapper[4991]: I0929 12:39:56.291223 4991 scope.go:117] "RemoveContainer" containerID="fcec4356da51a98b5bb0ed8e9e4ded0cb8340f6c2aef8121357e4c3394d6a2e6"
Sep 29 12:39:56 crc kubenswrapper[4991]: E0929 12:39:56.292253 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcec4356da51a98b5bb0ed8e9e4ded0cb8340f6c2aef8121357e4c3394d6a2e6\": container with ID starting with fcec4356da51a98b5bb0ed8e9e4ded0cb8340f6c2aef8121357e4c3394d6a2e6 not found: ID does not exist" containerID="fcec4356da51a98b5bb0ed8e9e4ded0cb8340f6c2aef8121357e4c3394d6a2e6"
Sep 29 12:39:56 crc kubenswrapper[4991]: I0929 12:39:56.292291 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcec4356da51a98b5bb0ed8e9e4ded0cb8340f6c2aef8121357e4c3394d6a2e6"} err="failed to get container status \"fcec4356da51a98b5bb0ed8e9e4ded0cb8340f6c2aef8121357e4c3394d6a2e6\": rpc error: code = NotFound desc = could not find container \"fcec4356da51a98b5bb0ed8e9e4ded0cb8340f6c2aef8121357e4c3394d6a2e6\": container with ID starting with fcec4356da51a98b5bb0ed8e9e4ded0cb8340f6c2aef8121357e4c3394d6a2e6 not found: ID does not exist"
Sep 29 12:39:56 crc kubenswrapper[4991]: I0929 12:39:56.292318 4991 scope.go:117] "RemoveContainer" containerID="6c64377451f2b9f0b3f0c433f8561baddc2580686b76aab08776a7d78780888d"
Sep 29 12:39:56 crc kubenswrapper[4991]: E0929 12:39:56.292674 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c64377451f2b9f0b3f0c433f8561baddc2580686b76aab08776a7d78780888d\": container with ID starting with 6c64377451f2b9f0b3f0c433f8561baddc2580686b76aab08776a7d78780888d not found: ID does not exist" containerID="6c64377451f2b9f0b3f0c433f8561baddc2580686b76aab08776a7d78780888d"
Sep 29 12:39:56 crc kubenswrapper[4991]: I0929 12:39:56.292699 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c64377451f2b9f0b3f0c433f8561baddc2580686b76aab08776a7d78780888d"} err="failed to get container status \"6c64377451f2b9f0b3f0c433f8561baddc2580686b76aab08776a7d78780888d\": rpc error: code = NotFound desc = could not find container \"6c64377451f2b9f0b3f0c433f8561baddc2580686b76aab08776a7d78780888d\": container with ID starting with 6c64377451f2b9f0b3f0c433f8561baddc2580686b76aab08776a7d78780888d not found: ID does not exist"
Sep 29 12:39:56 crc kubenswrapper[4991]: I0929 12:39:56.292719 4991 scope.go:117] "RemoveContainer" containerID="deda7ae0200312bea955c75700ea9d37c0565e9e743521b9ce8854ab0c8a1746"
Sep 29 12:39:56 crc kubenswrapper[4991]: E0929 12:39:56.292942 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deda7ae0200312bea955c75700ea9d37c0565e9e743521b9ce8854ab0c8a1746\": container with ID starting with deda7ae0200312bea955c75700ea9d37c0565e9e743521b9ce8854ab0c8a1746 not found: ID does not exist" containerID="deda7ae0200312bea955c75700ea9d37c0565e9e743521b9ce8854ab0c8a1746"
Sep 29 12:39:56 crc kubenswrapper[4991]: I0929 12:39:56.292986 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deda7ae0200312bea955c75700ea9d37c0565e9e743521b9ce8854ab0c8a1746"} err="failed to get container status \"deda7ae0200312bea955c75700ea9d37c0565e9e743521b9ce8854ab0c8a1746\": rpc error: code = NotFound desc = could not find container \"deda7ae0200312bea955c75700ea9d37c0565e9e743521b9ce8854ab0c8a1746\": container with ID starting with deda7ae0200312bea955c75700ea9d37c0565e9e743521b9ce8854ab0c8a1746 not found: ID does not exist"
Sep 29 12:39:56 crc kubenswrapper[4991]: I0929 12:39:56.786731 4991 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pg6w5" podUID="f73d187b-acdb-42b6-bff8-f2a9d84e23dd" containerName="registry-server" probeResult="failure" output=<
Sep 29 12:39:56 crc kubenswrapper[4991]: timeout: failed to connect service ":50051" within 1s
Sep 29 12:39:56 crc kubenswrapper[4991]: >
Sep 29 12:39:56 crc kubenswrapper[4991]: I0929 12:39:56.939902 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b4cab51-c4b3-44a5-a322-80f8d06b4dee" path="/var/lib/kubelet/pods/5b4cab51-c4b3-44a5-a322-80f8d06b4dee/volumes"
Sep 29 12:40:03 crc kubenswrapper[4991]: I0929 12:40:03.281116 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-54cf9db6d-sdkfj_d8dd3043-cf98-4a8f-b175-ff3b2ad25381/kube-rbac-proxy/0.log"
Sep 29 12:40:03 crc kubenswrapper[4991]: I0929 12:40:03.391705 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-54cf9db6d-sdkfj_d8dd3043-cf98-4a8f-b175-ff3b2ad25381/manager/0.log"
Sep 29 12:40:05 crc kubenswrapper[4991]: I0929 12:40:05.765990 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pg6w5"
Sep 29 12:40:05 crc kubenswrapper[4991]: I0929 12:40:05.824704 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pg6w5"
Sep 29 12:40:06 crc kubenswrapper[4991]: I0929 12:40:06.013847 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pg6w5"]
Sep 29 12:40:07 crc kubenswrapper[4991]: I0929 12:40:07.303739 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pg6w5" podUID="f73d187b-acdb-42b6-bff8-f2a9d84e23dd" containerName="registry-server" containerID="cri-o://cadea8f17bd78cc3d1462a73d647f36690c1b149bb3d7826fef4f45d432a6be8" gracePeriod=2
Sep 29 12:40:07 crc kubenswrapper[4991]: I0929 12:40:07.942075 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pg6w5"
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.006754 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73d187b-acdb-42b6-bff8-f2a9d84e23dd-catalog-content\") pod \"f73d187b-acdb-42b6-bff8-f2a9d84e23dd\" (UID: \"f73d187b-acdb-42b6-bff8-f2a9d84e23dd\") "
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.007156 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73d187b-acdb-42b6-bff8-f2a9d84e23dd-utilities\") pod \"f73d187b-acdb-42b6-bff8-f2a9d84e23dd\" (UID: \"f73d187b-acdb-42b6-bff8-f2a9d84e23dd\") "
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.007500 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stc7l\" (UniqueName: \"kubernetes.io/projected/f73d187b-acdb-42b6-bff8-f2a9d84e23dd-kube-api-access-stc7l\") pod \"f73d187b-acdb-42b6-bff8-f2a9d84e23dd\" (UID: \"f73d187b-acdb-42b6-bff8-f2a9d84e23dd\") "
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.009324 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f73d187b-acdb-42b6-bff8-f2a9d84e23dd-utilities" (OuterVolumeSpecName: "utilities") pod "f73d187b-acdb-42b6-bff8-f2a9d84e23dd" (UID: "f73d187b-acdb-42b6-bff8-f2a9d84e23dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.035036 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f73d187b-acdb-42b6-bff8-f2a9d84e23dd-kube-api-access-stc7l" (OuterVolumeSpecName: "kube-api-access-stc7l") pod "f73d187b-acdb-42b6-bff8-f2a9d84e23dd" (UID: "f73d187b-acdb-42b6-bff8-f2a9d84e23dd"). InnerVolumeSpecName "kube-api-access-stc7l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.102047 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f73d187b-acdb-42b6-bff8-f2a9d84e23dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f73d187b-acdb-42b6-bff8-f2a9d84e23dd" (UID: "f73d187b-acdb-42b6-bff8-f2a9d84e23dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.110504 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73d187b-acdb-42b6-bff8-f2a9d84e23dd-utilities\") on node \"crc\" DevicePath \"\""
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.110538 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stc7l\" (UniqueName: \"kubernetes.io/projected/f73d187b-acdb-42b6-bff8-f2a9d84e23dd-kube-api-access-stc7l\") on node \"crc\" DevicePath \"\""
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.110550 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73d187b-acdb-42b6-bff8-f2a9d84e23dd-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.317197 4991 generic.go:334] "Generic (PLEG): container finished" podID="f73d187b-acdb-42b6-bff8-f2a9d84e23dd" containerID="cadea8f17bd78cc3d1462a73d647f36690c1b149bb3d7826fef4f45d432a6be8" exitCode=0
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.317294 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg6w5" event={"ID":"f73d187b-acdb-42b6-bff8-f2a9d84e23dd","Type":"ContainerDied","Data":"cadea8f17bd78cc3d1462a73d647f36690c1b149bb3d7826fef4f45d432a6be8"}
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.317526 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg6w5" event={"ID":"f73d187b-acdb-42b6-bff8-f2a9d84e23dd","Type":"ContainerDied","Data":"9b491d256e81d9319788f286abeae8acdf0b2ccf410275a07512e6d752a50f17"}
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.317362 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pg6w5"
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.317551 4991 scope.go:117] "RemoveContainer" containerID="cadea8f17bd78cc3d1462a73d647f36690c1b149bb3d7826fef4f45d432a6be8"
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.355369 4991 scope.go:117] "RemoveContainer" containerID="474a2834c63001be5d0ca8df346e468433e4db8610874cb6c3d36318760fee4e"
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.357178 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pg6w5"]
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.368924 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pg6w5"]
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.388693 4991 scope.go:117] "RemoveContainer" containerID="494fcfa82d90421aa534a56a02a768d6310dd6f0601c5577b1580e7340065102"
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.459917 4991 scope.go:117] "RemoveContainer" containerID="cadea8f17bd78cc3d1462a73d647f36690c1b149bb3d7826fef4f45d432a6be8"
Sep 29 12:40:08 crc kubenswrapper[4991]: E0929 12:40:08.460512 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cadea8f17bd78cc3d1462a73d647f36690c1b149bb3d7826fef4f45d432a6be8\": container with ID starting with cadea8f17bd78cc3d1462a73d647f36690c1b149bb3d7826fef4f45d432a6be8 not found: ID does not exist" containerID="cadea8f17bd78cc3d1462a73d647f36690c1b149bb3d7826fef4f45d432a6be8"
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.460579 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cadea8f17bd78cc3d1462a73d647f36690c1b149bb3d7826fef4f45d432a6be8"} err="failed to get container status \"cadea8f17bd78cc3d1462a73d647f36690c1b149bb3d7826fef4f45d432a6be8\": rpc error: code = NotFound desc = could not find container \"cadea8f17bd78cc3d1462a73d647f36690c1b149bb3d7826fef4f45d432a6be8\": container with ID starting with cadea8f17bd78cc3d1462a73d647f36690c1b149bb3d7826fef4f45d432a6be8 not found: ID does not exist"
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.460618 4991 scope.go:117] "RemoveContainer" containerID="474a2834c63001be5d0ca8df346e468433e4db8610874cb6c3d36318760fee4e"
Sep 29 12:40:08 crc kubenswrapper[4991]: E0929 12:40:08.461027 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"474a2834c63001be5d0ca8df346e468433e4db8610874cb6c3d36318760fee4e\": container with ID starting with 474a2834c63001be5d0ca8df346e468433e4db8610874cb6c3d36318760fee4e not found: ID does not exist" containerID="474a2834c63001be5d0ca8df346e468433e4db8610874cb6c3d36318760fee4e"
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.461080 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"474a2834c63001be5d0ca8df346e468433e4db8610874cb6c3d36318760fee4e"} err="failed to get container status \"474a2834c63001be5d0ca8df346e468433e4db8610874cb6c3d36318760fee4e\": rpc error: code = NotFound desc = could not find container \"474a2834c63001be5d0ca8df346e468433e4db8610874cb6c3d36318760fee4e\": container with ID starting with 474a2834c63001be5d0ca8df346e468433e4db8610874cb6c3d36318760fee4e not found: ID does not exist"
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.461116 4991 scope.go:117] "RemoveContainer" containerID="494fcfa82d90421aa534a56a02a768d6310dd6f0601c5577b1580e7340065102"
Sep 29 12:40:08 crc kubenswrapper[4991]: E0929 12:40:08.461503 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"494fcfa82d90421aa534a56a02a768d6310dd6f0601c5577b1580e7340065102\": container with ID starting with 494fcfa82d90421aa534a56a02a768d6310dd6f0601c5577b1580e7340065102 not found: ID does not exist" containerID="494fcfa82d90421aa534a56a02a768d6310dd6f0601c5577b1580e7340065102"
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.461584 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"494fcfa82d90421aa534a56a02a768d6310dd6f0601c5577b1580e7340065102"} err="failed to get container status \"494fcfa82d90421aa534a56a02a768d6310dd6f0601c5577b1580e7340065102\": rpc error: code = NotFound desc = could not find container \"494fcfa82d90421aa534a56a02a768d6310dd6f0601c5577b1580e7340065102\": container with ID starting with 494fcfa82d90421aa534a56a02a768d6310dd6f0601c5577b1580e7340065102 not found: ID does not exist"
Sep 29 12:40:08 crc kubenswrapper[4991]: I0929 12:40:08.939781 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f73d187b-acdb-42b6-bff8-f2a9d84e23dd" path="/var/lib/kubelet/pods/f73d187b-acdb-42b6-bff8-f2a9d84e23dd/volumes"
Sep 29 12:40:20 crc kubenswrapper[4991]: I0929 12:40:20.911169 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-fcc886d58-s6dct_53f2a78a-64f1-4029-9994-dbfef9f66476/cluster-logging-operator/0.log"
Sep 29 12:40:21 crc kubenswrapper[4991]: I0929 12:40:21.363289 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-n9pvm_ff5dec81-79d1-4e4d-8fe3-77362d3c2e73/collector/0.log"
Sep 29 12:40:21 crc kubenswrapper[4991]: I0929 12:40:21.514610 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_cdccd5ff-76dc-4bd3-97a7-5855c701d49e/loki-compactor/0.log"
Sep 29 12:40:21 crc kubenswrapper[4991]: I0929 12:40:21.646650 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-67c9b4c785-f87nt_c7166108-60e3-494c-8800-aab366098632/loki-distributor/0.log"
Sep 29 12:40:21 crc kubenswrapper[4991]: I0929 12:40:21.686178 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-78756f9d5f-2vl5n_1c21cca2-5bc1-4174-a45c-f928fa2c2588/gateway/0.log"
Sep 29 12:40:21 crc kubenswrapper[4991]: I0929 12:40:21.808837 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-78756f9d5f-2vl5n_1c21cca2-5bc1-4174-a45c-f928fa2c2588/opa/0.log"
Sep 29 12:40:21 crc kubenswrapper[4991]: I0929 12:40:21.949993 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-78756f9d5f-b9nwx_6d85d2ab-609c-4e04-a608-b247d5d61ddc/gateway/0.log"
Sep 29 12:40:21 crc kubenswrapper[4991]: I0929 12:40:21.971771 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-78756f9d5f-b9nwx_6d85d2ab-609c-4e04-a608-b247d5d61ddc/opa/0.log"
Sep 29 12:40:22 crc kubenswrapper[4991]: I0929 12:40:22.130857 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_4d978c8f-2c3b-4824-8c63-8f5eb56c92f8/loki-index-gateway/0.log"
Sep 29 12:40:22 crc kubenswrapper[4991]: I0929 12:40:22.325448 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_6c0b1bf5-f8e1-49e8-bc70-75fbcf826bc8/loki-ingester/0.log"
Sep 29 12:40:22 crc kubenswrapper[4991]: I0929 12:40:22.408381 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-7454676c57-qpzss_16402e1f-4ee8-4845-964b-e7d8d153cd1c/loki-querier/0.log"
Sep 29 12:40:22 crc kubenswrapper[4991]: I0929 12:40:22.493741 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6b467cdd84-2pdst_122b9557-bde1-41fd-aaed-f9120e9b00c2/loki-query-frontend/0.log"
Sep 29 12:40:36 crc kubenswrapper[4991]: I0929 12:40:36.671020 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5d688f5ffc-dhq9n_595fa3a7-5bff-454b-85d1-458f67728b2c/kube-rbac-proxy/0.log"
Sep 29 12:40:36 crc kubenswrapper[4991]: I0929 12:40:36.898899 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qb2cb_01dcafc4-bfc8-44ea-af23-a5eedf41c331/cp-frr-files/0.log"
Sep 29 12:40:37 crc kubenswrapper[4991]: I0929 12:40:37.069444 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5d688f5ffc-dhq9n_595fa3a7-5bff-454b-85d1-458f67728b2c/controller/0.log"
Sep 29 12:40:37 crc kubenswrapper[4991]: I0929 12:40:37.074115 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qb2cb_01dcafc4-bfc8-44ea-af23-a5eedf41c331/cp-frr-files/0.log"
Sep 29 12:40:37 crc kubenswrapper[4991]: I0929 12:40:37.126425 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qb2cb_01dcafc4-bfc8-44ea-af23-a5eedf41c331/cp-reloader/0.log"
Sep 29 12:40:37 crc kubenswrapper[4991]: I0929 12:40:37.146356 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qb2cb_01dcafc4-bfc8-44ea-af23-a5eedf41c331/cp-metrics/0.log"
Sep 29 12:40:37 crc kubenswrapper[4991]: I0929 12:40:37.263973 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qb2cb_01dcafc4-bfc8-44ea-af23-a5eedf41c331/cp-reloader/0.log"
Sep 29 12:40:37 crc kubenswrapper[4991]: I0929 12:40:37.490337 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qb2cb_01dcafc4-bfc8-44ea-af23-a5eedf41c331/cp-metrics/0.log"
Sep 29 12:40:37 crc kubenswrapper[4991]: I0929 12:40:37.499048 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qb2cb_01dcafc4-bfc8-44ea-af23-a5eedf41c331/cp-metrics/0.log"
Sep 29 12:40:37 crc kubenswrapper[4991]: I0929 12:40:37.514746 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qb2cb_01dcafc4-bfc8-44ea-af23-a5eedf41c331/cp-reloader/0.log"
Sep 29 12:40:37 crc kubenswrapper[4991]: I0929 12:40:37.543492 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qb2cb_01dcafc4-bfc8-44ea-af23-a5eedf41c331/cp-frr-files/0.log"
Sep 29 12:40:37 crc kubenswrapper[4991]: I0929 12:40:37.947021 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 29 12:40:37 crc kubenswrapper[4991]: I0929 12:40:37.947094 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 29 12:40:38 crc kubenswrapper[4991]: I0929 12:40:38.124837 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qb2cb_01dcafc4-bfc8-44ea-af23-a5eedf41c331/cp-metrics/0.log"
Sep 29 12:40:38 crc kubenswrapper[4991]: I0929 12:40:38.129501 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qb2cb_01dcafc4-bfc8-44ea-af23-a5eedf41c331/cp-frr-files/0.log"
Sep 29 12:40:38 crc kubenswrapper[4991]: I0929 12:40:38.200684 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qb2cb_01dcafc4-bfc8-44ea-af23-a5eedf41c331/cp-reloader/0.log"
Sep 29 12:40:38 crc kubenswrapper[4991]: I0929 12:40:38.250073 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qb2cb_01dcafc4-bfc8-44ea-af23-a5eedf41c331/controller/0.log"
Sep 29 12:40:38 crc kubenswrapper[4991]: I0929 12:40:38.369151 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qb2cb_01dcafc4-bfc8-44ea-af23-a5eedf41c331/frr-metrics/0.log"
Sep 29 12:40:38 crc kubenswrapper[4991]: I0929 12:40:38.498510 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qb2cb_01dcafc4-bfc8-44ea-af23-a5eedf41c331/kube-rbac-proxy-frr/0.log"
Sep 29 12:40:38 crc kubenswrapper[4991]: I0929 12:40:38.498628 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qb2cb_01dcafc4-bfc8-44ea-af23-a5eedf41c331/kube-rbac-proxy/0.log"
Sep 29 12:40:38 crc kubenswrapper[4991]: I0929 12:40:38.621829 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qb2cb_01dcafc4-bfc8-44ea-af23-a5eedf41c331/reloader/0.log"
Sep 29 12:40:38 crc kubenswrapper[4991]: I0929 12:40:38.819434 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-5478bdb765-c82r7_adccb4f9-5bc4-48fb-830a-32c6c889a9a7/frr-k8s-webhook-server/0.log"
Sep 29 12:40:38 crc kubenswrapper[4991]: I0929 12:40:38.971138 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6d7567497f-s6bfc_6618cd07-587d-4f77-b392-080e4f6e5806/manager/0.log"
Sep 29 12:40:39 crc kubenswrapper[4991]: I0929 12:40:39.123737 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-79c95c7875-765hk_e09ee4b2-97d8-4e24-a05d-b4f4bcf9713c/webhook-server/0.log"
Sep 29 12:40:39 crc kubenswrapper[4991]: I0929 12:40:39.364409 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-srk99_c23ddd1f-4ebb-4f74-bb77-b6882ea82681/kube-rbac-proxy/0.log"
Sep 29 12:40:40 crc kubenswrapper[4991]: I0929 12:40:40.167479 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-srk99_c23ddd1f-4ebb-4f74-bb77-b6882ea82681/speaker/0.log"
Sep 29 12:40:40 crc kubenswrapper[4991]: I0929 12:40:40.966343 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qb2cb_01dcafc4-bfc8-44ea-af23-a5eedf41c331/frr/0.log"
Sep 29 12:40:52 crc kubenswrapper[4991]: I0929 12:40:52.990512 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv_8840b439-f65d-4c5f-8022-5261a9a9eaeb/util/0.log"
Sep 29 12:40:53 crc kubenswrapper[4991]: I0929 12:40:53.213022 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv_8840b439-f65d-4c5f-8022-5261a9a9eaeb/pull/0.log"
Sep 29 12:40:53 crc kubenswrapper[4991]: I0929 12:40:53.239194 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv_8840b439-f65d-4c5f-8022-5261a9a9eaeb/pull/0.log"
Sep 29 12:40:53 crc kubenswrapper[4991]: I0929 12:40:53.267287 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv_8840b439-f65d-4c5f-8022-5261a9a9eaeb/util/0.log"
Sep 29 12:40:53 crc kubenswrapper[4991]: I0929 12:40:53.413407 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv_8840b439-f65d-4c5f-8022-5261a9a9eaeb/extract/0.log"
Sep 29 12:40:53 crc kubenswrapper[4991]: I0929 12:40:53.415319 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv_8840b439-f65d-4c5f-8022-5261a9a9eaeb/pull/0.log"
Sep 29 12:40:53 crc kubenswrapper[4991]: I0929 12:40:53.427409 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc6l7jv_8840b439-f65d-4c5f-8022-5261a9a9eaeb/util/0.log"
Sep 29 12:40:53 crc kubenswrapper[4991]: I0929 12:40:53.587270 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6_f07ef33b-dcdb-4706-ab42-e881185cea81/util/0.log"
Sep 29 12:40:53 crc kubenswrapper[4991]: I0929 12:40:53.776824 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6_f07ef33b-dcdb-4706-ab42-e881185cea81/pull/0.log"
Sep 29 12:40:53 crc kubenswrapper[4991]: I0929 12:40:53.779411 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6_f07ef33b-dcdb-4706-ab42-e881185cea81/util/0.log"
Sep 29 12:40:53 crc kubenswrapper[4991]: I0929 12:40:53.802648 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6_f07ef33b-dcdb-4706-ab42-e881185cea81/pull/0.log"
Sep 29 12:40:53 crc kubenswrapper[4991]: I0929 12:40:53.983810 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6_f07ef33b-dcdb-4706-ab42-e881185cea81/util/0.log"
Sep 29 12:40:53 crc kubenswrapper[4991]: I0929 12:40:53.987409 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6_f07ef33b-dcdb-4706-ab42-e881185cea81/pull/0.log"
Sep 29 12:40:53 crc kubenswrapper[4991]: I0929 12:40:53.995660 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dwp5m6_f07ef33b-dcdb-4706-ab42-e881185cea81/extract/0.log"
Sep 29 12:40:54 crc kubenswrapper[4991]: I0929 12:40:54.188438 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x_7a1fba17-86ce-4573-8199-011457e2c391/util/0.log"
Sep 29 12:40:54 crc kubenswrapper[4991]: I0929 12:40:54.361591 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x_7a1fba17-86ce-4573-8199-011457e2c391/util/0.log"
Sep 29 12:40:54 crc kubenswrapper[4991]: I0929 12:40:54.362567 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x_7a1fba17-86ce-4573-8199-011457e2c391/pull/0.log"
Sep 29 12:40:54 crc kubenswrapper[4991]: I0929 12:40:54.398366 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x_7a1fba17-86ce-4573-8199-011457e2c391/pull/0.log"
Sep 29 12:40:54 crc kubenswrapper[4991]: I0929 12:40:54.536483 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x_7a1fba17-86ce-4573-8199-011457e2c391/util/0.log"
Sep 29 12:40:54 crc kubenswrapper[4991]: I0929 12:40:54.587862 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x_7a1fba17-86ce-4573-8199-011457e2c391/pull/0.log"
Sep 29 12:40:54 crc kubenswrapper[4991]: I0929 12:40:54.588078 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_c03590272772b1d93899b6ceaa83703cf46dc8f83faf0e965a036060c0v2v6x_7a1fba17-86ce-4573-8199-011457e2c391/extract/0.log"
Sep 29 12:40:54 crc kubenswrapper[4991]: I0929 12:40:54.751390 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cvdkf_3f82d94c-b2cf-4519-a629-90b404272dfd/extract-utilities/0.log"
Sep 29 12:40:54 crc kubenswrapper[4991]: I0929 12:40:54.901697 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cvdkf_3f82d94c-b2cf-4519-a629-90b404272dfd/extract-content/0.log"
Sep 29 12:40:54 crc kubenswrapper[4991]: I0929 12:40:54.937031 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cvdkf_3f82d94c-b2cf-4519-a629-90b404272dfd/extract-utilities/0.log"
Sep 29 12:40:54 crc kubenswrapper[4991]: I0929 12:40:54.941060 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cvdkf_3f82d94c-b2cf-4519-a629-90b404272dfd/extract-content/0.log"
Sep 29 12:40:55 crc kubenswrapper[4991]: I0929 12:40:55.140510 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cvdkf_3f82d94c-b2cf-4519-a629-90b404272dfd/extract-content/0.log"
Sep 29 12:40:55 crc kubenswrapper[4991]: I0929 12:40:55.224931 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cvdkf_3f82d94c-b2cf-4519-a629-90b404272dfd/extract-utilities/0.log"
Sep 29 12:40:55 crc kubenswrapper[4991]: I0929 12:40:55.412768 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p5pz9_ac1afc48-443a-4054-ba92-e87e384d71a0/extract-utilities/0.log"
Sep 29 12:40:55 crc kubenswrapper[4991]: I0929 12:40:55.622007 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p5pz9_ac1afc48-443a-4054-ba92-e87e384d71a0/extract-content/0.log"
Sep 29 12:40:55 crc kubenswrapper[4991]: I0929 12:40:55.638341 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p5pz9_ac1afc48-443a-4054-ba92-e87e384d71a0/extract-utilities/0.log"
Sep 29 12:40:55 crc kubenswrapper[4991]: I0929 12:40:55.757591 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p5pz9_ac1afc48-443a-4054-ba92-e87e384d71a0/extract-content/0.log"
Sep 29 12:40:55 crc kubenswrapper[4991]: I0929 12:40:55.970171 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p5pz9_ac1afc48-443a-4054-ba92-e87e384d71a0/extract-utilities/0.log"
Sep 29 12:40:56 crc kubenswrapper[4991]: I0929 12:40:56.072840 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p5pz9_ac1afc48-443a-4054-ba92-e87e384d71a0/extract-content/0.log"
Sep 29 12:40:56 crc kubenswrapper[4991]: I0929 12:40:56.491767 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm_67372b02-362b-4463-a437-359886d014af/util/0.log"
Sep 29 12:40:56 crc kubenswrapper[4991]: I0929 12:40:56.729655 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm_67372b02-362b-4463-a437-359886d014af/util/0.log"
Sep 29 12:40:56 crc kubenswrapper[4991]: I0929 12:40:56.808397 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm_67372b02-362b-4463-a437-359886d014af/pull/0.log"
Sep 29 12:40:56 crc kubenswrapper[4991]: I0929 12:40:56.840043 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm_67372b02-362b-4463-a437-359886d014af/pull/0.log"
Sep 29 12:40:57 crc kubenswrapper[4991]: I0929 12:40:57.013707 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm_67372b02-362b-4463-a437-359886d014af/util/0.log"
Sep 29 12:40:57 crc kubenswrapper[4991]: I0929 12:40:57.016365 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm_67372b02-362b-4463-a437-359886d014af/pull/0.log"
Sep 29 12:40:57 crc kubenswrapper[4991]: I0929 12:40:57.234102 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cvdkf_3f82d94c-b2cf-4519-a629-90b404272dfd/registry-server/0.log"
Sep 29 12:40:57 crc kubenswrapper[4991]: I0929 12:40:57.819074 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8d7c1038c65d2785a47a2ffcc15b07abd45421e7db92f3c296d966170vvkgm_67372b02-362b-4463-a437-359886d014af/extract/0.log"
Sep 29 12:40:57 crc kubenswrapper[4991]: I0929 12:40:57.909556 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck_8faa5f43-6a26-4213-a873-d3e7c0794773/util/0.log"
Sep 29 12:40:58 crc kubenswrapper[4991]: I0929 12:40:58.129446 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck_8faa5f43-6a26-4213-a873-d3e7c0794773/util/0.log"
Sep 29 12:40:58 crc kubenswrapper[4991]: I0929 12:40:58.161706 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck_8faa5f43-6a26-4213-a873-d3e7c0794773/pull/0.log"
Sep 29 12:40:58 crc kubenswrapper[4991]: I0929 12:40:58.164745 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck_8faa5f43-6a26-4213-a873-d3e7c0794773/pull/0.log"
Sep 29 12:40:58 crc kubenswrapper[4991]: I0929 12:40:58.308254 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p5pz9_ac1afc48-443a-4054-ba92-e87e384d71a0/registry-server/0.log"
Sep 29 12:40:58 crc kubenswrapper[4991]: I0929 12:40:58.393143 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck_8faa5f43-6a26-4213-a873-d3e7c0794773/util/0.log"
Sep 29 12:40:58 crc kubenswrapper[4991]: I0929 12:40:58.411092 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck_8faa5f43-6a26-4213-a873-d3e7c0794773/pull/0.log"
Sep 29 12:40:58 crc kubenswrapper[4991]: I0929 12:40:58.411423 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96t9vck_8faa5f43-6a26-4213-a873-d3e7c0794773/extract/0.log"
Sep 29 12:40:58 crc kubenswrapper[4991]: I0929 12:40:58.506478 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-xg5gc_5cfff707-29b3-4314-acfe-62ce8977d662/marketplace-operator/0.log"
Sep 29 12:40:58 crc kubenswrapper[4991]: I0929 12:40:58.616557 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pdnq7_e1c4ed58-0d22-42b9-b2dd-5fc7718049d3/extract-utilities/0.log"
Sep 29 12:40:58 crc kubenswrapper[4991]: I0929 12:40:58.788210 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pdnq7_e1c4ed58-0d22-42b9-b2dd-5fc7718049d3/extract-utilities/0.log"
Sep 29 12:40:58 crc kubenswrapper[4991]: I0929 12:40:58.791850 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pdnq7_e1c4ed58-0d22-42b9-b2dd-5fc7718049d3/extract-content/0.log"
Sep 29 12:40:58 crc kubenswrapper[4991]: I0929 12:40:58.818611 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pdnq7_e1c4ed58-0d22-42b9-b2dd-5fc7718049d3/extract-content/0.log"
Sep 29 12:40:59 crc kubenswrapper[4991]: I0929 12:40:59.800154 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pdnq7_e1c4ed58-0d22-42b9-b2dd-5fc7718049d3/extract-utilities/0.log"
Sep 29 12:40:59 crc kubenswrapper[4991]: I0929 12:40:59.814295 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pdnq7_e1c4ed58-0d22-42b9-b2dd-5fc7718049d3/extract-content/0.log"
Sep 29 12:40:59 crc kubenswrapper[4991]: I0929 12:40:59.866823 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hx68r_e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6/extract-utilities/0.log"
Sep 29 12:41:00 crc kubenswrapper[4991]: I0929 12:41:00.101510 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hx68r_e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6/extract-utilities/0.log"
Sep 29 12:41:00 crc kubenswrapper[4991]: I0929 12:41:00.162309 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hx68r_e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6/extract-content/0.log"
Sep 29 12:41:00 crc kubenswrapper[4991]: I0929 12:41:00.188496 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hx68r_e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6/extract-content/0.log"
Sep 29 12:41:00 crc kubenswrapper[4991]: I0929 12:41:00.224380 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pdnq7_e1c4ed58-0d22-42b9-b2dd-5fc7718049d3/registry-server/0.log"
Sep 29 12:41:00 crc kubenswrapper[4991]: I0929 12:41:00.374532 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hx68r_e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6/extract-utilities/0.log"
Sep 29 12:41:00 crc kubenswrapper[4991]: I0929 12:41:00.420270 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hx68r_e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6/extract-content/0.log"
Sep 29 12:41:00 crc kubenswrapper[4991]: I0929 12:41:00.644959 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hx68r_e2d9b18c-2bf1-4e66-88b1-b0e82c5650c6/registry-server/0.log"
Sep 29 12:41:07 crc kubenswrapper[4991]: I0929 12:41:07.947099 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 29 12:41:07 crc kubenswrapper[4991]: I0929 12:41:07.947588 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 29 12:41:12 crc kubenswrapper[4991]: I0929 12:41:12.513813 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-7c8cf85677-bscm9_ed45932c-01e6-4a92-be25-ab5f613eb208/prometheus-operator/0.log"
Sep 29 12:41:12 crc kubenswrapper[4991]: I0929 12:41:12.644376 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-66bb669b68-8kdtt_38bf7687-8e95-4b5c-b346-381d71189652/prometheus-operator-admission-webhook/0.log"
Sep 29 12:41:12 crc kubenswrapper[4991]: I0929 12:41:12.743802 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-66bb669b68-hcvsz_f46f29d1-1ea0-498a-9724-466a64601373/prometheus-operator-admission-webhook/0.log"
Sep 29 12:41:12 crc kubenswrapper[4991]: I0929 12:41:12.928836 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-cc5f78dfc-l6njq_58e0e494-e32c-4e1a-87b6-0d0dfafed095/operator/0.log"
Sep 29 12:41:12 crc kubenswrapper[4991]: I0929 12:41:12.963199 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-6584dc9448-s8m2k_3f0bf97a-8ce8-4c0c-b980-34b76047840f/observability-ui-dashboards/0.log"
Sep 29 12:41:13 crc kubenswrapper[4991]: I0929 12:41:13.148927 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-54bc95c9fb-bpxmw_801d6262-1556-4622-bc1e-16dc54de57ca/perses-operator/0.log"
Sep 29 12:41:25 crc kubenswrapper[4991]: I0929 12:41:25.377531 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-54cf9db6d-sdkfj_d8dd3043-cf98-4a8f-b175-ff3b2ad25381/kube-rbac-proxy/0.log"
Sep 29 12:41:25 crc kubenswrapper[4991]: I0929 12:41:25.389245 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-54cf9db6d-sdkfj_d8dd3043-cf98-4a8f-b175-ff3b2ad25381/manager/0.log"
Sep 29 12:41:37 crc kubenswrapper[4991]: I0929 12:41:37.946819 4991 patch_prober.go:28] interesting pod/machine-config-daemon-sgbqk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 29 12:41:37 crc kubenswrapper[4991]: I0929 12:41:37.947730 4991 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 29 12:41:37 crc kubenswrapper[4991]: I0929 12:41:37.947827 4991 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk"
Sep 29 12:41:37 crc kubenswrapper[4991]: I0929 12:41:37.950363 4991 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450"} pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Sep 29 12:41:37 crc kubenswrapper[4991]: I0929 12:41:37.950433 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerName="machine-config-daemon" containerID="cri-o://fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450" gracePeriod=600
Sep 29 12:41:38 crc kubenswrapper[4991]: E0929 12:41:38.084977 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 12:41:38 crc kubenswrapper[4991]: I0929 12:41:38.313101 4991 generic.go:334] "Generic (PLEG): container finished" podID="88ab660a-6f01-4538-946f-38cdadd0b64d" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450" exitCode=0
Sep 29 12:41:38 crc kubenswrapper[4991]: I0929 12:41:38.313136 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" event={"ID":"88ab660a-6f01-4538-946f-38cdadd0b64d","Type":"ContainerDied","Data":"fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450"}
Sep 29 12:41:38 crc kubenswrapper[4991]: I0929 12:41:38.313193 4991 scope.go:117] "RemoveContainer" containerID="059732d7d1de85caf24435f3569260503e6e0038cc69bd6a8ac8da046b88d1ef"
Sep 29 12:41:38 crc kubenswrapper[4991]: I0929 12:41:38.313961 4991 scope.go:117] "RemoveContainer" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450"
Sep 29 12:41:38 crc kubenswrapper[4991]: E0929 12:41:38.314256 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 12:41:48 crc kubenswrapper[4991]: E0929 12:41:48.794234 4991 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.184:48222->38.129.56.184:37001: write tcp 38.129.56.184:48222->38.129.56.184:37001: write: broken pipe
Sep 29 12:41:49 crc kubenswrapper[4991]: I0929 12:41:49.926810 4991 scope.go:117] "RemoveContainer" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450"
Sep 29 12:41:49 crc kubenswrapper[4991]: E0929 12:41:49.927424 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 12:41:54 crc kubenswrapper[4991]: E0929 12:41:54.969714 4991 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.184:48380->38.129.56.184:37001: write tcp 38.129.56.184:48380->38.129.56.184:37001: write: broken pipe
Sep 29 12:42:03 crc kubenswrapper[4991]: I0929 12:42:03.926708 4991 scope.go:117] "RemoveContainer" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450"
Sep 29 12:42:03 crc kubenswrapper[4991]: E0929 12:42:03.927406 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 12:42:17 crc kubenswrapper[4991]: I0929 12:42:17.927355 4991 scope.go:117] "RemoveContainer" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450"
Sep 29 12:42:17 crc kubenswrapper[4991]: E0929 12:42:17.928544 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 12:42:28 crc kubenswrapper[4991]: I0929 12:42:28.823593 4991 scope.go:117] "RemoveContainer" containerID="a1f778170d6821c5fa8908786d64eb7f5edb66af607f74fe2b040d149dc7b5cd"
Sep 29 12:42:29 crc kubenswrapper[4991]: I0929 12:42:29.926387 4991 scope.go:117] "RemoveContainer" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450"
Sep 29 12:42:29 crc kubenswrapper[4991]: E0929 12:42:29.926961 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 12:42:42 crc kubenswrapper[4991]: I0929 12:42:42.926629 4991 scope.go:117] "RemoveContainer" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450"
Sep 29 12:42:42 crc kubenswrapper[4991]: E0929 12:42:42.927595 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"
Sep 29 12:42:50 crc kubenswrapper[4991]: I0929 12:42:50.723851 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q6gdl"]
Sep 29 12:42:50 crc kubenswrapper[4991]: E0929 12:42:50.724806 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4cab51-c4b3-44a5-a322-80f8d06b4dee" containerName="extract-utilities"
Sep 29 12:42:50 crc kubenswrapper[4991]: I0929 12:42:50.724819 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4cab51-c4b3-44a5-a322-80f8d06b4dee" containerName="extract-utilities"
Sep 29 12:42:50 crc kubenswrapper[4991]: E0929 12:42:50.724849 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4cab51-c4b3-44a5-a322-80f8d06b4dee" containerName="registry-server"
Sep 29 12:42:50 crc kubenswrapper[4991]: I0929 12:42:50.724857 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4cab51-c4b3-44a5-a322-80f8d06b4dee" containerName="registry-server"
Sep 29 12:42:50 crc kubenswrapper[4991]: E0929 12:42:50.724873 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4cab51-c4b3-44a5-a322-80f8d06b4dee" containerName="extract-content"
Sep 29 12:42:50 crc kubenswrapper[4991]: I0929 12:42:50.724879 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4cab51-c4b3-44a5-a322-80f8d06b4dee" containerName="extract-content"
Sep 29
12:42:50 crc kubenswrapper[4991]: E0929 12:42:50.724895 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f73d187b-acdb-42b6-bff8-f2a9d84e23dd" containerName="registry-server" Sep 29 12:42:50 crc kubenswrapper[4991]: I0929 12:42:50.724901 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="f73d187b-acdb-42b6-bff8-f2a9d84e23dd" containerName="registry-server" Sep 29 12:42:50 crc kubenswrapper[4991]: E0929 12:42:50.724916 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f73d187b-acdb-42b6-bff8-f2a9d84e23dd" containerName="extract-utilities" Sep 29 12:42:50 crc kubenswrapper[4991]: I0929 12:42:50.724921 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="f73d187b-acdb-42b6-bff8-f2a9d84e23dd" containerName="extract-utilities" Sep 29 12:42:50 crc kubenswrapper[4991]: E0929 12:42:50.724940 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f73d187b-acdb-42b6-bff8-f2a9d84e23dd" containerName="extract-content" Sep 29 12:42:50 crc kubenswrapper[4991]: I0929 12:42:50.724993 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="f73d187b-acdb-42b6-bff8-f2a9d84e23dd" containerName="extract-content" Sep 29 12:42:50 crc kubenswrapper[4991]: I0929 12:42:50.725204 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b4cab51-c4b3-44a5-a322-80f8d06b4dee" containerName="registry-server" Sep 29 12:42:50 crc kubenswrapper[4991]: I0929 12:42:50.725229 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="f73d187b-acdb-42b6-bff8-f2a9d84e23dd" containerName="registry-server" Sep 29 12:42:50 crc kubenswrapper[4991]: I0929 12:42:50.726878 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q6gdl" Sep 29 12:42:50 crc kubenswrapper[4991]: I0929 12:42:50.814163 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4zwp\" (UniqueName: \"kubernetes.io/projected/e34045b5-6ef5-4370-b9bb-32829ebf182a-kube-api-access-n4zwp\") pod \"community-operators-q6gdl\" (UID: \"e34045b5-6ef5-4370-b9bb-32829ebf182a\") " pod="openshift-marketplace/community-operators-q6gdl" Sep 29 12:42:50 crc kubenswrapper[4991]: I0929 12:42:50.814243 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e34045b5-6ef5-4370-b9bb-32829ebf182a-catalog-content\") pod \"community-operators-q6gdl\" (UID: \"e34045b5-6ef5-4370-b9bb-32829ebf182a\") " pod="openshift-marketplace/community-operators-q6gdl" Sep 29 12:42:50 crc kubenswrapper[4991]: I0929 12:42:50.814311 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e34045b5-6ef5-4370-b9bb-32829ebf182a-utilities\") pod \"community-operators-q6gdl\" (UID: \"e34045b5-6ef5-4370-b9bb-32829ebf182a\") " pod="openshift-marketplace/community-operators-q6gdl" Sep 29 12:42:50 crc kubenswrapper[4991]: I0929 12:42:50.822486 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q6gdl"] Sep 29 12:42:51 crc kubenswrapper[4991]: I0929 12:42:51.020139 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4zwp\" (UniqueName: \"kubernetes.io/projected/e34045b5-6ef5-4370-b9bb-32829ebf182a-kube-api-access-n4zwp\") pod \"community-operators-q6gdl\" (UID: 
\"e34045b5-6ef5-4370-b9bb-32829ebf182a\") " pod="openshift-marketplace/community-operators-q6gdl" Sep 29 12:42:51 crc kubenswrapper[4991]: I0929 12:42:51.020185 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e34045b5-6ef5-4370-b9bb-32829ebf182a-catalog-content\") pod \"community-operators-q6gdl\" (UID: \"e34045b5-6ef5-4370-b9bb-32829ebf182a\") " pod="openshift-marketplace/community-operators-q6gdl" Sep 29 12:42:51 crc kubenswrapper[4991]: I0929 12:42:51.020239 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e34045b5-6ef5-4370-b9bb-32829ebf182a-utilities\") pod \"community-operators-q6gdl\" (UID: \"e34045b5-6ef5-4370-b9bb-32829ebf182a\") " pod="openshift-marketplace/community-operators-q6gdl" Sep 29 12:42:51 crc kubenswrapper[4991]: I0929 12:42:51.020726 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e34045b5-6ef5-4370-b9bb-32829ebf182a-utilities\") pod \"community-operators-q6gdl\" (UID: \"e34045b5-6ef5-4370-b9bb-32829ebf182a\") " pod="openshift-marketplace/community-operators-q6gdl" Sep 29 12:42:51 crc kubenswrapper[4991]: I0929 12:42:51.021283 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e34045b5-6ef5-4370-b9bb-32829ebf182a-catalog-content\") pod \"community-operators-q6gdl\" (UID: \"e34045b5-6ef5-4370-b9bb-32829ebf182a\") " pod="openshift-marketplace/community-operators-q6gdl" Sep 29 12:42:51 crc kubenswrapper[4991]: I0929 12:42:51.074826 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4zwp\" (UniqueName: \"kubernetes.io/projected/e34045b5-6ef5-4370-b9bb-32829ebf182a-kube-api-access-n4zwp\") pod \"community-operators-q6gdl\" (UID: \"e34045b5-6ef5-4370-b9bb-32829ebf182a\") " pod="openshift-marketplace/community-operators-q6gdl" Sep 29 12:42:51 crc kubenswrapper[4991]: I0929 12:42:51.097904 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q6gdl" Sep 29 12:42:51 crc kubenswrapper[4991]: I0929 12:42:51.970863 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q6gdl"] Sep 29 12:42:52 crc kubenswrapper[4991]: I0929 12:42:52.250576 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6gdl" event={"ID":"e34045b5-6ef5-4370-b9bb-32829ebf182a","Type":"ContainerStarted","Data":"ec36fcf040dc5a56dcdfec2f68876c935012adc952f1c4ba3f858829236caafe"} Sep 29 12:42:53 crc kubenswrapper[4991]: I0929 12:42:53.260877 4991 generic.go:334] "Generic (PLEG): container finished" podID="e34045b5-6ef5-4370-b9bb-32829ebf182a" containerID="d12d36f7f766700aad0ffb2737f822c9cf103eacff6098206d20dce3b9f4ebb5" exitCode=0 Sep 29 12:42:53 crc kubenswrapper[4991]: I0929 12:42:53.261157 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6gdl" event={"ID":"e34045b5-6ef5-4370-b9bb-32829ebf182a","Type":"ContainerDied","Data":"d12d36f7f766700aad0ffb2737f822c9cf103eacff6098206d20dce3b9f4ebb5"} Sep 29 12:42:53 crc kubenswrapper[4991]: I0929 12:42:53.318060 4991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 29 12:42:56 crc kubenswrapper[4991]: I0929 12:42:56.304000 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6gdl" event={"ID":"e34045b5-6ef5-4370-b9bb-32829ebf182a","Type":"ContainerStarted","Data":"1592eddaa107fba57b9186389d283378600d11446253b9688d3ba6b1884cc6d3"} Sep 29 12:42:57 crc kubenswrapper[4991]: I0929 12:42:57.927149 4991 scope.go:117] "RemoveContainer" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450" Sep 29 12:42:57 crc kubenswrapper[4991]: E0929 12:42:57.928174 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:43:01 crc kubenswrapper[4991]: I0929 12:43:01.362226 4991 generic.go:334] "Generic (PLEG): container finished" podID="e34045b5-6ef5-4370-b9bb-32829ebf182a" containerID="1592eddaa107fba57b9186389d283378600d11446253b9688d3ba6b1884cc6d3" exitCode=0 Sep 29 12:43:01 crc kubenswrapper[4991]: I0929 12:43:01.362389 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6gdl" event={"ID":"e34045b5-6ef5-4370-b9bb-32829ebf182a","Type":"ContainerDied","Data":"1592eddaa107fba57b9186389d283378600d11446253b9688d3ba6b1884cc6d3"} Sep 29 12:43:03 crc kubenswrapper[4991]: I0929 12:43:03.392599 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6gdl" event={"ID":"e34045b5-6ef5-4370-b9bb-32829ebf182a","Type":"ContainerStarted","Data":"d7a48873e452f1b2888136bf7c33220b753424543ba1bf077f7ce638e535d535"} Sep 29 12:43:03 crc kubenswrapper[4991]: I0929 12:43:03.419645 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q6gdl" podStartSLOduration=4.1526721349999995 podStartE2EDuration="13.419624387s" podCreationTimestamp="2025-09-29 12:42:50 +0000 UTC" 
firstStartedPulling="2025-09-29 12:42:53.263791091 +0000 UTC m=+11109.119719119" lastFinishedPulling="2025-09-29 12:43:02.530743343 +0000 UTC m=+11118.386671371" observedRunningTime="2025-09-29 12:43:03.413729234 +0000 UTC m=+11119.269657262" watchObservedRunningTime="2025-09-29 12:43:03.419624387 +0000 UTC m=+11119.275552425" Sep 29 12:43:11 crc kubenswrapper[4991]: I0929 12:43:11.098994 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q6gdl" Sep 29 12:43:11 crc kubenswrapper[4991]: I0929 12:43:11.099520 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q6gdl" Sep 29 12:43:11 crc kubenswrapper[4991]: I0929 12:43:11.182978 4991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q6gdl" Sep 29 12:43:11 crc kubenswrapper[4991]: I0929 12:43:11.521418 4991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q6gdl" Sep 29 12:43:11 crc kubenswrapper[4991]: I0929 12:43:11.576740 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q6gdl"] Sep 29 12:43:12 crc kubenswrapper[4991]: I0929 12:43:12.927165 4991 scope.go:117] "RemoveContainer" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450" Sep 29 12:43:12 crc kubenswrapper[4991]: E0929 12:43:12.927816 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:43:13 crc kubenswrapper[4991]: I0929 12:43:13.501157 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q6gdl" podUID="e34045b5-6ef5-4370-b9bb-32829ebf182a" containerName="registry-server" containerID="cri-o://d7a48873e452f1b2888136bf7c33220b753424543ba1bf077f7ce638e535d535" gracePeriod=2 Sep 29 12:43:14 crc kubenswrapper[4991]: I0929 12:43:14.060794 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q6gdl" Sep 29 12:43:14 crc kubenswrapper[4991]: I0929 12:43:14.234972 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4zwp\" (UniqueName: \"kubernetes.io/projected/e34045b5-6ef5-4370-b9bb-32829ebf182a-kube-api-access-n4zwp\") pod \"e34045b5-6ef5-4370-b9bb-32829ebf182a\" (UID: \"e34045b5-6ef5-4370-b9bb-32829ebf182a\") " Sep 29 12:43:14 crc kubenswrapper[4991]: I0929 12:43:14.235131 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e34045b5-6ef5-4370-b9bb-32829ebf182a-utilities\") pod \"e34045b5-6ef5-4370-b9bb-32829ebf182a\" (UID: \"e34045b5-6ef5-4370-b9bb-32829ebf182a\") " Sep 29 12:43:14 crc kubenswrapper[4991]: I0929 12:43:14.235432 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e34045b5-6ef5-4370-b9bb-32829ebf182a-catalog-content\") pod \"e34045b5-6ef5-4370-b9bb-32829ebf182a\" (UID: \"e34045b5-6ef5-4370-b9bb-32829ebf182a\") " Sep 29 12:43:14 crc kubenswrapper[4991]: I0929 12:43:14.236514 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e34045b5-6ef5-4370-b9bb-32829ebf182a-utilities" (OuterVolumeSpecName: "utilities") pod "e34045b5-6ef5-4370-b9bb-32829ebf182a" (UID: "e34045b5-6ef5-4370-b9bb-32829ebf182a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:43:14 crc kubenswrapper[4991]: I0929 12:43:14.246594 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e34045b5-6ef5-4370-b9bb-32829ebf182a-kube-api-access-n4zwp" (OuterVolumeSpecName: "kube-api-access-n4zwp") pod "e34045b5-6ef5-4370-b9bb-32829ebf182a" (UID: "e34045b5-6ef5-4370-b9bb-32829ebf182a"). InnerVolumeSpecName "kube-api-access-n4zwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 12:43:14 crc kubenswrapper[4991]: I0929 12:43:14.301432 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e34045b5-6ef5-4370-b9bb-32829ebf182a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e34045b5-6ef5-4370-b9bb-32829ebf182a" (UID: "e34045b5-6ef5-4370-b9bb-32829ebf182a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:43:14 crc kubenswrapper[4991]: I0929 12:43:14.339533 4991 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e34045b5-6ef5-4370-b9bb-32829ebf182a-utilities\") on node \"crc\" DevicePath \"\"" Sep 29 12:43:14 crc kubenswrapper[4991]: I0929 12:43:14.339569 4991 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e34045b5-6ef5-4370-b9bb-32829ebf182a-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 29 12:43:14 crc kubenswrapper[4991]: I0929 12:43:14.339581 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4zwp\" (UniqueName: \"kubernetes.io/projected/e34045b5-6ef5-4370-b9bb-32829ebf182a-kube-api-access-n4zwp\") on node \"crc\" DevicePath \"\"" Sep 29 12:43:14 crc kubenswrapper[4991]: I0929 12:43:14.522155 4991 generic.go:334] "Generic (PLEG): container finished" podID="e34045b5-6ef5-4370-b9bb-32829ebf182a" containerID="d7a48873e452f1b2888136bf7c33220b753424543ba1bf077f7ce638e535d535" exitCode=0 Sep 29 12:43:14 crc kubenswrapper[4991]: I0929 12:43:14.522208 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6gdl" event={"ID":"e34045b5-6ef5-4370-b9bb-32829ebf182a","Type":"ContainerDied","Data":"d7a48873e452f1b2888136bf7c33220b753424543ba1bf077f7ce638e535d535"} Sep 29 12:43:14 crc kubenswrapper[4991]: I0929 12:43:14.522244 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6gdl" event={"ID":"e34045b5-6ef5-4370-b9bb-32829ebf182a","Type":"ContainerDied","Data":"ec36fcf040dc5a56dcdfec2f68876c935012adc952f1c4ba3f858829236caafe"} Sep 29 12:43:14 crc kubenswrapper[4991]: I0929 12:43:14.522270 4991 scope.go:117] "RemoveContainer" containerID="d7a48873e452f1b2888136bf7c33220b753424543ba1bf077f7ce638e535d535" Sep 29 12:43:14 crc kubenswrapper[4991]: I0929 12:43:14.522471 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q6gdl" Sep 29 12:43:15 crc kubenswrapper[4991]: I0929 12:43:15.232217 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q6gdl"] Sep 29 12:43:15 crc kubenswrapper[4991]: I0929 12:43:15.234660 4991 scope.go:117] "RemoveContainer" containerID="1592eddaa107fba57b9186389d283378600d11446253b9688d3ba6b1884cc6d3" Sep 29 12:43:15 crc kubenswrapper[4991]: I0929 12:43:15.246918 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q6gdl"] Sep 29 12:43:15 crc kubenswrapper[4991]: I0929 12:43:15.258076 4991 scope.go:117] "RemoveContainer" containerID="d12d36f7f766700aad0ffb2737f822c9cf103eacff6098206d20dce3b9f4ebb5" Sep 29 12:43:15 crc kubenswrapper[4991]: I0929 12:43:15.308183 4991 scope.go:117] "RemoveContainer" containerID="d7a48873e452f1b2888136bf7c33220b753424543ba1bf077f7ce638e535d535" Sep 29 12:43:15 crc kubenswrapper[4991]: E0929 12:43:15.308769 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7a48873e452f1b2888136bf7c33220b753424543ba1bf077f7ce638e535d535\": container with ID starting with d7a48873e452f1b2888136bf7c33220b753424543ba1bf077f7ce638e535d535 not found: ID does not exist" containerID="d7a48873e452f1b2888136bf7c33220b753424543ba1bf077f7ce638e535d535" Sep 29 12:43:15 crc kubenswrapper[4991]: I0929 12:43:15.308806 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7a48873e452f1b2888136bf7c33220b753424543ba1bf077f7ce638e535d535"} err="failed to get container status \"d7a48873e452f1b2888136bf7c33220b753424543ba1bf077f7ce638e535d535\": rpc error: code = NotFound desc = could not find container \"d7a48873e452f1b2888136bf7c33220b753424543ba1bf077f7ce638e535d535\": container with ID starting with d7a48873e452f1b2888136bf7c33220b753424543ba1bf077f7ce638e535d535 not found: ID does not exist" Sep 29 12:43:15 crc kubenswrapper[4991]: I0929 12:43:15.308830 4991 scope.go:117] "RemoveContainer" containerID="1592eddaa107fba57b9186389d283378600d11446253b9688d3ba6b1884cc6d3" Sep 29 12:43:15 crc kubenswrapper[4991]: E0929 12:43:15.309188 4991 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1592eddaa107fba57b9186389d283378600d11446253b9688d3ba6b1884cc6d3\": container with ID starting with 1592eddaa107fba57b9186389d283378600d11446253b9688d3ba6b1884cc6d3 not found: ID does not exist" containerID="1592eddaa107fba57b9186389d283378600d11446253b9688d3ba6b1884cc6d3" Sep 29 12:43:15 crc kubenswrapper[4991]: I0929 12:43:15.309275 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1592eddaa107fba57b9186389d283378600d11446253b9688d3ba6b1884cc6d3"} err="failed to get container status \"1592eddaa107fba57b9186389d283378600d11446253b9688d3ba6b1884cc6d3\": rpc error: code = NotFound desc = could not find container \"1592eddaa107fba57b9186389d283378600d11446253b9688d3ba6b1884cc6d3\": container with ID starting with 1592eddaa107fba57b9186389d283378600d11446253b9688d3ba6b1884cc6d3 not found: ID does not exist" Sep 29 12:43:15 crc kubenswrapper[4991]: I0929 12:43:15.309334 4991 scope.go:117] "RemoveContainer" containerID="d12d36f7f766700aad0ffb2737f822c9cf103eacff6098206d20dce3b9f4ebb5" Sep 29 12:43:15 crc kubenswrapper[4991]: E0929 12:43:15.309719 4991 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d12d36f7f766700aad0ffb2737f822c9cf103eacff6098206d20dce3b9f4ebb5\": container with ID starting with d12d36f7f766700aad0ffb2737f822c9cf103eacff6098206d20dce3b9f4ebb5 not found: ID does not exist" containerID="d12d36f7f766700aad0ffb2737f822c9cf103eacff6098206d20dce3b9f4ebb5" Sep 29 12:43:15 crc kubenswrapper[4991]: I0929 12:43:15.309772 4991 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d12d36f7f766700aad0ffb2737f822c9cf103eacff6098206d20dce3b9f4ebb5"} err="failed to get container status \"d12d36f7f766700aad0ffb2737f822c9cf103eacff6098206d20dce3b9f4ebb5\": rpc error: code = NotFound desc = could not find container \"d12d36f7f766700aad0ffb2737f822c9cf103eacff6098206d20dce3b9f4ebb5\": container with ID starting with d12d36f7f766700aad0ffb2737f822c9cf103eacff6098206d20dce3b9f4ebb5 not found: ID does not exist" Sep 29 12:43:16 crc kubenswrapper[4991]: I0929 12:43:16.941733 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e34045b5-6ef5-4370-b9bb-32829ebf182a" path="/var/lib/kubelet/pods/e34045b5-6ef5-4370-b9bb-32829ebf182a/volumes" Sep 29 12:43:26 crc kubenswrapper[4991]: I0929 12:43:26.927323 4991 scope.go:117] "RemoveContainer" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450" Sep 29 12:43:26 crc kubenswrapper[4991]: E0929 12:43:26.928278 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:43:41 crc kubenswrapper[4991]: I0929 12:43:41.926078 4991 scope.go:117] "RemoveContainer" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450" Sep 29 12:43:41 crc kubenswrapper[4991]: E0929 12:43:41.926916 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:43:56 crc kubenswrapper[4991]: I0929 12:43:56.927702 4991 scope.go:117] "RemoveContainer" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450" Sep 29 12:43:56 crc kubenswrapper[4991]: E0929 12:43:56.930238 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:44:00 crc kubenswrapper[4991]: I0929 12:44:00.195193 4991 generic.go:334] "Generic (PLEG): container finished" podID="949f778d-df38-426a-9ed6-86f2b6e9976c" containerID="bffb9a0973a1856738fedd03ffb5f8b5dfc5e4019d464da1a779f40762156e7b" exitCode=0 Sep 29 12:44:00 crc kubenswrapper[4991]: I0929 12:44:00.195734 4991 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5br8n/must-gather-dfgzx" event={"ID":"949f778d-df38-426a-9ed6-86f2b6e9976c","Type":"ContainerDied","Data":"bffb9a0973a1856738fedd03ffb5f8b5dfc5e4019d464da1a779f40762156e7b"} Sep 29 12:44:00 crc kubenswrapper[4991]: I0929 12:44:00.196580 4991 scope.go:117] "RemoveContainer" containerID="bffb9a0973a1856738fedd03ffb5f8b5dfc5e4019d464da1a779f40762156e7b" Sep 29 12:44:00 crc kubenswrapper[4991]: I0929 12:44:00.287287 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5br8n_must-gather-dfgzx_949f778d-df38-426a-9ed6-86f2b6e9976c/gather/0.log" Sep 29 12:44:09 crc kubenswrapper[4991]: I0929 12:44:09.603523 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5br8n/must-gather-dfgzx"] Sep 29 12:44:09 crc kubenswrapper[4991]: I0929 12:44:09.604741 4991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5br8n/must-gather-dfgzx" podUID="949f778d-df38-426a-9ed6-86f2b6e9976c" containerName="copy" containerID="cri-o://ddcbd6060e2250a8fc8e7991d564c49d2f8d0df201ded0777c5c6dbd5857d1a3" gracePeriod=2 Sep 29 12:44:09 crc kubenswrapper[4991]: I0929 12:44:09.617640 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5br8n/must-gather-dfgzx"] Sep 29 12:44:10 crc kubenswrapper[4991]: I0929 12:44:10.355020 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5br8n_must-gather-dfgzx_949f778d-df38-426a-9ed6-86f2b6e9976c/copy/0.log" Sep 29 12:44:10 crc kubenswrapper[4991]: I0929 12:44:10.358074 4991 generic.go:334] "Generic (PLEG): container finished" podID="949f778d-df38-426a-9ed6-86f2b6e9976c" containerID="ddcbd6060e2250a8fc8e7991d564c49d2f8d0df201ded0777c5c6dbd5857d1a3" exitCode=143 Sep 29 12:44:10 crc kubenswrapper[4991]: I0929 12:44:10.358125 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d21f5ae00e29048c531242d79385060fbabc5660e1e77c6821b99bd7a11ea615" Sep 29 12:44:10 crc kubenswrapper[4991]: I0929 12:44:10.406589 4991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5br8n_must-gather-dfgzx_949f778d-df38-426a-9ed6-86f2b6e9976c/copy/0.log" Sep 29 12:44:10 crc kubenswrapper[4991]: I0929 12:44:10.407623 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5br8n/must-gather-dfgzx" Sep 29 12:44:10 crc kubenswrapper[4991]: I0929 12:44:10.458702 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2p86\" (UniqueName: \"kubernetes.io/projected/949f778d-df38-426a-9ed6-86f2b6e9976c-kube-api-access-d2p86\") pod \"949f778d-df38-426a-9ed6-86f2b6e9976c\" (UID: \"949f778d-df38-426a-9ed6-86f2b6e9976c\") " Sep 29 12:44:10 crc kubenswrapper[4991]: I0929 12:44:10.458849 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/949f778d-df38-426a-9ed6-86f2b6e9976c-must-gather-output\") pod \"949f778d-df38-426a-9ed6-86f2b6e9976c\" (UID: \"949f778d-df38-426a-9ed6-86f2b6e9976c\") " Sep 29 12:44:10 crc kubenswrapper[4991]: I0929 12:44:10.468532 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/949f778d-df38-426a-9ed6-86f2b6e9976c-kube-api-access-d2p86" (OuterVolumeSpecName: "kube-api-access-d2p86") pod "949f778d-df38-426a-9ed6-86f2b6e9976c" (UID: "949f778d-df38-426a-9ed6-86f2b6e9976c"). 
InnerVolumeSpecName "kube-api-access-d2p86". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 12:44:10 crc kubenswrapper[4991]: I0929 12:44:10.562680 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2p86\" (UniqueName: \"kubernetes.io/projected/949f778d-df38-426a-9ed6-86f2b6e9976c-kube-api-access-d2p86\") on node \"crc\" DevicePath \"\"" Sep 29 12:44:10 crc kubenswrapper[4991]: I0929 12:44:10.697676 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/949f778d-df38-426a-9ed6-86f2b6e9976c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "949f778d-df38-426a-9ed6-86f2b6e9976c" (UID: "949f778d-df38-426a-9ed6-86f2b6e9976c"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 29 12:44:10 crc kubenswrapper[4991]: I0929 12:44:10.768980 4991 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/949f778d-df38-426a-9ed6-86f2b6e9976c-must-gather-output\") on node \"crc\" DevicePath \"\"" Sep 29 12:44:10 crc kubenswrapper[4991]: I0929 12:44:10.927983 4991 scope.go:117] "RemoveContainer" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450" Sep 29 12:44:10 crc kubenswrapper[4991]: E0929 12:44:10.928887 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:44:10 crc kubenswrapper[4991]: I0929 12:44:10.963136 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="949f778d-df38-426a-9ed6-86f2b6e9976c" path="/var/lib/kubelet/pods/949f778d-df38-426a-9ed6-86f2b6e9976c/volumes" Sep 29 12:44:11 crc kubenswrapper[4991]: I0929 12:44:11.370141 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5br8n/must-gather-dfgzx" Sep 29 12:44:23 crc kubenswrapper[4991]: I0929 12:44:23.927732 4991 scope.go:117] "RemoveContainer" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450" Sep 29 12:44:23 crc kubenswrapper[4991]: E0929 12:44:23.929262 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:44:28 crc kubenswrapper[4991]: I0929 12:44:28.961734 4991 scope.go:117] "RemoveContainer" containerID="ddcbd6060e2250a8fc8e7991d564c49d2f8d0df201ded0777c5c6dbd5857d1a3" Sep 29 12:44:28 crc kubenswrapper[4991]: I0929 12:44:28.988201 4991 scope.go:117] "RemoveContainer" containerID="bffb9a0973a1856738fedd03ffb5f8b5dfc5e4019d464da1a779f40762156e7b" Sep 29 12:44:35 crc kubenswrapper[4991]: I0929 12:44:35.925750 4991 scope.go:117] "RemoveContainer" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450" Sep 29 12:44:35 crc kubenswrapper[4991]: E0929 12:44:35.927411 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:44:49 crc kubenswrapper[4991]: I0929 12:44:49.927306 4991 scope.go:117] "RemoveContainer" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450" Sep 29 12:44:49 crc kubenswrapper[4991]: E0929 12:44:49.927971 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.178053 4991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319165-9vmw7"] Sep 29 12:45:00 crc kubenswrapper[4991]: E0929 12:45:00.179277 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34045b5-6ef5-4370-b9bb-32829ebf182a" containerName="extract-content" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.179298 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34045b5-6ef5-4370-b9bb-32829ebf182a" containerName="extract-content" Sep 29 12:45:00 crc kubenswrapper[4991]: E0929 12:45:00.179319 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34045b5-6ef5-4370-b9bb-32829ebf182a" containerName="registry-server" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.179330 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34045b5-6ef5-4370-b9bb-32829ebf182a" containerName="registry-server" Sep 29 12:45:00 crc kubenswrapper[4991]: E0929 12:45:00.179367 4991 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="949f778d-df38-426a-9ed6-86f2b6e9976c" containerName="gather" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.179376 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="949f778d-df38-426a-9ed6-86f2b6e9976c" containerName="gather" Sep 29 12:45:00 crc kubenswrapper[4991]: E0929 12:45:00.179401 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34045b5-6ef5-4370-b9bb-32829ebf182a" containerName="extract-utilities" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.179410 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34045b5-6ef5-4370-b9bb-32829ebf182a" containerName="extract-utilities" Sep 29 12:45:00 crc kubenswrapper[4991]: E0929 12:45:00.179437 4991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949f778d-df38-426a-9ed6-86f2b6e9976c" containerName="copy" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.179445 4991 state_mem.go:107] "Deleted CPUSet assignment" podUID="949f778d-df38-426a-9ed6-86f2b6e9976c" containerName="copy" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.179739 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="949f778d-df38-426a-9ed6-86f2b6e9976c" containerName="copy" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.179765 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="949f778d-df38-426a-9ed6-86f2b6e9976c" containerName="gather" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.179789 4991 memory_manager.go:354] "RemoveStaleState removing state" podUID="e34045b5-6ef5-4370-b9bb-32829ebf182a" containerName="registry-server" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.180992 4991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319165-9vmw7" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.191169 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319165-9vmw7"] Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.199049 4991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.199057 4991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.325568 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9b2f\" (UniqueName: \"kubernetes.io/projected/51e186e7-5dce-4602-b917-c071fef155b4-kube-api-access-x9b2f\") pod \"collect-profiles-29319165-9vmw7\" (UID: \"51e186e7-5dce-4602-b917-c071fef155b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319165-9vmw7" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.325861 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51e186e7-5dce-4602-b917-c071fef155b4-config-volume\") pod \"collect-profiles-29319165-9vmw7\" (UID: \"51e186e7-5dce-4602-b917-c071fef155b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319165-9vmw7" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.326120 4991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/51e186e7-5dce-4602-b917-c071fef155b4-secret-volume\") pod \"collect-profiles-29319165-9vmw7\" (UID: \"51e186e7-5dce-4602-b917-c071fef155b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319165-9vmw7" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.428656 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51e186e7-5dce-4602-b917-c071fef155b4-config-volume\") pod \"collect-profiles-29319165-9vmw7\" (UID: \"51e186e7-5dce-4602-b917-c071fef155b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319165-9vmw7" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.428752 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51e186e7-5dce-4602-b917-c071fef155b4-secret-volume\") pod \"collect-profiles-29319165-9vmw7\" (UID: \"51e186e7-5dce-4602-b917-c071fef155b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319165-9vmw7" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.428844 4991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9b2f\" (UniqueName: \"kubernetes.io/projected/51e186e7-5dce-4602-b917-c071fef155b4-kube-api-access-x9b2f\") pod \"collect-profiles-29319165-9vmw7\" (UID: \"51e186e7-5dce-4602-b917-c071fef155b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319165-9vmw7" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.429669 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51e186e7-5dce-4602-b917-c071fef155b4-config-volume\") pod \"collect-profiles-29319165-9vmw7\" (UID: \"51e186e7-5dce-4602-b917-c071fef155b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319165-9vmw7" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.436264 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51e186e7-5dce-4602-b917-c071fef155b4-secret-volume\") pod \"collect-profiles-29319165-9vmw7\" (UID: \"51e186e7-5dce-4602-b917-c071fef155b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319165-9vmw7" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.448189 4991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9b2f\" (UniqueName: \"kubernetes.io/projected/51e186e7-5dce-4602-b917-c071fef155b4-kube-api-access-x9b2f\") pod \"collect-profiles-29319165-9vmw7\" (UID: \"51e186e7-5dce-4602-b917-c071fef155b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29319165-9vmw7" Sep 29 12:45:00 crc kubenswrapper[4991]: I0929 12:45:00.512582 4991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319165-9vmw7" Sep 29 12:45:01 crc kubenswrapper[4991]: I0929 12:45:01.037043 4991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319165-9vmw7"] Sep 29 12:45:01 crc kubenswrapper[4991]: I0929 12:45:01.970364 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319165-9vmw7" event={"ID":"51e186e7-5dce-4602-b917-c071fef155b4","Type":"ContainerStarted","Data":"97936ef5b26ea466d75ffe7b03885abaf8a330684fd8dfcba8f6e65a117835ae"} Sep 29 12:45:01 crc kubenswrapper[4991]: I0929 12:45:01.971023 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319165-9vmw7" event={"ID":"51e186e7-5dce-4602-b917-c071fef155b4","Type":"ContainerStarted","Data":"400d68cb23753425e503fef84bd4d2b1930cc311b4d539d991b4d79e9b31f083"} Sep 29 12:45:02 crc kubenswrapper[4991]: I0929 12:45:02.021986 4991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29319165-9vmw7" podStartSLOduration=2.021925225 podStartE2EDuration="2.021925225s" podCreationTimestamp="2025-09-29 12:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 12:45:01.991608673 +0000 UTC m=+11237.847536701" watchObservedRunningTime="2025-09-29 12:45:02.021925225 +0000 UTC m=+11237.877853273" Sep 29 12:45:03 crc kubenswrapper[4991]: I0929 12:45:03.999022 4991 generic.go:334] "Generic (PLEG): container finished" podID="51e186e7-5dce-4602-b917-c071fef155b4" containerID="97936ef5b26ea466d75ffe7b03885abaf8a330684fd8dfcba8f6e65a117835ae" exitCode=0 Sep 29 12:45:03 crc kubenswrapper[4991]: I0929 12:45:03.999071 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319165-9vmw7" event={"ID":"51e186e7-5dce-4602-b917-c071fef155b4","Type":"ContainerDied","Data":"97936ef5b26ea466d75ffe7b03885abaf8a330684fd8dfcba8f6e65a117835ae"} Sep 29 12:45:04 crc kubenswrapper[4991]: I0929 12:45:04.941725 4991 scope.go:117] "RemoveContainer" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450" Sep 29 12:45:04 crc kubenswrapper[4991]: E0929 12:45:04.941985 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:45:05 crc kubenswrapper[4991]: I0929 12:45:05.419729 4991 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319165-9vmw7" Sep 29 12:45:05 crc kubenswrapper[4991]: I0929 12:45:05.566669 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51e186e7-5dce-4602-b917-c071fef155b4-config-volume\") pod \"51e186e7-5dce-4602-b917-c071fef155b4\" (UID: \"51e186e7-5dce-4602-b917-c071fef155b4\") " Sep 29 12:45:05 crc kubenswrapper[4991]: I0929 12:45:05.566892 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51e186e7-5dce-4602-b917-c071fef155b4-secret-volume\") pod \"51e186e7-5dce-4602-b917-c071fef155b4\" (UID: \"51e186e7-5dce-4602-b917-c071fef155b4\") " Sep 29 12:45:05 crc kubenswrapper[4991]: I0929 12:45:05.566969 4991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9b2f\" (UniqueName: \"kubernetes.io/projected/51e186e7-5dce-4602-b917-c071fef155b4-kube-api-access-x9b2f\") pod \"51e186e7-5dce-4602-b917-c071fef155b4\" (UID: \"51e186e7-5dce-4602-b917-c071fef155b4\") " Sep 29 12:45:05 crc kubenswrapper[4991]: I0929 12:45:05.568322 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51e186e7-5dce-4602-b917-c071fef155b4-config-volume" (OuterVolumeSpecName: "config-volume") pod "51e186e7-5dce-4602-b917-c071fef155b4" (UID: "51e186e7-5dce-4602-b917-c071fef155b4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 29 12:45:05 crc kubenswrapper[4991]: I0929 12:45:05.575594 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51e186e7-5dce-4602-b917-c071fef155b4-kube-api-access-x9b2f" (OuterVolumeSpecName: "kube-api-access-x9b2f") pod "51e186e7-5dce-4602-b917-c071fef155b4" (UID: "51e186e7-5dce-4602-b917-c071fef155b4"). InnerVolumeSpecName "kube-api-access-x9b2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 29 12:45:05 crc kubenswrapper[4991]: I0929 12:45:05.576743 4991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51e186e7-5dce-4602-b917-c071fef155b4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "51e186e7-5dce-4602-b917-c071fef155b4" (UID: "51e186e7-5dce-4602-b917-c071fef155b4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 29 12:45:05 crc kubenswrapper[4991]: I0929 12:45:05.670492 4991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9b2f\" (UniqueName: \"kubernetes.io/projected/51e186e7-5dce-4602-b917-c071fef155b4-kube-api-access-x9b2f\") on node \"crc\" DevicePath \"\"" Sep 29 12:45:05 crc kubenswrapper[4991]: I0929 12:45:05.670586 4991 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51e186e7-5dce-4602-b917-c071fef155b4-config-volume\") on node \"crc\" DevicePath \"\"" Sep 29 12:45:05 crc kubenswrapper[4991]: I0929 12:45:05.670604 4991 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51e186e7-5dce-4602-b917-c071fef155b4-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 29 12:45:06 crc kubenswrapper[4991]: I0929 12:45:06.042573 4991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29319165-9vmw7" event={"ID":"51e186e7-5dce-4602-b917-c071fef155b4","Type":"ContainerDied","Data":"400d68cb23753425e503fef84bd4d2b1930cc311b4d539d991b4d79e9b31f083"} Sep 29 12:45:06 crc kubenswrapper[4991]: I0929 12:45:06.042633 4991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="400d68cb23753425e503fef84bd4d2b1930cc311b4d539d991b4d79e9b31f083" Sep 29 12:45:06 crc kubenswrapper[4991]: I0929 12:45:06.042754 4991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29319165-9vmw7" Sep 29 12:45:06 crc kubenswrapper[4991]: I0929 12:45:06.091210 4991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319120-w5k9w"] Sep 29 12:45:06 crc kubenswrapper[4991]: I0929 12:45:06.105435 4991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29319120-w5k9w"] Sep 29 12:45:06 crc kubenswrapper[4991]: I0929 12:45:06.940368 4991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24c8a25f-3b4b-4750-b199-ea2c15bcd236" path="/var/lib/kubelet/pods/24c8a25f-3b4b-4750-b199-ea2c15bcd236/volumes" Sep 29 12:45:15 crc kubenswrapper[4991]: I0929 12:45:15.927651 4991 scope.go:117] "RemoveContainer" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450" Sep 29 12:45:15 crc kubenswrapper[4991]: E0929 12:45:15.928920 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:45:28 crc kubenswrapper[4991]: I0929 12:45:28.927242 4991 scope.go:117] "RemoveContainer" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450" Sep 29 12:45:28 crc kubenswrapper[4991]: E0929 12:45:28.928081 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:45:29 crc kubenswrapper[4991]: I0929 12:45:29.079428 4991 scope.go:117] "RemoveContainer" containerID="a846b64e36801ce73ea6e55bd503308263efd9464807f0e6e9802ff0c1f1ef32" Sep 29 12:45:29 crc kubenswrapper[4991]: I0929 12:45:29.108404 4991 scope.go:117] "RemoveContainer" containerID="e2d76711d72dd80abb9c34d462850c919c0aa473b14700af182b41e1642be5f6" Sep 29 12:45:41 crc kubenswrapper[4991]: I0929 12:45:41.927265 4991 scope.go:117] "RemoveContainer" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450" Sep 29 12:45:41 crc kubenswrapper[4991]: E0929 12:45:41.928866 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:45:53 crc kubenswrapper[4991]: I0929 12:45:53.926569 4991 scope.go:117] "RemoveContainer" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450" Sep 29 12:45:53 crc kubenswrapper[4991]: E0929 12:45:53.927595 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d" Sep 29 12:46:07 crc kubenswrapper[4991]: I0929 12:46:07.926470 4991 scope.go:117] "RemoveContainer" containerID="fd9d203cd421ade519c3ca5da374c6c33cbae2dc823d613f28bd50e41d70a450" Sep 29 12:46:07 crc kubenswrapper[4991]: E0929 12:46:07.927413 4991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgbqk_openshift-machine-config-operator(88ab660a-6f01-4538-946f-38cdadd0b64d)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgbqk" podUID="88ab660a-6f01-4538-946f-38cdadd0b64d"